Argonne National Lab just wrapped up a two-day event celebrating 30 years of parallel computing. The event hosted many of the visionaries at the lab and at other institutions who initiated and contributed to Argonne’s history of advancing parallel computing and computational science.
Argonne National Laboratory seeks solutions to pressing national problems in science and technology. The nation’s first national laboratory, Argonne conducts leading-edge basic and applied scientific research in virtually every scientific discipline. Argonne researchers work closely with researchers from hundreds of companies, universities, and federal, state and municipal agencies to help them solve their specific problems, advance America’s scientific leadership and prepare the nation for a better future.
The tradition continues as Argonne explores new paths and paves the way toward exascale computing. Read the Full Story.
Today Italian HPC solution provider NICE announced the release of the EnginFrame 2013.0 software. Designed for technical computing users in a broad range of markets, EnginFrame simplifies engineering and scientific workflows, increasing productivity and streamlining data and resource management.
“With EnginFrame 2013.0 we have further strengthened our technology leadership in the HPC Portal market,” said Giuseppe Ugolotti, CEO of NICE. “NICE EnginFrame is a critical component for anyone who wants to create a technical Cloud that can run both HPC and interactive workloads at the same time.”
As an HPC Portal, EnginFrame 2013.0 now offers built-in management of 3D and 2D remote visualization sessions, improved data transfer capabilities and a great number of new features and enhancements addressing both end users’ and system administrators’ needs. Leveraging all the major HPC job schedulers and remote visualization technologies, EnginFrame translates user clicks into the appropriate actions to submit HPC jobs, create remote visualization sessions, and monitor workloads on distributed resources.
Today Mellanox announced plans to acquire photonics leader Kotura, Inc. for approximately $82 million. The acquisition is expected to expand Mellanox’s ability to deliver cost-effective, high-speed networks with next generation optical connectivity, allowing data center customers to meet the growing demands of high-performance, Web 2.0, cloud, data center, database, financial services and storage applications. Mellanox believes that the Kotura acquisition will enhance its ability to provide leading technologies for high speed, scalable and efficient end-to-end interconnect solutions.
“Operating networks at 100 Gigabit per second rates and higher requires careful integration between all parts of the network. We believe that silicon photonics is an important component in the development of 100 Gigabit InfiniBand and Ethernet solutions, and that owning and controlling the technology will allow us to develop the best, most reliable solution for our customers,” said Eyal Waldman, president, CEO and chairman of Mellanox Technologies. “We expect that the proposed acquisition of Kotura’s technology and the additional development team will better position us to produce 100Gb/s and faster interconnect solutions with higher-density optical connectivity at a lower cost. We welcome the great talent from Kotura and look forward to their contribution to Mellanox’s continued growth.”
Think of digital computers, the Internet, lasers, and genome sequencing, all of which are underpinned by basic science, and all of which received federal funding in their early stages. The silliest part of the proposed legislation is that it mandates that the research be “groundbreaking,” an attribute that is impossible to predict. It’s like saying that unless the research will win a Nobel Prize, it’s not worth doing. Such wording reflects a fundamental misunderstanding of how science works.
Over at the Xcelerit Blog, Jörg Lotze and Hicham Lahlou write that code portability is the key to success in a hybrid computing world with so many available processing architectures.
Compromises are therefore common: typically, easy maintenance is favoured and performance is sacrificed. That is, the code is developed for a standard CPU and not optimised for any particular platform, because maintaining separate code bases for different accelerator processors is a difficult task and the benefit is either not known beforehand or does not justify the effort. The best solution, however, would be a single code base that is easy to maintain, written in such a way that it can run on a wide variety of hardware platforms – for example using the Xcelerit SDK. This makes it possible to exploit hybrid hardware configurations to best advantage while remaining portable to future platforms.
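The pattern is easy to illustrate outside any particular product. The sketch below is not the Xcelerit SDK’s API; it is a minimal Python illustration, assuming NumPy for the CPU path and CuPy as a hypothetical optional GPU backend, of a single kernel source serving multiple platforms:

```python
import math

import numpy as np

# CuPy is optional in this sketch: if it is installed and a GPU is
# present, the same kernel source below runs on the device; otherwise
# only the NumPy (CPU) backend is exercised.
try:
    import cupy as cp
    backends = [np, cp]
except ImportError:
    backends = [np]

def discounted_call_payoff(xp, spots, strike, rate, t):
    """One kernel, written once against the array module 'xp'.

    With xp=numpy this runs on a standard CPU; with xp=cupy the
    identical source runs on a GPU -- a single maintainable code base
    instead of one hand-optimised variant per accelerator.
    """
    payoff = xp.maximum(spots - strike, 0.0)
    return math.exp(-rate * t) * float(payoff.mean())

spots = np.random.lognormal(mean=4.6, sigma=0.2, size=1_000_000)

for xp in backends:
    price = discounted_call_payoff(xp, xp.asarray(spots), 100.0, 0.05, 1.0)
    print(f"{xp.__name__}: {price:.4f}")
```

The only per-platform difference is which array module is passed in; the kernel itself never changes, which is the maintainability argument the authors are making.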
Over at NICS, Scott Gibson writes that researchers have applied HPC to produce a highly efficient graphics engine that reveals in 3D what’s going on in very complicated astrophysical flows. These simulations also allow researchers to present their results to a wider audience.
McKinney and his research team colleagues convey in a recent Science paper how, through the use of simulations, they discovered that the behavior of black holes with thick accretion disks differs from longstanding assumptions. The belief has been that accretion disks lie flat along the outer edges of black holes while the relativistic jets shoot out perpendicular to the disks. However, the simulations showed that the configuration becomes more complex at large distances from the black hole spin axis, with the jets becoming parallel to, but offset from, the accretion disk’s rotational axis; in the process, the disk warps and the jet bends, influencing what one sees at different viewing angles. McKinney explained that the key to making this discovery was being able to reduce the symmetry of the problem in their numerical code. To do that, the researchers used spherical polar coordinates, which describe a position by a radius and two angles. As a result of their approach, they were able to capture the black hole’s asymmetrical shape.
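For readers unfamiliar with the coordinate system, here is a small Python illustration (not the team’s simulation code) of how a point given by a radius and two angles maps to Cartesian space; because the two angles vary independently, nothing in such a grid forces axisymmetry:

```python
import numpy as np

def spherical_to_cartesian(r, theta, phi):
    """Map spherical polar coordinates -- radius r, polar angle theta
    (measured from the spin axis) and azimuthal angle phi -- to
    Cartesian (x, y, z). Since theta and phi are independent, a grid
    in (r, theta, phi) can represent a warped disk or a bent, offset
    jet without any symmetry assumption."""
    x = r * np.sin(theta) * np.cos(phi)
    y = r * np.sin(theta) * np.sin(phi)
    z = r * np.cos(theta)
    return x, y, z

# Example: a point 10 radii out, 30 degrees off the polar axis,
# a quarter turn around it.
print(spherical_to_cartesian(10.0, np.radians(30.0), np.radians(90.0)))
```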
Over at The Genesis Block, “Phillip Archer” writes that the bitcoin network is now eight times more powerful than the TOP500 supercomputers combined.
While aggregated compute cycles on a network are a far cry from a supercomputer, the comparison does show the remarkable growth of the bitcoin network.
Interestingly, the estimate may still be useful for gauging how well other supercomputers and distributed computing projects would fare at mining bitcoins. Their speed is measured in FLOPS, but they are also capable of the integer operations used in hashing. What would happen if the top 10 supercomputers all switched to bitcoin mining? How much would that affect the network? Let’s reverse the equation and say that they would produce one hash for every 12.7k FLOPs. The fastest computer, Sequoia, would come in at about 1.6% of the bitcoin network. The top 10 machines’ combined speed of 48 petaFLOPS is roughly equivalent to 5% of the bitcoin network, and the top 500 supercomputers combined amount to about 12%.
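The arithmetic is easy to reproduce. The sketch below uses the article’s own conversion rate; the Linpack figures and the network hash rate (back-derived from the stated 1.6% for Sequoia) are our assumptions, not numbers quoted in the article:

```python
# The article's rough conversion: one SHA-256 hash is costed at
# about 12.7k floating-point operations.
FLOPS_PER_HASH = 12_700

# Assumed network hash rate (~80 TH/s), back-derived from the
# article's 1.6% figure for Sequoia; not quoted in the article.
NETWORK_HASH_RATE = 80e12  # hashes per second

def share_of_network(flops):
    """Fraction of the bitcoin network a machine of the given
    sustained FLOPS could supply at one hash per 12.7k FLOPs."""
    return (flops / FLOPS_PER_HASH) / NETWORK_HASH_RATE

# Approximate Linpack figures for the period (assumptions).
systems = {
    "Sequoia": 16.3e15,
    "Top 10 combined": 48e15,
    "TOP500 combined": 123e15,
}
for name, flops in systems.items():
    print(f"{name}: {share_of_network(flops):.1%} of the network")
# Yields roughly 1.6%, 4.7% and 12.1% -- in line with the article's
# 1.6%, ~5% and ~12%.
```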
According to Wikipedia, Bitcoin is accepted in trade by merchants and individuals in many parts of the world. The processing of bitcoin transactions is secured by servers called Bitcoin miners, which communicate over an internet-based network and confirm transactions by adding them to a ledger that is updated and archived periodically. In addition to archiving transactions, each new ledger update creates some newly minted bitcoins.
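The hashing work that miners perform can be sketched in a few lines. This is a toy illustration only, assuming a free-form header; real bitcoin mining uses a fixed 80-byte block header and a full 256-bit difficulty target, but the brute-force loop has the same shape:

```python
import hashlib

def mine(header, zero_bits=16):
    """Toy proof-of-work: search for a nonce whose double SHA-256,
    appended to the header, falls below a difficulty target (here,
    requiring the top 'zero_bits' bits of the hash to be zero)."""
    target = 1 << (256 - zero_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(hashlib.sha256(
            header + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"ledger update: tx1, tx2, tx3")
print(f"nonce={nonce} hash={digest}")
```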
Over at HPC for Energy, Carl Bauer writes that High Performance Computing is the key to meeting the daunting energy challenges that face the nation.
U.S. high-performance computing capabilities resident at our national laboratories can turn these challenges into an opportunity for competitive advantage. What was once only available for unique, extremely important and expensive government research projects or the largest corporations is now available to benefit society on a greater scale. Furthermore, the breadth and depth of an educated and talented work force to utilize these tools is expanding. The world-wide competitive advantage this will provide is beginning to be realized across various domestic and international industry sectors. The HPC for Energy initiative is a very important and timely program that can accelerate the realization of the benefits of better-informed deployment of HPC across all aspects of the U.S. energy supply chain.
With ISC’13 coming up in June, a number of ancillary events have been scheduled in Leipzig to take advantage of this annual gathering of over 2500 supercomputing professionals.
The PRACE Scientific Conference will be held on Sunday, June 16 at the Congress Center Leipzig, Hall 4. Top European scientists will present results and advances in large-scale simulations obtained with the support of PRACE, the Partnership for Advanced Computing in Europe.
The HPC Advisory Council 2013 European Conference takes place on Sunday, June 16 at the Congress Center Leipzig, Hall 5. The workshop will focus on HPC productivity and advanced HPC topics and futures, bringing together system managers, researchers, developers, computational scientists and industry affiliates to discuss recent developments and future advancements in High-Performance Computing.
HP-CAST 20 will take place in Leipzig, Germany on June 14-15 at the Westin Leipzig Hotel. HP-CAST is an organization of HP customers and partners who provide input to HP to increase the capabilities of HP solutions for large-scale, scientific and technical computing.
Moabcon 2013 Europe will be held on June 15-16 at the Westin Leipzig Hotel. As the annual European user group meeting for Adaptive Computing, Moabcon offers in-depth technical sessions on Moab and Torque software.
If your organization is planning a meeting in Leipzig, please let us know and we will list it here.