In this video, researchers from Utrecht University use grid computing to digitally modify molecules found in cone snail venom in an effort to develop new anesthetics. The grid allows them to run large numbers of trial-and-error tests extremely quickly in the search for the molecular shape that best fits the pain receptors in humans.
This week Minnesota startup Silicon Informatics has been awarded a Small Business Technology Transfer (STTR) contract by the U.S. Army Research Office to advance scalable parallel random number generation technology into products for HPC applications. Scholars from The University of Texas at San Antonio and Florida State University will participate in the research, which will ultimately lead to the development and commercialization of software tools that help applications realistically mimic complex phenomena.
“The extent to which computer modeling can reflect reality is often limited by the quality and scalability of the random number generation methods. The random number generator and the quality evaluation tool developed in this project will help remove this limitation,” said Boppana. “We feel very privileged to be selected by Silicon Informatics for this research and expect the methods we create to be applicable to a wide range of industries that model complex behaviors, from entertainment and finance to science and engineering.”
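For readers unfamiliar with the technique, the core of scalable parallel random number generation is giving every process its own stream that is statistically independent of all the others, so simulation quality does not degrade as the job spreads across thousands of cores. Here is a minimal sketch of that idea using NumPy's seed-spawning; it is purely illustrative and is not Silicon Informatics' technology.

```python
# Minimal sketch of scalable parallel random number streams using NumPy's
# SeedSequence.spawn(), which derives statistically independent child seeds.
# Illustrates the general technique only; not Silicon Informatics' product.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def monte_carlo_pi(seed_seq, n_samples=1_000_000):
    """Estimate pi from one worker's independent random stream."""
    rng = np.random.default_rng(seed_seq)
    x = rng.random(n_samples)
    y = rng.random(n_samples)
    return 4.0 * np.mean(x * x + y * y <= 1.0)

if __name__ == "__main__":
    n_workers = 8
    # One parent seed, spawned into non-overlapping child streams per worker.
    children = np.random.SeedSequence(12345).spawn(n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        estimates = list(pool.map(monte_carlo_pi, children))
    print("pi estimate:", sum(estimates) / n_workers)
```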
Melting ice caps are not a new phenomenon, but the causes of past deglaciations have remained a mystery. Now, researchers are using ORNL supercomputers to pinpoint the causes of the last deglaciation on planet Earth.
The simulations, conducted by Feng He and Zhengyu Liu of UW-Madison and Bette Otto-Bliesner of NCAR, help to recreate the climate during the first half of the last deglaciation period and identify why temperatures and deglaciation rates differed between the hemispheres. The research builds on earlier simulations performed at ORNL and featured in Science in 2009 and Nature in 2012. Their latest finding detailing ocean circulation as the primary cause of early deglacial warming in the Southern Hemisphere appears in the February 7 issue of Nature.
The OLCF has given the project nearly four years of continuous access, allowing the team to run climate simulations spanning 22,000 years and produce nearly 300 terabytes of data.
Over at CIO Australia, Hamish Barwick writes that newly developed algorithms could lower energy bills in HPC datacenters.
According to Professor Albert Zomaya at the University of Sydney, the University has patented a “very sophisticated” algorithm that manages energy consumption by manipulating voltages at the processor level.
“We know that modern processors can operate at different voltage levels and by manipulating these voltages we are able to run a workload without compromising the execution time or the quality of service while at the same time reducing the energy consumption of the platform,” he said. “From the results we obtained from our extensive algorithm simulations we can see that, depending on the nature of the [HPC] application, the savings can run from five per cent to 35 per cent.”
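The lever behind savings of this kind is dynamic voltage and frequency scaling (DVFS): dynamic power grows roughly with the square of the supply voltage times the clock frequency, so running a job at the slowest setting that still meets its deadline can cut energy substantially. The following is a toy sketch of that trade-off under textbook assumptions, not the University of Sydney's patented algorithm; the operating points and the greedy policy are invented for illustration.

```python
# Toy DVFS illustration: pick the slowest processor setting that still meets a
# task's deadline and compare energy use. Uses the standard approximation that
# dynamic power ~ C * V^2 * f; all numbers are illustrative, and this is not
# the University of Sydney's patented algorithm.

# (frequency in GHz, voltage in volts) operating points, fastest first
P_STATES = [(3.0, 1.20), (2.4, 1.05), (1.8, 0.95), (1.2, 0.85)]
CAPACITANCE = 1.0  # arbitrary constant, cancels out in the comparison

def energy_for_task(cycles, freq_ghz, volts):
    runtime = cycles / (freq_ghz * 1e9)          # seconds
    power = CAPACITANCE * volts**2 * freq_ghz    # arbitrary units
    return power * runtime, runtime

def pick_state(cycles, deadline_s):
    """Greedy policy: slowest (lowest-energy) state that still meets the deadline."""
    best = P_STATES[0]                           # fall back to the fastest state
    for freq, volts in sorted(P_STATES):         # slowest first
        if cycles / (freq * 1e9) <= deadline_s:
            best = (freq, volts)
            break
    energy, _ = energy_for_task(cycles, *best)
    return best[0], best[1], energy

cycles, deadline = 3.6e9, 2.5                    # a 3.6-billion-cycle task, 2.5 s deadline
e_max, _ = energy_for_task(cycles, *P_STATES[0])
freq, volts, e_dvfs = pick_state(cycles, deadline)
print(f"chosen: {freq} GHz @ {volts} V, energy saving {100 * (1 - e_dvfs / e_max):.0f}%")
```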
Zomaya went on to say that they plan to have hardware properly tuned to deal with different case studies. Read the Full Story.
This week Indiana University announced it has received a $1.1 million federal grant to develop faster supercomputer software. The Center for Research in Extreme Scale Technologies (CREST) will use the DOE funding to improve the speed and programmability of supercomputing software. The funding is part of a $7.05 million grant for the XPRESS (eXascale PRogramming Environment and System Software) project, led by Sandia National Laboratories as part of the DOE Office of Science Advanced Scientific Computing Research X-Stack program.
IU created the CREST program in 2011 as part of the Pervasive Technology Institute to pioneer research at the frontiers of exascale computing. Two of supercomputing’s foremost thinkers, Andrew Lumsdaine and Thomas Sterling, both professors in the School of Informatics and Computing at IU Bloomington, lead CREST as director and associate director, respectively. Sterling also serves as CREST chief scientist.
“We’re writing software that moves execution from static to dynamic, allowing supercomputers to use new information as it is being revealed,” said Sterling, chief scientist on the project. “By doing so, supercomputers will ‘think’ about how they use their resources, as well as where and when they schedule various concurrent tasks.”
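A minimal way to picture the static-versus-dynamic distinction: instead of assigning work to resources in a fixed order decided up front, a dynamic runtime launches follow-up tasks the moment their inputs become available. The small Python-futures sketch below is conceptual only and is in no way the XPRESS runtime software itself.

```python
# Minimal sketch of dynamic task scheduling: follow-up work is launched as soon
# as each input task finishes, rather than in a fixed, pre-planned order.
# Conceptual illustration only; the actual XPRESS system software is far more
# sophisticated.
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def simulate_chunk(chunk_id):
    time.sleep(random.uniform(0.05, 0.3))   # uneven, unpredictable work
    return chunk_id, chunk_id ** 2

def postprocess(chunk_id, value):
    return f"chunk {chunk_id} -> {value}"

with ThreadPoolExecutor(max_workers=4) as pool:
    pending = [pool.submit(simulate_chunk, i) for i in range(8)]
    results = []
    # Dynamic scheduling: react to whichever task finishes first instead of
    # waiting on a statically chosen one.
    for done in as_completed(pending):
        chunk_id, value = done.result()
        results.append(pool.submit(postprocess, chunk_id, value))
    for r in results:
        print(r.result())
```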
This week Fujitsu Laboratories announced the development of transceiver circuits capable of communicating at 32 Gbps, a world record. The company said the new technology will support inter-processor communications at roughly twice today’s rates, leading to improved performance in the next generation of servers and supercomputers.
Figure 2: Schematic of transmitter circuit and breakdown of power consumption
Transmitter circuits transmit data from multiple channels that have been multiplexed into a single channel. The final-stage multiplexer not only consumes a considerable amount of power, but will also approach the limit of its operating speed as data rates increase. Fujitsu Laboratories has developed a transmitter circuit that eliminates the need for a final-stage multiplex circuit (2-to-1 multiplexer). Rather than using conventional binary values (0, 1) in the transmitted signals, the new circuit uses ternary values (0, 1, 2). This makes it possible to restore the original data on the receiving end using only the existing receiver circuit functionality, without having to add any special circuitry (Figure 2, left). As a result, the circuit exceeds the speed limit of conventional transmitter units, and eliminating the multiplexer also reduces power consumption by roughly 30% compared to the existing technology (Figure 2, right).
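One way to see how a ternary signal can carry two binary streams without the final 2-to-1 multiplexer: if the two half-rate streams are offset by half a symbol period and simply summed, every full-rate sample mixes one settled bit with one freshly updated bit, and the receiver can peel the two streams apart one sample at a time. The back-of-the-envelope model below illustrates that general principle only; it is not Fujitsu's circuit design.

```python
# Back-of-the-envelope model of ternary (0/1/2) signaling carrying two binary
# half-rate streams, a and b, offset by half a symbol period. Each full-rate
# sample is the sum of the current a bit and the most recently settled b bit,
# so the receiver can recover both streams sequentially. Illustration of the
# general idea only, not Fujitsu's transmitter design.

def encode(a_bits, b_bits):
    samples, b_prev = [], 0              # assume the line starts with b = 0
    for a, b in zip(a_bits, b_bits):
        samples.append(a + b_prev)       # first half of the symbol: b not yet updated
        samples.append(a + b)            # second half: b has switched
        b_prev = b
    return samples

def decode(samples):
    a_bits, b_bits, b_prev = [], [], 0
    for i in range(0, len(samples), 2):
        a = samples[i] - b_prev          # subtract the known, settled b bit
        b = samples[i + 1] - a           # then recover the new b bit
        a_bits.append(a)
        b_bits.append(b)
        b_prev = b
    return a_bits, b_bits

a = [1, 0, 1, 1, 0, 1]
b = [0, 1, 1, 0, 0, 1]
line = encode(a, b)                      # every sample is 0, 1, or 2
assert decode(line) == (a, b)
print(line)
```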
Will tomorrow’s supercomputers leverage superconductivity to go even faster? Over at Military Aerospace Electronics, John Keller writes that the IARPA Cryogenic Computing Complexity (C3) program seeks to substitute superconducting computing and superconducting switching for computing systems based on complementary metal-oxide-semiconductor (CMOS) switching devices and metal interconnects.
IARPA expects that the C3 program will be a five-year, two-phase program. The first phase will last for three years and develop the technologies necessary to demonstrate a small superconducting processor. The second phase, which will last two years, will integrate those new technologies into a small-scale working model of a superconducting computer.
A formal solicitation for the IARPA C3 program should be released to industry on or before briefings scheduled for March 12, 2013. Read the Full Story.
This week NERSC announced the winners of their inaugural HPC Achievement Awards at their User Group meeting at the Lawrence Berkeley National Laboratory. The awardees are all NERSC users who have either demonstrated an innovative use of HPC resources to solve a scientific problem, or whose work has had an exceptional impact on scientific understanding or society.
“High performance computing is changing how science is being done, and facilitating breakthroughs that would have been impossible a decade ago,” says NERSC Director Sudip Dosanjh. “The 2013 NERSC Achievement Award winners highlight some of the ways this trend is expanding our fundamental understanding of science, and how we can use this knowledge to benefit humanity.”
In an effort to encourage young scientists who use HPC in their research, NERSC also presented two early career awards. The winners include Jeff Grossman, David Cohen, Tanmoy Das, Peter Nugent, and Edgar Solomonik. Read the Full Story.
In this video, Ankita Kejriwal from Stanford presents: The RAMCloud Project.
The RAMCloud project is creating a new class of storage, based entirely in DRAM, that is 2-3 orders of magnitude faster than existing storage systems. If successful, it will enable new applications that manipulate large-scale datasets much more intensively than has ever been possible before. In addition, we think RAMCloud, or something like it, will become the primary storage system for cloud computing environments such as Amazon’s AWS and Microsoft’s Azure.
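In its simplest form, the storage model RAMCloud targets is a key-value store whose primary copy of every object lives in DRAM, with writes also appended to a log so data survives a crash. The toy sketch below captures only that basic model; the real RAMCloud adds replication, fast crash recovery, and low-latency networking, and none of the names here are RAMCloud's actual API.

```python
# Toy sketch of a RAMCloud-style storage model: the primary copy of every
# object lives in memory (a dict standing in for DRAM), and each write is also
# appended to a log so data can be rebuilt after a crash. Class and method
# names are hypothetical; this is not RAMCloud's API.
import json

class MiniRamStore:
    def __init__(self, log_path="ramcloud_log.jsonl"):
        self.memory = {}                 # all reads served from DRAM
        self.log_path = log_path

    def write(self, table, key, value):
        self.memory[(table, key)] = value
        with open(self.log_path, "a") as log:   # durability via an append-only log
            log.write(json.dumps({"table": table, "key": key, "value": value}) + "\n")

    def read(self, table, key):
        return self.memory[(table, key)]        # no disk access on the read path

    def recover(self):
        """Rebuild the in-memory copy by replaying the log after a restart."""
        self.memory.clear()
        with open(self.log_path) as log:
            for line in log:
                rec = json.loads(line)
                self.memory[(rec["table"], rec["key"])] = rec["value"]

store = MiniRamStore()
store.write("users", "alice", {"visits": 3})
print(store.read("users", "alice"))
```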
Over at MIT Technology Review, David Talbot writes that researchers at IBM have assembled 10,000 carbon nanotube transistors on a silicon chip, research that points toward a possible new way of continuing to produce smaller, faster, more efficient computers.
In the samples the researchers have created so far, the nanotube transistors are about 150 nanometers apart. They’ll have to get closer if the new technology is to beat today’s silicon transistors and keep ahead of improved generations over the next decade. “We need to lay down a single layer of carbon nanotubes spaced a few nanometers apart,” says Supratik Guha, director of physical sciences at the lab. His group must also work out how to add individual electrical contacts, envisioned as atomic-scale vertical posts, to each of billions of transistors; right now the wafer acts as the gate switching the nanotubes on and off.
Over at IT World, Joab Jackson writes that Python just got a big data boost from DARPA with a $3 million award to software provider Continuum Analytics. The funding will help foster the development of Python’s data processing and visualization capabilities for big data jobs.
The money will go toward developing new techniques for data analysis and for visually portraying large, multi-dimensional data sets. The work aims to extend beyond the capabilities offered by the NumPy and SciPy Python libraries, which are widely used by programmers for mathematical and scientific calculations, respectively. More mathematically centered languages such as the R statistical language might seem better suited for big-data number crunching, but Python offers the advantage of being easy to learn.
The work is part of DARPA’s XData research program, a four-year, $100 million effort to give the Defense Department and other U.S. government agencies tools to work with large amounts of sensor data and other forms of big data. Read the Full Story.
In this video from PyData NYC 2012, Stephen Diehl from Continuum Analytics presents on Blaze, a next-generation NumPy designed as a foundational set of abstractions on which to build out-of-core and distributed algorithms. Blaze generalizes many of the ideas found in popular PyData projects such as NumPy, Pandas, and Theano into one generalized data structure. Together with a powerful array-oriented virtual machine and runtime, Blaze will be capable of performing efficient linear algebra and indexing operations on top of a wide variety of data backends.
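Out-of-core computation, one of the problems Blaze targets, means streaming over a dataset in chunks that fit in memory instead of loading the whole array at once. Here is a small NumPy-only sketch of that pattern; it is not Blaze's API, and the file name is made up for the example.

```python
# Small NumPy-only sketch of out-of-core computation: a dataset too large to
# hold in RAM is processed in fixed-size chunks from a memory-mapped file.
# This shows the pattern Blaze generalizes; it is not Blaze's API.
import numpy as np

# Create a sample on-disk array (stand-in for a dataset larger than memory).
data = np.memmap("big_array.dat", dtype=np.float64, mode="w+", shape=(10_000_000,))
data[:] = np.random.default_rng(0).random(10_000_000)
data.flush()

def chunked_mean(path, n, chunk_size=1_000_000):
    """Compute the mean by streaming fixed-size chunks, never loading it all."""
    arr = np.memmap(path, dtype=np.float64, mode="r", shape=(n,))
    total = 0.0
    for start in range(0, n, chunk_size):
        total += arr[start:start + chunk_size].sum()
    return total / n

print(chunked_mean("big_array.dat", 10_000_000))
```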
NCSA reports that simulations carried out using the Blue Waters petascale supercomputer have determined the structure of the rabbit hemorrhagic disease virus (RHDV), which causes a highly infectious and often fatal illness in domestic and wild rabbits. The research, carried out collaboratively by researchers at the University of Illinois, the University of California-San Diego, and several Chinese research institutions, has been published in the journal PLOS Pathogens.
“The structure of the capsid of RHDV could only be achieved through a 9,891,665-atom NAMD simulation,” said University of Illinois biophysicist Klaus Schulten, a co-author of the published study. “The computational strategy adopted would have been inconceivable before the advent of Blue Waters due to the needed large simulation size. This study demonstrates clearly that Blue Waters is a research instrument for mainstream life science!”
Schulten received a Petascale Computing Resource Allocation from the National Science Foundation that enabled his research team to prepare NAMD for extreme-scale supercomputers and to tap into the computing and data power of Blue Waters. His group is currently using Blue Waters to conduct a 24 million-atom simulation of a photosynthetic membrane that harvests sunlight and a 65 million-atom simulation of another capsid, this time the protein capsule that encases HIV. Read the Full Story.
Over at the Joint Institute for Computational Sciences, Scott Gibson writes that researchers have made a significant breakthrough in fusion energy research using supercomputers at ORNL. A multi-institutional team led by Predrag S. Krstic of the JICS and Jean Paul Allain of Purdue University has answered the question of how the behavior of plasma—the extremely hot gases of nuclear fusion—can be controlled with ultra-thin lithium films on graphite walls lining thermonuclear magnetic fusion devices.
“How lithium coatings on graphite surfaces control plasma behavior has largely remained a mystery until our team was able to combine predictions from quantum-mechanical supercomputer simulations on the Kraken and Jaguar systems at Oak Ridge National Laboratory and in situ experimental results from the Purdue group to explain the causes of the delicate tunability of plasma behavior by a complex lithiated graphitic system,” Krstic said. “Surprisingly, we find that the presence of oxygen in the surface plays the key role in the bonding of deuterium, while lithium’s main role is to bring the oxygen to the surface. Deuterium atoms preferentially bind with oxygen and carbon-oxygen when there is a comparable amount of oxygen to lithium at the surface. That finding well matches a number of controversial experimental results obtained within the last decade.”
How will HPC power personalized medicine in the future? With help from XSEDE consulting and computing resources, researchers have developed finite-element computational protocols to assess the risk of aortic rupture for individual patients, and thereby help guide decisions about surgical intervention.
“We have software to make computational models from medical images of individual patients, which takes into account their aortic wall thickness, slice by slice, in vivo, and from that to predict wall-stress distribution,” said Ender Finol of the University of Texas at San Antonio. “No one else has done this before with this level of accuracy.”
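For a rough sense of the quantity being predicted, the simplest textbook estimate of wall stress in a pressurized vessel is the thin-walled Laplace approximation: hoop stress equals pressure times radius divided by wall thickness. The patient-specific finite-element models described here go far beyond that, but the back-of-the-envelope version below shows why wall thickness and diameter matter so much; all numbers in it are illustrative.

```python
# Back-of-the-envelope estimate of aortic wall stress using the thin-walled
# cylinder (Laplace) approximation: hoop stress = pressure * radius / thickness.
# The finite-element models described above resolve patient-specific geometry
# and wall thickness slice by slice; this toy calculation only shows why wall
# thickness and diameter drive rupture risk. All numbers are illustrative.

def hoop_stress_kpa(pressure_mmhg, radius_mm, thickness_mm):
    pressure_kpa = pressure_mmhg * 0.133322      # convert mmHg to kPa
    return pressure_kpa * radius_mm / thickness_mm

# Systolic pressure of 120 mmHg, a 27 mm radius aneurysm, 1.5 mm wall
print(f"{hoop_stress_kpa(120, 27.0, 1.5):.0f} kPa")   # ~288 kPa

# The same sac with a locally thinned 1.0 mm wall carries ~50% more stress
print(f"{hoop_stress_kpa(120, 27.0, 1.0):.0f} kPa")   # ~432 kPa
```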
Finol is currently conducting further research on the Blacklight supercomputer at the Pittsburgh Supercomputing Center. Each patient analysis requires geometry reconstruction and meshing with nearly three million degrees of freedom for a CSS simulation. Using the shared-memory version of ADINA, Jana has found that the problem runs most efficiently at eight cores, with up to 32 cores used for faster time to solution. Read the Full Story.