Over at the Texas Advanced Computing Center, Paromita Pain writes that researchers at the Mount Sinai School of Medicine are using TACC supercomputers to better understand membrane proteins, which play a major role in determining whether medications are effective and whether they have side effects.
According to Dr. Marta Filizola, understanding how opioid receptors work is an important aspect of research at the lab. Her team creates simulations that reveal how proteins (which are never static) interact with drug molecules and with other proteins. These animations help identify the factors that contribute to a molecular-level understanding of the mechanism of action of drugs at individual or oligomeric receptors. This information is used to create more effective medications or to reduce unpleasant side effects.
“Side effects are an important issue,” Filizola says. “We can develop the best pain-curing medication ever, but if it causes addiction, then how good is it really?”
Over at NICS, Scott Gibson writes that climate researchers are using supercomputers to study the rate and extent of the rapid greening of high-elevation landscapes caused by global warming.
The left map depicts areas in Yellowstone National Park that comprise the summer and winter range for the northern elk herd. The map on the right shows the elevation range over which the green wave occurs.
“These types of analysis are beyond the reach of personal computers, and we typically had to work with large, disparate, high-resolution data sets that included vegetation layers, snow water equivalent data, and fine-scale temperature data,” said researcher Karthik Ram of the University of California, Berkeley. “RDAV and NICS made it possible to leverage high-performance computing to model these data in an efficient manner. For example, they have been very supportive in providing the resources to scale my analysis in the R programming language across a large number of cores on Nautilus. As the volume of data continues to grow, facilities like NICS and RDAV will be key to analyzing and drawing meaningful results without drowning in too much information.”
Our Video Sunday feature continues with this interview with Ray Kurzweil, who recently joined Google as Director of Engineering. Speaking with Singularity Hub Founder Keith Kleiner, Kurzweil discusses his new role, how his research interests connect with his latest book, and how technology will advance to produce a “cybernetic friend.”
“The project we plan to do is focused on natural language understanding,” said Kurzweil. “It’s ambitious. In fact there’s no more important project than understanding intelligence and recreating it.”
Over at the HPC Notes blog, Andrew Jones from NAG is out with his HPC predictions for 2013. Will this be the year for high performance computing that is energy-efficient, easy-to-use, and industrially engaged?
Energy efficiency will be driven by the need to find lower-power solutions for exascale-era supercomputers (not just exascale systems but the small departmental petascale systems that will be expected at that time – not to mention consumer-scale devices). It is worth noting that optimizing for power and optimizing for energy may not be the same thing. The technology will also drive the debate – especially the anticipated contest between GPUs and Xeon Phi. And politically, energy-efficient computing sounds better for attracting investment than “HPC technology research”.
Over at New Scientist, Niall Firth writes that the new Oculus Rift gaming headset may be the device that finally brings convincing virtual reality to the home.
In the demo at CES, I walk around the medieval scene using Xbox controllers, able to look all around me just by turning my head. The only downside is a slight feeling of sea-sickness if you move too quickly, something I experience as I try to turn on my heels and move quickly off in a different direction. But the exhilarating feeling it provides is undeniable. Virtual reality might be on its way back, after all.
Over at The Register, Timothy Prickett Morgan writes that the German HLRN consortium is going to have a different brand name and architecture now that Cray has beaten out SGI for a 2.6 petaflop supercomputer.
During the first phase of system construction in the autumn of 2013, the initial XC30 system will go in with 1,488 dual-socket processor nodes sporting the next-generation “Ivy Bridge” Xeon E5 processors from Intel. The assumption is that the top-end Xeon E5 2600 v2 processors will sport ten cores compared to the “Sandy Bridge” v1 chips and their eight cores. So this machine should have a total of 29,760 cores in the initial stage, all linked together using the “Aries” dragonfly interconnect.
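That core count follows directly from the quoted configuration: 1,488 nodes × 2 sockets per node × 10 cores per socket = 29,760 cores.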
In this video with the unfortunate thumbnail, Taylor Kidd from Intel presents an introduction to the hardware architecture of the Intel Xeon Phi coprocessor.
This module covers the intent of the workshop, the type of viewer it is aimed at, a brief look at the hardware architecture of the Intel Xeon Phi coprocessor, the software stack, and programming models. It briefly looks at the roadmap for the Intel Knights products, discusses the software development platform, documentation, and the use of Intel Premier Support, and sets expectations on the capabilities and usage models that are appropriate for the Intel Xeon Phi coprocessor. Lastly, it looks at a brief example of the advantages of the 512-bit vector engine.
IBM has announced a major advance in the ability to use light instead of electrical signals to transmit information for future computing.
The breakthrough technology – silicon nanophotonics – allows the integration of different optical components side-by-side with electrical circuits on a single silicon chip – using, for the first time, sub-100nm semiconductor technology.
Silicon nanophotonics takes advantage of pulses of light for communication and provides a super highway for large volumes of data to move at rapid speeds between computer chips in servers, large data centers, and supercomputers, thus alleviating the limitations of congested data traffic and high-cost traditional interconnects.
“This technology breakthrough is a result of more than a decade of pioneering research at IBM,” said John Kelly, senior vice president and director of IBM Research. “This allows us to move silicon nanophotonics technology into a real-world manufacturing environment that will have impact across a range of applications.”
The amount of data being created and transmitted over enterprise networks continues to grow due to an explosion of new applications and services. Silicon nanophotonics, now primed for commercial development, can enable the industry to keep pace with increasing demands in chip performance and computing power.
Over at The Exascale Report, Editor Mike Bernhardt has posted the Three Noble Truths about Exascale, beginning with the notion that this journey is really not about FLOPS.
Exascale, like the previous quests for teraflops and petaflops, is a journey not to be taken solely for the sake of developing new computing technology. We must not lose sight of the true purpose of our quest for exascale-level computation – the underlying need to move technology forward in order to make possible new scientific advances that will have a profound impact on all aspects of life on this planet.
The International Workshop on Runtime and Operating Systems for Supercomputers (ROSS 2013) has issued its Call for Papers. The event will be held in conjunction with ICS 2013 in Eugene, Oregon, on June 10, 2013.
The complexity of node architectures in supercomputers increases as we cross petaflop milestones on the way towards Exascale. Increasing levels of parallelism in multi- and many-core chips and emerging heterogeneity of computational resources coupled with energy and memory constraints force a reevaluation of our approaches towards operating systems and runtime environments. The ROSS workshop, to be held as a full-day meeting at the ICS 2013 conference in Eugene, Oregon, USA, focuses on principles and techniques to design, implement, optimize, or operate runtime and operating systems for supercomputers and massively parallel machines.
In this video from the Intel Xeon Phi announcement at SC12, Dr. Dan Duffy at NASA Goddard describes the installation of his IBM iDataPlex M4 servers. Using the IBM Intelligent Cluster process, his team was able to complete the installation in 48 hours, along with a Linpack run that landed them at number 52 on the TOP500 supercomputer list.
Over at The Exascale Report, John Barr and Wolfgang Gentzsch review the Uber-Cloud experiment, a project to help researchers explore the end-to-end process for scientists and engineers, from technical challenges to social barriers, as they access remote HPC facilities on which to run their applications.
The goal of the Experiment is to form a community to explore the challenges and benefits of running HPC applications in the cloud, to study the end-to-end process, learn what works (and what doesn’t), and to document the findings to help the next group of potential participants.
The motivation for the project came from a series of conversations between Wolfgang Gentzsch and Burak Yenier, who wanted to better understand the validity of perceived problems of running HPC in the cloud, including privacy, security, unpredictable costs, ease of use, software licensing, and application performance. Read the Full Story or Subscribe to The Exascale Report.
Over at the Nvidia Developer Zone, Mark Harris looks at how to efficiently access device memory, in particular global memory, from within kernels.
Global memory access on the device shares performance characteristics with data access on the host; namely, that data locality is very important. In early CUDA hardware, memory access alignment was as important as locality across threads, but on recent hardware alignment is not much of a concern. On the other hand, strided memory access can hurt performance, which can be alleviated using on-chip shared memory. In the next post we will explore shared memory in detail, and in the post after that we will show how to use shared memory to avoid strided global memory accesses during a matrix transpose.
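To make the cost of strided access concrete, here is a minimal sketch (our own illustration, not code from Harris’s post) of a copy kernel whose stride is a runtime parameter, timed with CUDA events so the effective bandwidth at each stride can be compared. The array size, stride range, and launch configuration are illustrative assumptions.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Copy kernel whose access pattern depends on a runtime stride.
// stride == 1 means consecutive threads touch consecutive addresses
// (fully coalesced); larger strides spread each warp's accesses across
// more memory transactions and waste bus bandwidth.
__global__ void strideCopy(float *out, const float *in, int stride, int n)
{
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;
    if (i < n)
        out[i] = in[i];
}

int main()
{
    const int n = 1 << 24;                 // 16M floats (64 MB per array)
    float *in = nullptr, *out = nullptr;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    for (int stride = 1; stride <= 32; stride *= 2) {
        int elems   = n / stride;          // elements actually touched
        int threads = 256;
        int blocks  = (elems + threads - 1) / threads;

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        strideCopy<<<blocks, threads>>>(out, in, stride, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);

        // Report effective bandwidth (bytes read + written per touched element).
        double gb = 2.0 * elems * sizeof(float) / 1e9;
        printf("stride %2d: %7.3f ms, %6.1f GB/s\n", stride, ms, gb / (ms / 1e3));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
    }

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

On most GPUs the stride-1 case should sustain the highest effective bandwidth, since each warp’s loads and stores coalesce into a handful of memory transactions; as the stride grows, more transactions are needed per warp and bandwidth falls, which is exactly the problem the upcoming shared-memory posts address.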
This next year demands that research scientists deliver a demonstration of the requirements that must be satisfied for effective exaflops computing, along with a determination of whether or not conventional methods can achieve them. Such a result will permit the entire community to work together toward a commonly recognized goal rather than continuing to engage, at cross purposes, in diverse strategies.