Could grass clippings be used to produce fuel for our cars and furnaces? TACC supercomputing resources have helped enable new science at the University of Virginia around biofuel reactions.
Using density functional theory, a quantum mechanical modeling method used in physics and chemistry to investigate the electronic structure of molecules, the researchers used Ranger to calculate the interactions of more than 200 atoms. The simulations helped the group identify an intermediate chemical in the reaction and determine that it was in fact ketenylidene. The acetic acid-to-ketenylidene path combines dehydrogenation (oxidation) and the deoxygenation of the acetate, “which are crucial steps for biomass conversion into more valuable industrial chemicals,” the authors wrote.
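For readers curious what a DFT calculation looks like in code, here is a minimal sketch using the open-source Psi4 package: a single-point energy on an isolated acetic acid molecule. This is purely illustrative; the functional, basis set, and geometry below are our own choices, and the UVA group's actual runs on Ranger involved far larger, surface-bound systems.

```python
# Illustrative only: a single-point DFT energy for acetic acid using the
# open-source Psi4 package (not the UVA group's code or settings).
import psi4

psi4.set_memory("2 GB")

# Approximate CH3COOH geometry in angstroms; charge 0, singlet.
acetic_acid = psi4.geometry("""
0 1
C   0.000   0.000   0.000
C   1.504   0.000   0.000
O   2.130   1.060   0.000
O   2.130  -1.180   0.000
H  -0.380   1.020   0.000
H  -0.380  -0.520   0.880
H  -0.380  -0.520  -0.880
H   3.090  -1.050   0.000
""")

# B3LYP/6-31G* is a common small-molecule DFT recipe -- our assumption
# here, not necessarily the functional/basis used in the paper.
energy = psi4.energy("b3lyp/6-31g*")
print(f"DFT total energy: {energy:.6f} Hartree")
```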
You can’t get too far in any discussion of Big Data without some mention of Hadoop, an open-source software framework that supports data-intensive distributed applications. Now IBM helps us mere mortal HPC folks better understand this powerful tool with a free eBook on Hadoop for Dummies from author Robert D. Schneider.
Enterprises are using technologies such as MapReduce and Hadoop to extract value from Big Data. The results of these efforts are truly mission-critical in size and scope. Properly deploying these vital solutions requires careful planning and evaluation when selecting a supporting infrastructure. In this book, we provide you with a solid understanding of key Big Data concepts and trends, as well as related architectures, such as MapReduce and Hadoop. We also present some suggestions about how to implement high-performance Hadoop.
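The canonical illustration of the MapReduce model the book covers is word count. Here is a minimal sketch of a mapper and reducer written for Hadoop Streaming, one common way to run Python under Hadoop; the filenames are generic examples rather than anything specific to the book.

```python
#!/usr/bin/env python
# mapper.py -- canonical MapReduce word count for Hadoop Streaming.
# Hadoop pipes input splits to stdin; emitted "key\tvalue" pairs are
# shuffled and sorted by key before reaching the reducers.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python
# reducer.py -- receives "word\t1" pairs grouped by key and sums them.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, _, value = line.rstrip("\n").partition("\t")
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, 0
    count += int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```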
In this video from SC12, Cycle Computing CEO Jason Stowe demonstrates how easy it is to use the company’s software to provision large compute instances on the AWS cloud.
CycleCloud is the leading software for creating HPC clusters in the cloud, from small clusters to Top500 supercomputer scales. CycleCloud makes it easy to deploy, secure, automate, and manage running calculations dynamically at large scales, up to 50,000 cores or more. Click here to start using CycleCloud. Companies use CycleCloud in production clusters running molecular modeling, risk analysis, bioinformatics/sequencing, semiconductor simulation, and document processing.
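CycleCloud's own interfaces aren't shown in the video, but for a flavor of what programmatic provisioning involves, below is a minimal sketch that launches raw EC2 instances with the boto3 AWS SDK. This is one small step of what a tool like CycleCloud automates end to end (images, security, scheduling, teardown); the AMI ID, key pair, and instance type are placeholders.

```python
# Not CycleCloud's API -- just a minimal boto3 sketch of programmatic
# instance provisioning on AWS, the kind of step CycleCloud automates.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder: your cluster node image
    InstanceType="c5.24xlarge",  # placeholder: a compute-heavy type
    KeyName="my-hpc-keypair",    # placeholder
    MinCount=1,
    MaxCount=100,                # ask for up to 100 nodes in one call
)

for instance in response["Instances"]:
    print("launched", instance["InstanceId"])
```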
Over at Datacenter Knowledge, John Rath writes that Supermicro launched new 2U and 4U/Tower platforms that maximize processing power and precisely tune hardware and firmware to provide lower latency than previous models, while still maintaining high reliability. The company debuted the systems at the High Frequency Trading World event this week in New York.
“Advanced trading firms looking to reduce latency and maximize transaction flow can gain an advantage with the extreme processing power and enterprise-class server optimizations designed into Supermicro’s Hyper-Speed systems,” said Wally Liaw, Vice President of Sales, International at Supermicro. “Our latest HFT-optimized platforms boost performance of the fastest rated x86 dual processors with board-level control and circuitry enhancements and custom-tailored cooling systems for the highest sustained performance. With mission-critical transactions on the line, Supermicro Hyper-Speed systems ensure peak performance with maximum reliability for the most demanding computational finance applications.”
The new servers are optimized for high-frequency trading and feature premium pre-installed CPUs and memory, with storage and I/O components validated through a rigorous burn-in process to ensure maximum performance and reliability on deployment. Read the Full Story.
In this video from SC12, Arnon Friedmann from Texas Instruments describes the company’s new multicore System-on-Chips (SoCs). Based on its award winning KeyStone architecture, TI’s SoCs are designed to revitalize cloud computing, inject new verve and excitement into pivotal infrastructure systems and, despite their feature rich specifications and superior performance, actually reduce energy consumption.
“Using multicore DSPs in a cloud environment enables significant performance and operational advantages with accelerated compute-intensive cloud applications,” said Rob Sherrard, VP of Service Delivery, Nimbix. “When selecting DSP technology for our accelerated cloud compute environment, TI’s KeyStone multicore SoCs were the obvious choice. TI’s multicore software enables easy integration for a variety of high-performance cloud workloads like video, imaging, analytics and computing, and we look forward to working with TI to help bring significant OPEX savings to high performance compute users.”
In related news, TI announced today that Nimbix will use the company’s high-performance KeyStone multicore DSPs, significantly reducing power and accelerating workflows for video processing and imaging applications in the cloud.
Breaking new ground for scientific computing, two teams of Department of Energy (DOE) scientists have for the first time exceeded a sustained performance level of 10 petaflops (quadrillion floating point operations per second) on the Sequoia supercomputer at the US National Nuclear Security Administration’s (NNSA) Lawrence Livermore National Laboratory (LLNL).
A team led by Argonne National Laboratory used the recently developed Hardware/Hybrid Accelerated Cosmology Codes (HACC) framework to achieve nearly 14 petaflops on the 20-petaflop Sequoia, an IBM BlueGene/Q supercomputer, in a record-setting benchmark run with 3.6 trillion simulation particles. HACC provides cosmologists the ability to simulate entire survey-sized volumes of the universe at a high resolution, with the ability to track billions of individual galaxies.
Simulations of this kind are required by the next generation of cosmological surveys to help elucidate the nature of dark energy and dark matter. The HACC framework is designed for extreme performance in the weak-scaling limit (high levels of memory utilization) by integrating innovative algorithms and programming paradigms in a way that adapts easily to different computer architectures.
The HACC team is now conducting a fully-instrumented science run with more than a trillion particles on Argonne’s 10-petaflop Mira, which is also an IBM BlueGene/Q system.
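To make concrete what these runs compute, the sketch below shows a gravitational N-body step as a toy direct-summation leapfrog update in NumPy. This is emphatically not how HACC works: an O(N²) sum is hopeless at trillions of particles, which is why HACC splits gravity into a particle-mesh long-range part and a tuned short-range part. All parameters below are arbitrary toy values.

```python
# Toy illustration only: direct-summation gravitational N-body in NumPy.
# The quantity computed is the same as in codes like HACC -- per-particle
# accelerations integrated forward in time -- but at none of the scale.
import numpy as np

G, EPS, DT = 1.0, 1e-2, 1e-3   # toy units, softening length, timestep
rng = np.random.default_rng(0)

n = 1024
pos = rng.uniform(0.0, 1.0, size=(n, 3))
vel = np.zeros((n, 3))
mass = np.full(n, 1.0 / n)

def accelerations(pos, mass):
    # Pairwise separations r_ij = x_j - x_i, softened to avoid singularities.
    diff = pos[None, :, :] - pos[:, None, :]        # (n, n, 3)
    dist2 = (diff ** 2).sum(axis=-1) + EPS ** 2     # (n, n)
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                   # no self-force
    return G * (diff * (mass[None, :] * inv_d3)[..., None]).sum(axis=1)

# One kick-drift-kick (leapfrog) step.
acc = accelerations(pos, mass)
vel += 0.5 * DT * acc
pos += DT * vel
vel += 0.5 * DT * accelerations(pos, mass)
print("mean speed after one step:", np.linalg.norm(vel, axis=1).mean())
```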
In this video from SC12, Geoffrey Noer from Panasas describes the hybrid storage capabilities of the new ActiveStor 14 system.
“The world’s fastest parallel storage system just got faster with Panasas ActiveStor 14. By accelerating small-file and metadata performance with Solid State Drive (SSD) technology, ActiveStor 14 delivers extreme performance for the technical computing and big data workloads commonly found in HPC environments. Based on a fifth-generation storage blade architecture and the Panasas PanFS storage operating system, ActiveStor 14 delivers unmatched scale-out NAS performance in addition to the manageability, reliability, and value required by demanding computing organizations in the bioscience, energy, finance, government, manufacturing, media, and other sectors.”
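As a rough illustration of the hybrid-storage idea (not Panasas code or the PanFS policy), the sketch below routes small files to an SSD pool and large files to a disk pool; the mount points and the 64 KB cutoff are hypothetical.

```python
# Toy sketch of size-based tiering: small files and metadata-heavy
# workloads benefit from SSD, large streaming files from spinning disk.
# Mount points and cutoff are hypothetical, not Panasas internals.
import os
import shutil

SSD_POOL, HDD_POOL = "/mnt/ssd_pool", "/mnt/hdd_pool"  # hypothetical mounts
SMALL_FILE_CUTOFF = 64 * 1024                          # bytes; assumption

def place(path):
    """Pick a destination pool for a file based on its size."""
    size = os.path.getsize(path)
    pool = SSD_POOL if size <= SMALL_FILE_CUTOFF else HDD_POOL
    return os.path.join(pool, os.path.basename(path))

def ingest(path):
    """Copy a file into whichever pool place() selects."""
    dest = place(path)
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    shutil.copy2(path, dest)
    return dest
```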
John Shalf, one of our celebrated Rock Stars of HPC, has been appointed CTO at NERSC. Shalf will also continue to serve in his current role as head of the Computer and Data Sciences Department in Berkeley Lab’s Computational Research Division (CRD).
NERSC is the primary HPC facility for scientific research sponsored by the DOE’s Office of Science. As Chief Technology Officer, Shalf will help NERSC develop a plan to achieve exascale performance.
“A key goal of DOE’s exascale program is to develop high performance scientific computers that deliver a thousand times the performance of today’s most powerful computers, while using less than twice the power, by the end of the next decade. The demands of energy efficiency are driving deep changes that will transform the way we do computing at all scales, not just exascale. NERSC will take an active role in working with industry, as a public/private partnership, to guide HPC designs and bring the DOE user community along in this time of great transition.”
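It is worth pausing on the arithmetic of that goal: a thousandfold performance increase within twice the power budget implies roughly a 500x improvement in energy efficiency. A back-of-envelope sketch, taking Sequoia's approximate figures (around 20 petaflops peak at around 8 MW) as an assumed reference point:

```python
# Back-of-envelope: what "1000x the performance at <2x the power" means
# for energy efficiency. Reference point assumed here: Sequoia at
# roughly 20 petaflops for roughly 8 MW.
petaflops, megawatts = 20.0, 8.0
today = petaflops * 1e15 / (megawatts * 1e6)            # flops per watt
target = (1000 * petaflops * 1e15) / (2 * megawatts * 1e6)
print(f"today : {today / 1e9:.1f} gigaflops/watt")      # ~2.5 GF/W
print(f"target: {target / 1e9:.1f} gigaflops/watt")     # ~1250 GF/W, a 500x jump
```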
One of the many product announcements out of SC12 last month was the release of StackIQ Enterprise HPC, a comprehensive cluster management suite powered by Rocks+ software.
“We are thrilled to bring this major update to our HPC customers in time for the annual SC12 conference,” said Tim McIntire, President and co-founder of StackIQ. “By bringing the enterprise features of our Enterprise Data product to the HPC products, we’ve improved the HPC product while making it easier for those building hybrid HPC/Hadoop clusters to get their work done.”
Administrators will find it easier to track cluster health using new advanced cluster diagnostics tools, while developers will find it easier than ever to develop and debug Rolls using features like the filtered “profiles” tab in the GUI. StackIQ also added advanced firewall configuration to enhance the security of HPC clusters, making them more robust and easier to integrate into today’s enterprise data center environments. Read the Full Story.
This week, UK-based SMB engineering software developer Engys announced that its new server cluster will be hosted by its supplier, OCF. Engys staff at five worldwide locations submit jobs to the cluster via private, secure remote access, and having OCF host the cluster saves Engys £30k per annum in operational costs for space, cooling, energy, and staffing.
“Having our own cluster, but hosted and managed by OCF, gives us freedom,” said Francisco Campos, Director of Operations at Engys. “We do not have to waste our own time and effort maintaining the cluster; we don’t need cluster skills. We also don’t need to provide energy to run and cool the cluster or space to house it. We don’t have to worry about its administration or security. There are significant economic advantages to having OCF run this for us. It will save us in the region of £30k per annum on staff and operational costs.”
The server cluster purchased by Engys and hosted by OCF is built on Supermicro server technology with Intel’s latest Sandy Bridge processors. It delivers 144 cores of processing power and 14 TB of storage. Engys also uses OCF’s recently upgraded enCORE Compute-on-Demand service, which draws compute power from a cluster at the Science and Technology Facilities Council’s (STFC) Hartree Centre. Read the Full Story.
In this video from SC12, Allinea CTO David Lecomber describes the company’s powerful tools for debugging at scale.
“This is the mission of Allinea Software: to make parallel programming accessible to the widest range of scientists and programmers, via tools with unprecedented productivity and ease of use. Our customers are confident that their applications will run successfully on their organization’s largest systems, because they know that ours are the only tools that can scale to the size of the world’s largest systems.”
Over at Admin HPC, Douglas Eadline writes that the proliferation of manycore architectures continues to be a challenge for HPC programmers.
Recently, Intel introduced its Many Integrated Core (MIC) co-processor, the Xeon Phi. While the Phi lives on the PCI bus and brings more cores to the table, its design is somewhat different from that of a GP-GPU. The current Phi has 60 general-purpose x86 cores, each coupled with a vector processor. The Phi is not a co-processor in the GP-GPU sense but rather a fully functional processing unit. In terms of software, the Phi can be programmed using standard OpenMP, OpenCL, and updated versions of Intel’s Fortran, C++, and math libraries – that is, the same tools used to program x86 multicore processors. Data must still travel across the PCI bus, but the volume depends on how the Phi is used.
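Since OpenCL is one of the programming models Eadline lists, here is a minimal vector-add sketch using the PyOpenCL bindings. The same code runs unchanged on CPUs, GP-GPUs, or a Phi, with the runtime handling the PCI-bus transfers mentioned above; an actual Phi being present (and exposed through an OpenCL driver) is an assumption, since create_some_context() simply picks an available device.

```python
# Minimal OpenCL vector add via PyOpenCL (pip install pyopencl).
# Device selection is left to create_some_context(), so running this on
# a Xeon Phi specifically is an assumption, not a guarantee.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

prg = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out)
{
    int gid = get_global_id(0);
    out[gid] = a[gid] + b[gid];
}
""").build()

# One work-item per element; the runtime moves buffers across the bus.
prg.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
print("vector add OK on:", ctx.devices[0].name)
```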