The software defined data center is the underlying data center architecture that allows most IT infrastructure to be defined in software and to function as enterprise-wide resources. This approach enables ITaaS to be delivered in a virtualized environment with greater agility, speed and quality of service.
“The HPC community has had a long-standing interest in creating scale-out environments for running throughput-oriented and parallel distributed workloads. Both large-scale environments (for example, cloud computing facilities) and scale-out workloads (such as Big Data) are becoming more important in the enterprise. In fact, with the rise of Big Data, the advent of affordable, powerful clusters, and strategies that take advantage of commodity systems for scale-out applications, these days the enterprise computing environment is looking a lot like HPC.”
A new report on the problems and opportunities that will drive the need for next-generation HPC has been released by the Task Force on High Performance Computing of the Secretary of Energy Advisory Board. Commissioned by the Secretary of Energy, Dr. Ernest J. Moniz, the report includes recommendations on where the DOE and the NNSA should invest to deliver the next class of leading-edge machines by the middle of the next decade.
In an unprecedented collaboration, eight national laboratories will apply supercomputing resources to a new climate study with the National Center for Atmospheric Research. The project, called Accelerated Climate Modeling for Energy, or ACME, is designed to accelerate the development and application of fully coupled, state-of-the-science Earth system models for scientific and energy applications.
Over at the Dell HPC Blog, Mayura Deshmukh writes that NCSA’s Private Sector Program has done some interesting work analyzing the performance benefits of the new Intel Xeon E5-2600 v2 processors (code-named Ivy Bridge) over the previous generation E5-2600 series (code-named Sandy Bridge). With a focus on applications in the manufacturing sector, the study included a mix of commercial and open source applications like ANSYS Fluent, LS-DYNA, Simulia Abaqus, MUMPS, and LAMMPS.
The all-new Journal of Supercomputing Frontiers and Innovations has published a new paper entitled: Toward Exascale Resilience – 2014 Update. Written by Franck Cappello, Al Geist, William Gropp, Sanjay Kale, Bill Kramer, and Marc Snir, the paper surveys what the community has learned in the past five years and summarizes the research problems still considered critical by the HPC community.
“This paper provides the information and benchmarks necessary to choose the best file system for a given application from among the available options: RAM disks, virtualized local hard drives, and distributed storage shared with NFS or Lustre. We report benchmarks of I/O performance and parallel scalability on Intel Xeon Phi coprocessors, along with the strengths and limitations of each option.”
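The core of such a comparison is measuring sustained throughput against each candidate mount point. As a minimal sketch of that idea (not the paper's actual benchmark harness), the snippet below times a sequential write to a given path and reports MB/s; the mount points in the loop are hypothetical examples standing in for a RAM disk, local disk, NFS, or Lustre target.

```python
import os
import time

def measure_write_throughput(path, size_mb=64, block_kb=1024):
    """Write size_mb of data in block_kb chunks and return MB/s."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = (size_mb * 1024) // block_kb
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(n_blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data out to the backing store
    elapsed = time.time() - start
    os.remove(path)
    return size_mb / elapsed

if __name__ == "__main__":
    # Hypothetical mount points; substitute the storage targets under test
    for label, mount in [("ramdisk", "/dev/shm"), ("local", "/tmp")]:
        target = os.path.join(mount, "bench.tmp")
        print(f"{label:8s} {measure_write_throughput(target):8.1f} MB/s")
```

A real study would repeat each measurement, vary block sizes, and add a parallel-read phase, but even this sketch makes the RAM-disk versus shared-storage gap visible.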
“Over the past two and a half years, the team worked on a DOE-funded project, Computer-Aided Engineering for Electric Drive Vehicle Batteries (CAEBAT), to combine new and existing battery models into engineering simulation software to shorten design cycles and optimize batteries for increased performance, safety and lifespan. In order to achieve these goals, the team has been modeling thermal management, electrochemistry, ion transport and fluid flow.”
“The continuous demands of competition help maintain strong markets for high performance computing systems, even amidst apparent paradoxes. Our surveys show that HPC users are bucking the trend of reducing spending on servers, and research indicates modest growth for HPC in all economic sectors (industrial, academic, and government) over the next four years.”