“The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single-core applications need to be modified to use extra processors (and accelerators). Unfortunately, there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.”
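In practice, many HPC codes bridge that gap by combining two models: MPI for scale-out across nodes and OpenMP for scale-up within a node. The sketch below is a minimal, illustrative hybrid example rather than anything quoted from the report; the harmonic-sum workload and problem size are arbitrary choices.

```c
/* Hybrid MPI + OpenMP sketch: scale-out across nodes with MPI,
 * scale-up within a node with OpenMP. Illustrative only.
 * Typical build: mpicc -fopenmp hybrid.c -o hybrid */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI rank (e.g., one per node) owns a slice of the problem... */
    long chunk = N / size;
    double local_sum = 0.0;

    /* ...and uses OpenMP threads to share that node's memory. */
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = rank * chunk; i < (rank + 1) * chunk; i++)
        local_sum += 1.0 / (double)(i + 1);

    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
               0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("harmonic sum ~ %f\n", global_sum);

    MPI_Finalize();
    return 0;
}
```

Maintaining two programming models in one source tree is exactly the burden the report describes: the application must be modified once for the distributed level and again for the shared-memory level.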
Scaling Hardware for In-Memory Computing
The two methods of scaling processors are based on how the memory architecture is scaled and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High-performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed-memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
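As a concrete illustration of that shared-memory model, the minimal OpenMP sketch below (assumed for illustration, not drawn from the report) has every thread update a single global array in place; no explicit copies between processors are needed.

```c
/* Shared-memory (scale-up) sketch: all threads update one global
 * array in place; no data is copied between processors.
 * Typical build: gcc -fopenmp scaleup.c -o scaleup */
#include <omp.h>
#include <stdio.h>

#define N 8
static double data[N];   /* one pool of memory, visible to every thread */

int main(void)
{
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        data[i] = 2.0 * i;    /* each thread writes its share directly */

    for (int i = 0; i < N; i++)
        printf("data[%d] = %.1f\n", i, data[i]);
    return 0;
}
```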
In-Memory Computing for HPC
To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. A scale-up in-memory system can offer a much better total cost of ownership and provide value in a variety of ways. “If the application program has concurrent sections, then it can be executed in a ‘parallel’ fashion, much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution.”
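The quoted point about concurrent portions is usually formalized as Amdahl’s law: with a parallel fraction p of the program and n processors, speedup = 1 / ((1 − p) + p/n). The short sketch below works through the arithmetic; the 90% parallel fraction is an assumed example value.

```c
/* Amdahl's law sketch: speedup = 1 / ((1 - p) + p / n), where p is
 * the parallel fraction of the program and n the processor count. */
#include <stdio.h>

static double amdahl_speedup(double p, int n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    double p = 0.90;   /* assume 90% of the work can run in parallel */
    for (int n = 1; n <= 64; n *= 4)
        printf("%2d processors -> %5.2fx speedup\n", n, amdahl_speedup(p, n));
    return 0;
}
```

Even with 64 processors the speedup tops out near 8.8x, because the 10% serial portion dominates; this is why not all applications are good candidates for parallel execution.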
insideHPC Research Report on In-Memory Computing
To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: a scale-up design that allows multiple cores to share a large global pool of memory, and a scale-out design that distributes data sets across the memory of separate host systems in a computing cluster. To learn more about in-memory computing, download this guide from insideHPC and SGI.
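For contrast with the shared-memory sketch above, the illustrative example below (assumed, not taken from the guide) shows the scale-out model, where the data set must be explicitly partitioned and copied into the memory of each host using MPI.

```c
/* Scale-out (distributed-memory) sketch: the data set is explicitly
 * split and copied across the memory of separate hosts with MPI.
 * Sizes and values are arbitrary. Typical build: mpicc scaleout.c -o scaleout */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int per_rank = 4;
    double *full = NULL;
    if (rank == 0) {                      /* only rank 0 holds the full set */
        full = malloc(sizeof(double) * per_rank * size);
        for (int i = 0; i < per_rank * size; i++)
            full[i] = (double)i;
    }

    /* Unlike shared memory, each host must receive its own copy. */
    double local[4];                      /* matches per_rank */
    MPI_Scatter(full, per_rank, MPI_DOUBLE,
                local, per_rank, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    double sum = 0.0;
    for (int i = 0; i < per_rank; i++)
        sum += local[i];
    printf("rank %d partial sum = %.1f\n", rank, sum);

    if (rank == 0) free(full);
    MPI_Finalize();
    return 0;
}
```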
Interview: Bill Mannel and Dr. Eng Lim Goh on What’s Next for HPE & SGI
In this video, Bill Mannel, VP & GM, High-Performance Computing and Big Data, HPE, and Dr. Eng Lim Goh, PhD, SVP & CTO of SGI, join Dave Vellante & Paul Gillin at HPE Discover 2016. “The combined HPE and SGI portfolio, including a comprehensive services capability, will support private and public sector customers seeking larger high-performance computing installations, including U.S. federal agencies as well as enterprises looking to leverage high-performance computing for business insights and a competitive edge.”
SGI Paves the Way to the Future with HPE at SC16
In this video from SC16, Gabriel Broner from SGI describes the company’s full breadth of HPC solutions. With SGI recently acquired by Hewlett Packard Enterprise, product technologies such as the SGI ICE XA system and SGI UV big-memory systems will continue to offer unique value for HPC customers on the road to Exascale. “Will this be a good marriage? Well, this reporter got to spend some time with HPE, SGI, and their joint customers at the recent HP-CAST user group meeting, and all indications are that this combination will be a powerful force in HPC moving forward.”
SGI and DDN Power UK Met Office SPICE System for Weather and Climate Research
SGI, Bright Computing, and DDN recently announced that the UK Met Office has selected the three HPC vendors to provide its new Scientific Processing and Intensive Compute Environment (SPICE) system. SPICE will enable weather and climate researchers to dramatically reduce the time required to analyze massive amounts of climate simulation data.
Hewlett Packard Enterprise Gains Momentum with SGI Acquisition at SC16
In this video from SC16, Bill Mannel from Hewlett Packard Enterprise describes how the company is gaining momentum in the HPC space as the leading vendor on the TOP500. With the recent acquisition of SGI, HPE is moving forward with a broader range of solutions for high performance computing.
High-Throughput Genomic Sequencing Workflow
A workflow to support genomic sequencing requires a collaborative effort among many research groups and a well-defined process from initial sampling to final analysis. Learn the four steps involved in pre-processing.
Enabling Personalized Medicine through Genomic Workflow Acceleration
If the keys to health, longevity, and a better overall quality of life are encoded in our individual genetic make-up, then few advances in the history of medicine can match the significance and potential impact of the Human Genome Project. Since its instigation in 1985, the race has centered on dramatically improving the breadth and depth of genomic understanding while reducing the costs involved in sequencing, storing, and processing an individual’s genomic information.