

In-Memory Computing for HPC

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. The scale-up in-memory system provides a much better total cost of ownership and can provide value in a variety of ways. “If the application program has concurrent sections, then it can be executed in a ‘parallel’ fashion, much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution.”
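The point about the concurrent fraction limiting parallel speedup is usually expressed as Amdahl's law. A minimal illustrative sketch (not from the article itself):

```python
def amdahl_speedup(parallel_fraction, n_processors):
    """Amdahl's law: achievable speedup when only part of a
    program can run concurrently across n_processors."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# A program that is 90% parallel gains well on 16 processors,
# but the 10% serial portion caps it below 10x at any scale.
print(round(amdahl_speedup(0.9, 16), 2))    # 6.4
print(round(amdahl_speedup(0.9, 1024), 2))  # 9.91
```

This is why, as the excerpt notes, not all applications are good candidates for parallel execution: a large serial portion dominates no matter how many processors are added.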

Interview: Bill Mannel and Dr. Eng Lim Goh on What’s Next for HPE & SGI

In this video, Bill Mannel, VP & GM, High-Performance Computing and Big Data, HPE, and Dr. Eng Lim Goh, SVP & CTO of SGI, join Dave Vellante & Paul Gillin at HPE Discover 2016. “The combined HPE and SGI portfolio, including a comprehensive services capability, will support private and public sector customers seeking larger high-performance computing installations, including U.S. federal agencies as well as enterprises looking to leverage high-performance computing for business insights and a competitive edge.”

SGI UV as a Converged Compute and Data Management Platform

In life sciences, perhaps more than any other HPC discipline, simplicity is key. The SGI solution meets this requirement by delivering a single system that scales to huge capabilities by unifying compute, memory, and storage. Researchers and scientists in personalized medicine (and most life sciences) are typically not computer science experts and want a simple development and usage model that enables them to focus on their research and projects.

SGI UV Powers Romanian Aerospace Agency

The National Institute of Aerospace Research in Romania will power its scientific and aeronautical research program with a new SGI UV system. “This is the second SGI system we have installed at INCAS,” said Costea Emil, head of the Technical Services Department at INCAS. “SGI solutions have allowed us to rapidly develop and test our software prototypes and solutions. With our newest installation, we can reduce the time required to program algorithms, allowing us to focus on the key scientific problems we’re chartered to solve.”

Interview: Numascale to Partner with OEMs on Big Memory Server Technology

Hailing from Norway, big-memory appliance maker Numascale has been a fixture at the ISC conference since the company’s formation in 2008. At ISC 2016, Numascale was noticeably absent from the show, and the word on the street was that the company was retooling its NumaConnect™ technology around NVMe. To learn more, we caught up with Einar Rustad, Numascale’s CTO.

Video: Matching the Speed of SGI UV with Multi-rail LNet for Lustre

Olaf Weber from SGI presented this talk at LUG 2016. “In collaboration with Intel, SGI set about creating support for multiple network connections to the Lustre filesystem, with multi-rail support. With Intel Omni-Path and EDR Infiniband driving to 200Gb/s or 25GB/s per connection, this capability will make it possible to start moving data between a single SGI UV node and the Lustre file system at over 100GB/s.”
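The arithmetic behind the quoted figures can be checked directly. A small illustrative sketch (the four-rail count is an assumption implied by the 100 GB/s target, not stated in the talk summary):

```python
def aggregate_bandwidth_gbytes(link_gbits, num_rails):
    """Aggregate bandwidth in GB/s from multiple network rails,
    each rated in Gb/s (8 bits per byte)."""
    return link_gbits / 8 * num_rails

# One 200 Gb/s link moves 25 GB/s; with multi-rail LNet,
# four such connections reach the 100 GB/s figure cited.
print(aggregate_bandwidth_gbytes(200, 1))  # 25.0
print(aggregate_bandwidth_gbytes(200, 4))  # 100.0
```

The value of multi-rail support is exactly this aggregation: a single large SGI UV node can drive several network interfaces to the Lustre filesystem in parallel rather than being limited to one link's bandwidth.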

Trio of SGI Systems to Drive Innovation at SKODA AUTO

Today SGI announced that ŠKODA AUTO has deployed an SGI UV and two SGI ICE high performance computing systems to further enhance its computer-aided engineering capabilities. “Customer satisfaction and the highest standard of production are at the very core of our brand and the driving force behind our innovation processes,” said Petr Rešl, head of IT Services, ŠKODA AUTO. “This latest installation enables us to conduct complex product performance and safety analysis that will in turn help us to further our commitment to our customer’s welfare and ownership experience. It helps us develop more innovative vehicles at an excellent value-to-price ratio.”

SGI Update: Zero Copy Architecture (ZCA)

“In high performance computing, data sets are increasing in size and workflows are growing in complexity. Additionally, it is becoming too costly to have copies of that data and, perhaps more importantly, too time and energy intensive to move them. Thus, the novel Zero Copy Architecture (ZCA) was developed, where each process in a multi-stage workflow writes data locally for performance, yet other stages can access data globally. The result is accelerated workflows with the ability to perform burst buffer operations, in-situ analytics & visualization without the need for a data copy or movement.”
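One way to picture the write-local/read-global pattern is memory-mapped shared data: a later workflow stage reads the producer's output in place, through the operating system's page cache, rather than making its own copy. A loose illustrative sketch in Python (this is an analogy, not SGI's ZCA implementation; the file path is hypothetical):

```python
import mmap
import os
import struct

PATH = "/tmp/zca_demo.bin"  # stands in for globally visible storage

# Stage 1 (simulation): writes its results once, "locally".
with open(PATH, "wb") as f:
    for i in range(4):
        f.write(struct.pack("d", i * 1.5))

# Stage 2 (analytics/visualization): maps the same bytes read-only.
# Both stages share the OS page cache, so the data is never copied
# into a second buffer just to be analyzed.
with open(PATH, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as buf:
        values = struct.unpack("4d", buf[:32])
        print(values)  # (0.0, 1.5, 3.0, 4.5)

os.remove(PATH)
```

The win at HPC scale is the one the excerpt describes: when data sets are too large to copy or move cheaply, letting in-situ analytics stages read the producer's data in place saves both time and energy.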

SGI to Deliver Advanced Data Processing for Nagaoka University of Technology

Today SGI Japan announced that the Nagaoka University of Technology has selected SGI UV 300 and SGI UV 30EX systems, SGI Rackable servers, and SGI InfiniteStorage 5600 for its next integrated education and research high-performance computing system. With a tenfold performance increase over the previous system, the new supercomputer will start operation on March 1, 2016.

Video: SGI Looks to Zero Copy Architecture for HPC and Big Data

In this video from SC15, Dr. Eng Lim Goh from SGI describes how the company is embracing new HPC technology trends such as new memory hierarchies. With the convergence of HPC and Big Data as a growing trend, SGI envisions a “Zero Copy Architecture” that would bring together a traditional supercomputer with a Big Data analytics machine in a way that would not require users to move their data between systems.