Startup Partners with Princeton on DARPA In-Memory AI Chip

An AI startup co-founded by a Princeton University professor has won an $18.6 million DoD grant to develop an in-memory chip built to deliver faster, more efficient AI inference processing. AI technology company EnCharge AI has announced a partnership with Princeton University supported…

Radio Free HPC: The Persistence of Memory

In this episode, we drill down on what Intel is doing with their cool Optane memory tech, shooting for speeds that remind you of memory, sizes that look like storage, and costs that make it look like a deal, with real byte-addressable persistent memory right inside the server – or block-addressable, if you want. This is a space that was bound to get filled and we’ve been watching the industry’s progress.

Improving Speed, Scalability and the Customer Experience with In-Memory Data Grids

Over the last decade, the new anytime, anywhere, personalized experience has driven query and transaction volumes up 10 to 1,000x. It has created 50x more data about customers, products, and interactions. It has also shrunk the response times customers expect from days or hours to seconds or less. Download the new report from GridGain to learn how in-memory computing and in-memory data grids are tackling today's data storage challenges.

Why IMC Is Right for Today’s Fast-Data and Big-Data Applications

In-memory computing, or IMC, is being used across a variety of sectors, from fintech and ecommerce to telecommunications and IoT, as it becomes well known for its success in processing and analyzing big data. Download the new report from GridGain, “Choosing the Right In-Memory Computing Solution,” to find out whether IMC is the right choice for your business.

Five Ways Scale-Up Systems Save Money and Improve TCO

The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single-core applications need to be modified to use extra processors (and accelerators). Unfortunately, there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.
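The modification the paragraph describes can be as simple as fanning an existing per-item function out over a pool of workers. The sketch below is illustrative only (it is not taken from the report); it uses Python's standard `concurrent.futures` to show the serial and parallel forms side by side.

```python
# Sketch: adapting a serial loop to use extra cores (illustrative only).
# For CPU-bound work in CPython you would typically swap ThreadPoolExecutor
# for ProcessPoolExecutor to sidestep the GIL.
from concurrent.futures import ThreadPoolExecutor

def work(x):
    # stand-in for a per-item computation
    return x * x

items = list(range(8))

# single-core version: one item at a time
serial = [work(x) for x in items]

# modified version: the same function fanned out across a pool of workers
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(work, items))

assert serial == parallel
```

The per-item function is untouched; only the driving loop changes, which is the easy case. Code with shared mutable state needs far more invasive restructuring, which is part of why no single portable solution exists.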

In-Memory Data Grids

This white paper provides an overview of in-memory computing technology with a focus on in-memory data grids. It discusses the advantages and uses of in-memory data grids and introduces the GridGain In-Memory Data Fabric. Download this guide to learn more.

Scaling Software for In-Memory Computing

“The move away from the traditional single processor/memory design has fostered new programming paradigms that address multiple processors (cores). Existing single core applications need to be modified to use extra processors (and accelerators). Unfortunately there is no single portable and efficient programming solution that addresses both scale-up and scale-out systems.”

Scaling Hardware for In-Memory Computing

The two methods of scaling processors are based on the method used to scale the memory architecture and are called scale-out and scale-up. Beyond the basic processor/memory architecture, accelerators and parallel file systems are also used to provide scalable performance. “High performance scale-up designs for scaling hardware require that programs have concurrent sections that can be distributed over multiple processors. Unlike the distributed memory systems described below, there is no need to copy data from system to system because all the memory is globally usable by all processors.”
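The shared-memory property quoted above can be sketched with threads summing one large array in place. This is a minimal illustration (the array, worker count, and chunking are assumptions, not from the article): every worker reads the same data directly, with no copies.

```python
# Sketch of the scale-up (shared memory) model: four workers compute
# partial sums over ONE array that is globally visible to all of them.
# Illustrative only; sizes and worker count are arbitrary.
import threading

data = list(range(1_000_000))
partials = [0, 0, 0, 0]

def partial_sum(i, lo, hi):
    # No data is copied between workers: each thread reads the shared
    # 'data' in place, exactly as all processors share memory in a
    # scale-up design.
    s = 0
    for j in range(lo, hi):
        s += data[j]
    partials[i] = s

chunk = len(data) // 4
threads = [threading.Thread(target=partial_sum, args=(i, i * chunk, (i + 1) * chunk))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert sum(partials) == sum(data)
```

In a scale-out (distributed memory) design, by contrast, each node would first have to receive its own copy of a data slice over the network before computing its partial sum, and the partials would be shipped back for the final reduction.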

In-Memory Computing for HPC

To achieve high performance, modern computer systems rely on two basic methodologies to scale resources: scale-up or scale-out. The scale-up in-memory system provides a much better total cost of ownership and can provide value in a variety of ways. “If the application program has concurrent sections then it can be executed in a “parallel” fashion. Much like using multiple bricklayers to build a brick wall. It is important to remember that the amount and efficiency of the concurrent portions of a program determine how much faster it can run on multiple processors. Not all applications are good candidates for parallel execution.”
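The observation that “the amount and efficiency of the concurrent portions of a program determine how much faster it can run” is the intuition behind Amdahl's law, which the article does not name but which makes the limit concrete. A short sketch with illustrative numbers:

```python
# Amdahl's law: the serial fraction of a program bounds its speedup,
# no matter how many processors are added. Numbers are illustrative.
def speedup(parallel_fraction: float, n_procs: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_procs)

# Even a program that is 90% parallel gains little beyond a few dozen
# cores: its speedup can never exceed 1 / 0.1 = 10x.
for n in (2, 8, 32, 1024):
    print(n, round(speedup(0.9, n), 2))
```

This is why not all applications are good candidates for parallel execution: a program that is only half concurrent tops out at 2x no matter how many bricklayers are hired.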

Radio Free HPC Year End Review of 2016 Predictions

In this podcast, the Radio Free HPC team looks at how Shahin Khan fared with his OrionX 2016 Technology Issues and Predictions. “Here at OrionX.net, we are fortunate to work with tech leaders across several industries and geographies, serving markets in Mobile, Social, Cloud, and Big Data (including Analytics, Cognitive Computing, IoT, Machine Learning, Semantic Web, etc.), and focused on pretty much every part of the “stack”, from chips to apps and everything in between. Doing this for several years has given us a privileged perspective. We spent some time to discuss what we are seeing and to capture some of the trends in this blog.”