New Supercomputer Enables Rugged, Real-Time AI at the Edge

[SPONSORED POST] Program managers face hard tradeoffs bringing artificial intelligence to in-the-field use cases. This whitepaper, “New Supercomputer Enables Rugged, Real-Time AI at the Edge,” describes a new AI server from One Stop Systems that shows what capabilities they should look for in portable, rugged AI deployments. The Rigel Edge Supercomputer addresses the requirements of AI in the field: high-performance computing (HPC) using optimized CPUs, advanced GPUs, flash-memory NVMe drives, and high-bandwidth interconnects.

Enhancing Security with High Performance AI Capability Deployed at the Rugged Edge

In this sponsored post from One Stop Systems, we see that intelligent long-range surveillance is critical, whether for surviving a fast-moving battlefield situation, protecting sensitive industrial or transportation hub assets, or ensuring uninterrupted operation of critical national infrastructure. The ability to provide 24/7 remote long-range threat detection and situational awareness, coupled with human-machine control, enables the fast and appropriate threat response that is fundamental to addressing these security imperatives.

AI Transportable Market

This whitepaper, “AI Transportable Market,” from One Stop Systems, describes how the requirements for AI in the field form a specific and distinct segment in the big, fast-growing edge computing market, separate from the familiar segments of edge data centers and the Internet of Things. One way to describe this emerging segment is “AI Transportables.”

AI Workflow Scalability through Expansion

In this special guest feature, Braden Cooper, Product Marketing Manager at One Stop Systems (OSS), suggests that AI inferencing platforms must process data in real time to make the split-second decisions required to maximize effectiveness. Without compromising the size of the data set, the best way to scale model training speed is to add modular data processing nodes.
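
To make the idea of scaling through added nodes concrete, here is a minimal sketch (not from the article) of data-parallel training with PyTorch DistributedDataParallel, where each additional node simply takes a shard of the same, uncompromised data set. The model, dataset, and hyperparameters are placeholders.

```python
# Hypothetical sketch: scaling training by adding data-parallel nodes.
# Assumes PyTorch with torch.distributed; launched via torchrun on each node.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder dataset and model; the full data set stays intact and is
    # simply sharded across however many nodes/GPUs are attached.
    dataset = TensorDataset(torch.randn(10_000, 128), torch.randint(0, 2, (10_000,)))
    sampler = DistributedSampler(dataset)          # each rank sees a distinct shard
    loader = DataLoader(dataset, batch_size=64, sampler=sampler)

    model = DDP(torch.nn.Linear(128, 2).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(3):
        sampler.set_epoch(epoch)                   # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            opt.zero_grad()
            loss_fn(model(x), y).backward()        # gradients all-reduced across nodes
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with torchrun on every node (e.g. torchrun --nnodes=N --nproc_per_node=GPUS train.py), adding a node changes only the launch command, not the training script, which is the sense in which throughput scales without shrinking the data set.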

NVMe over Fabrics and GPU Direct Storage Boost HPC and AI Edge Applications

In this special guest feature, Tim Miller, VP of Product Marketing at One Stop Systems (OSS), discusses how deploying edge HPC solutions – instead of moving data over relatively slow or insecure networks to distant datacenters – provides significant benefits in cost, responsiveness and security. Real-time decisions require sourcing and storing raw data and converting it to actionable intelligence with high-speed computing in the field, close to the data source.
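
As a rough illustration of the GPUDirect Storage idea described above (the library choice and file path are assumptions, not something the article specifies), the sketch below uses the RAPIDS KvikIO Python bindings over NVIDIA's cuFile API to read a file on local NVMe directly into GPU memory, bypassing the host bounce buffer.

```python
# Illustrative sketch only: reading NVMe-resident data straight into GPU memory
# with RAPIDS KvikIO (Python bindings over NVIDIA's cuFile/GPUDirect Storage API).
import cupy
import kvikio

def load_to_gpu(path: str, nbytes: int) -> cupy.ndarray:
    buf = cupy.empty(nbytes, dtype="u1")   # destination buffer in GPU memory
    f = kvikio.CuFile(path, "r")
    try:
        read = f.read(buf)                 # DMA from NVMe to GPU, no host staging copy
    finally:
        f.close()
    assert read == nbytes
    return buf

if __name__ == "__main__":
    # Hypothetical capture file produced by an edge sensor pipeline.
    data = load_to_gpu("/mnt/nvme/sensor_capture.bin", 1 << 20)
    print(data[:8])
```

The point of the design is that the GPU consumes the raw data where it lands, so no round trip over a slow or insecure network is needed before analysis can begin.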

Intelligent Video Analytics Pushes Demand for High Performance Computing at the Edge

In this special guest feature, Tim Miller, VP of Product Marketing at One Stop Systems (OSS), writes that his company is addressing the common requirements of video analytics applications with its AI on the Fly® building blocks. AI on the Fly is defined as moving datacenter levels of HPC and AI compute capability to the edge.

Precision Medicine Pushes Demand for HPC at the Edge: AI on the Fly® Delivers

In this special guest feature, Tim Miller from One Stop Systems writes that by bringing specialized, high performance computing capabilities to the edge through AI on the Fly, OSS is helping the industry deliver on the enormous potential of precision medicine. “The common elements of these solutions are high data rate acquisition, high speed low latency storage, and efficient high performance compute analytics. With OSS, all of these building block elements are connected seamlessly with memory mapped PCI Express interconnect configured and customized as appropriate, to meet the specific environmental requirements of ‘in the field’ installations.”

OSS PCI Express 4.0 Expansion System does AI on the Fly with Eight GPUs

Today One Stop Systems (OSS) announced the availability of a new OSS PCIe 4.0 value expansion system incorporating up to eight of the latest NVIDIA V100S Tensor Core GPUs. As the newest member of the company’s AI on the Fly product portfolio, the system delivers data center capabilities to HPC and AI edge deployments in the field or for mobile applications. “The 4U value expansion system adds massive compute capability to any Gen 3 or Gen 4 server via two OSS PCIe x16 Gen 4 links. The links can support an unprecedented 512 Gbps of aggregated bandwidth to the GPU complex.”
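
As a quick sanity check on the quoted figure (back-of-the-envelope, raw signaling rate only), two PCIe 4.0 x16 links at 16 GT/s per lane work out to 512 Gbps:

```python
# Back-of-the-envelope check of the quoted aggregate bandwidth (raw signaling,
# ignoring 128b/130b encoding and protocol overhead).
links = 2              # two OSS PCIe Gen 4 links into the expansion system
lanes_per_link = 16    # each link is x16
gt_per_lane = 16       # PCIe 4.0 signals at 16 GT/s per lane

raw_gbps = links * lanes_per_link * gt_per_lane
print(raw_gbps)        # 512
```

With 128b/130b encoding the usable figure per direction is closer to 504 Gbps, so the announcement's 512 Gbps reflects the raw signaling rate across both links.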