LAS VEGAS, April 9, 2025 — Today at Google Cloud Next 2025, AI and data intelligence solutions vendor DDN announced a collaboration with Google Cloud on AI-powered infrastructure: Google Cloud Managed Lustre, built on DDN’s EXAScaler. Available directly on Google Cloud, the solution provides a persistent parallel file system that delivers the speed, scalability, and efficiency enterprises and startups need to build AI, GenAI, and HPC applications, while letting customers take full advantage of Google Cloud services.
AI and HPC workloads demand fast, reliable data access that traditional storage struggles to provide. Google Cloud Managed Lustre, based on DDN’s EXAScaler, addresses this with a persistent parallel file system that sustains continuous high-performance data flow. Key advantages include:
- Superior Performance: Up to 1 TB/s throughput drives peak efficiency for AI training and inference.
- Business Impact: Faster time-to-insight, reduced costs, and streamlined data management.
- Effortless Scalability: Scales from terabytes to petabytes for enterprises and GenAI innovators.
Enterprises tackling AI at scale benefit from:
- Proven Data Management: Persistent access, proven in top supercomputing environments.
- Seamless Integration: Connects with the full breadth of Google Cloud services—Compute Engine, Google Kubernetes Engine (GKE), Cloud Storage, and more—eliminating bottlenecks.
- Accelerated Insights: Speeds up data pipelines for training, deployment, and innovation, helping teams meet demanding SLAs.
- Broad Accessibility: Brings supercomputing-class storage and compute to businesses of all sizes.
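To make the integration point concrete, here is a minimal sketch of how a Compute Engine VM with the Lustre client installed typically attaches a Lustre file system. The mount syntax is standard Lustre client syntax; the management-server NID, file-system name, and mount point below are hypothetical placeholders, not values from this announcement — substitute the details of your own Managed Lustre instance.

```shell
# Hypothetical placeholders — replace with values from your Managed Lustre instance.
MGS_NID="10.0.0.2@tcp"   # Lustre management server network identifier (assumed)
FSNAME="lfs"             # Lustre file-system name (assumed)
MOUNT_POINT="/mnt/lustre"

# Standard Lustre client mount command (requires the Lustre client packages
# on the VM and network connectivity to the file system).
MOUNT_CMD="sudo mount -t lustre ${MGS_NID}:/${FSNAME} ${MOUNT_POINT}"
echo "${MOUNT_CMD}"
```

Once mounted, the file system appears as an ordinary POSIX directory, so training jobs on Compute Engine or GKE can read and write data with no application changes.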
For GenAI startups, Google Cloud Managed Lustre means:
- Fast Model Development: Persistent parallel performance accelerates LLM training and tuning.
- Instant Deployment: Accessible via the Google Cloud console for rapid cloud adoption.
- Real-Time Inference: Ensures low-latency data access for GenAI applications.
- Maximized GPU Efficiency: Persistent data flow eliminates idle compute time.
- Reduced Costs: Optimizes cloud-based training with scalable pricing.
- Lifecycle Optimization: Keeps data AI-ready from pretraining to inference.
- High Availability: 99.999% uptime supports always-on LLM, RAG, and analytics workloads.
- Google Cloud Ecosystem: Integrates seamlessly across Compute Engine, GKE, Cloud Storage, and beyond.
- Fully Managed: Frees teams to innovate, not manage infrastructure.
“This partnership between DDN and Google Cloud is a seismic shift in AI and HPC infrastructure—rewriting the rules of performance, scale, and efficiency,” said Alex Bouzari, Co-Founder and CEO of DDN. “By fusing our industry-leading EXAScaler and Infinia with Google Cloud’s global reach and cutting-edge compute power, we’re not just accelerating AI—we’re unleashing an entirely new era of AI innovation at an unprecedented scale. This is the future, and it’s happening now.”
“As a leader in AI infrastructure, this partnership marks a defining milestone in our hyperscaler strategy for DDN,” said Santosh Erram, VP of AI Partnerships at DDN. “By uniting DDN’s enterprise-grade performance with the global scalability of Google Cloud, we’re breaking down the barriers between on-premises precision and cloud agility. Hyperscaler customers can now extend their AI workloads to the Cloud effortlessly—without compromising on speed, scale, or reliability. This collaboration accelerates time-to-insight and redefines what’s possible for AI innovation in the cloud.”
With Google Cloud Managed Lustre and DDN Infinia, customers now have access to a complete, scalable, high-performance AI data infrastructure. Whether you’re building large language models (LLMs), scaling generative AI, or enabling autonomous intelligence, DDN and Google Cloud provide the foundation for success, delivering the speed, efficiency, and scalability the most demanding AI and HPC workloads require.