

AI and HPC workloads require fast, scalable, and secure storage. Traditional storage architectures create performance bottlenecks that slow model training and large-scale simulations.
By leveraging high-speed, parallel data access, organizations can significantly accelerate data processing, streamline AI workflows, and reduce costs while preserving data integrity and security.
✔ Scales to accommodate growing AI and HPC demands
✔ Direct GPU access, reducing CPU overhead
✔ Policy-based access controls
✔ Interoperability with AI frameworks and cloud environments
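The policy-based access controls mentioned above are expressed as S3-style bucket policies in an S3-compatible store. A minimal sketch, assuming a hypothetical bucket name `ai-training-data`; the policy denies any request not sent over TLS, a standard hardening pattern using the `aws:SecureTransport` condition key:

```python
import json

BUCKET = "ai-training-data"  # hypothetical bucket name

# Deny every request that does not arrive over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

policy_json = json.dumps(policy)
# Applied with any S3-compatible client, e.g.:
#   s3.put_bucket_policy(Bucket=BUCKET, Policy=policy_json)
```

Encryption and compliance rules (for example, requiring server-side encryption headers on upload) follow the same policy structure with additional condition keys.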
The diagram shows a data-processing solution built from NVIDIA GPUs, CPUs, a switching layer, and Cloudian storage:
1. The CPU sends control commands to Cloudian storage via the S3 API to initiate data access.
2. Data from Cloudian storage bypasses the CPU and is transferred directly to the NVIDIA GPU through the switching layer.
3. The transfer uses RDMA (Remote Direct Memory Access) for high-speed, low-latency parallel data movement.
4. The NVIDIA GPU processes the data, leveraging its parallel computing power for tasks such as AI training or simulations.
5. Processed data is written back to Cloudian storage via the switching layer, with the CPU coordinating through S3 API commands.
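The direct data path above depends on RDMA and GPUDirect hardware, but the access pattern it accelerates — splitting one large object into byte ranges fetched in parallel — can be sketched in plain Python. This is an illustrative stand-in, not Cloudian's implementation: `parallel_read` reads from an in-memory `bytes` object where a real client would issue one S3 ranged GET (`Range: bytes=start-end`) per chunk:

```python
from concurrent.futures import ThreadPoolExecutor

def split_ranges(size: int, chunk: int):
    """Split an object of `size` bytes into (start, end) byte ranges of
    at most `chunk` bytes each; `end` is inclusive, HTTP/S3-style."""
    return [(s, min(s + chunk, size) - 1) for s in range(0, size, chunk)]

def parallel_read(blob: bytes, chunk: int, workers: int = 4) -> bytes:
    """Fetch all ranges of `blob` concurrently and reassemble in order.
    `blob` stands in for a stored object; a real client would issue a
    ranged GET for each (start, end) pair."""
    ranges = split_ranges(len(blob), chunk)

    def fetch_range(r):
        start, end = r
        return blob[start:end + 1]  # inclusive end, like a Range header

    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = list(pool.map(fetch_range, ranges))
    return b"".join(parts)

data = bytes(range(256)) * 100  # 25,600-byte stand-in object
assert parallel_read(data, chunk=4096) == data
```

With GPUDirect, the reassembly happens in GPU memory rather than host memory, which is what removes the CPU copy from the path.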
Cloudian HyperStore (Scalable S3-Compatible Object Storage)
NVIDIA GPUDirect for High-Speed Data Transfer
✔ 200 GB/s+ throughput with GPU-accelerated direct storage access.
✔ Eliminates CPU bottlenecks, improving AI training efficiency.
✔ Scale from TBs to exabytes with a fully distributed object storage system.
✔ Handles massive AI datasets for training, inference, and simulations.
✔ Access control, encryption, and compliance-ready policies to protect sensitive AI datasets.
✔ Prevents unauthorized access and ensures regulatory compliance.
✔ Native support for TensorFlow, PyTorch, RAPIDS, Spark, and Dask.
✔ Compatible with Kubernetes & hybrid cloud environments for scalable AI workflows.
✔ Supports large-scale AI model training across multiple nodes.
✔ Parallel processing for real-time analytics and data streaming.
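Moving datasets at this scale into object storage is typically done with S3 multipart uploads, which cap an upload at 10,000 parts with a 5 MiB minimum part size (limits an S3-compatible store is assumed to follow). A small sketch with a hypothetical helper that picks a part size large enough to fit a given object under the part-count limit:

```python
MIB = 1024 * 1024
MIN_PART = 5 * MIB    # S3 minimum part size (all parts except the last)
MAX_PARTS = 10_000    # S3 maximum number of parts per upload

def choose_part_size(object_size: int, target_part: int = 64 * MIB) -> int:
    """Pick a multipart-upload part size for `object_size` bytes:
    at least the S3 minimum, at least the target, and large enough
    that the whole upload fits within 10,000 parts."""
    part = max(MIN_PART, target_part)
    # Double the part size until the part count fits under the limit.
    while object_size > part * MAX_PARTS:
        part *= 2
    return part

# A 2 TiB dataset shard needs 256 MiB parts (8,192 parts total).
assert choose_part_size(2 * 1024**4) == 256 * MIB
```

The chosen size would then drive `upload_part` calls (or a client's managed transfer), with parts uploaded in parallel for throughput.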
Challenge: A genomics research lab needed high-speed storage for petabytes of sequencing data.
Solution: Integrated Cloudian HyperStore with NVIDIA GPUDirect for AI-optimized data pipelines.
Results: 50% faster AI processing and 40% lower storage costs.
Challenge: A self-driving car company needed fast, scalable storage to handle large LiDAR and video datasets.
Solution: Used Cloudian HyperStore with GPUDirect for low-latency, parallel data access across AI nodes.
Results: AI training speed increased by 60%, reducing time to deployment.
© 2025 CSC-JSC. All Rights Reserved.