NVIDIA Releases Dynamo v0.9.0: A Massive Infrastructure Overhaul Featuring FlashIndexer, Multi-Modal Support, and the Removal of NATS and etcd
NVIDIA has just released Dynamo v0.9.0, the most significant infrastructure upgrade for the distributed inference framework to date. The update simplifies how large-scale models are deployed and managed, with a focus on removing heavy dependencies and improving how GPUs handle multi-modal data.
The Great Simplification: Removing NATS and etcd
The biggest change in v0.9.0 is the removal of NATS and etcd. In previous versions, these tools handled messaging and service discovery, but they added an ‘operational tax’ by requiring developers to manage extra clusters.
NVIDIA replaced these with a new Event Plane and a Discovery Plane. The system now uses ZMQ (ZeroMQ) for high-performance transport and MessagePack for data serialization. For teams using Kubernetes, Dynamo now supports Kubernetes-native service discovery. This change makes the infrastructure leaner and easier to maintain in production environments.
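To make the new transport concrete, here is a minimal sketch of the ZMQ pub/sub plus MessagePack pattern the Event Plane builds on. The topic name, event fields, and addresses are invented for illustration, and this is not Dynamo's actual API (the sketch needs `pip install pyzmq msgpack`):

```python
import time
import msgpack
import zmq

ctx = zmq.Context()

# Publisher side: e.g. a prefill worker emitting KV-cache events.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5555")

# Subscriber side: e.g. a router consuming those events.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5555")
sub.setsockopt(zmq.SUBSCRIBE, b"kv_events")
time.sleep(0.2)  # let the subscription propagate (ZMQ's "slow joiner" quirk)

# Serialize a small event with MessagePack and publish it on a topic.
event = {"worker_id": "prefill-0", "kind": "kv_block_stored", "block_hash": "abc123"}
pub.send_multipart([b"kv_events", msgpack.packb(event)])

topic, payload = sub.recv_multipart()
print(topic.decode(), msgpack.unpackb(payload))
```

The appeal of this pairing is that both pieces are brokerless and lightweight: there is no separate messaging cluster to deploy or operate, just sockets between workers.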
Multi-Modal Support and the E/P/D Split
Dynamo v0.9.0 expands multi-modal support across its three main backends: vLLM, SGLang, and TensorRT-LLM. This allows models to process text, images, and video more efficiently.
A key feature in this update is the E/P/D (Encode/Prefill/Decode) split. In standard setups, a single GPU often handles all three stages, which can cause bottlenecks during heavy video or image processing. v0.9.0 introduces Encoder Disaggregation: you can now run the Encoder on a separate set of GPUs from the Prefill and Decode workers, and scale each pool to the specific needs of your model.
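The sketch below illustrates the disaggregation pattern with asyncio queues standing in for GPU worker pools. In Dynamo the stages run on separate GPUs and exchange data over the network; all names and pool sizes here are invented:

```python
import asyncio

async def encoder(q_in: asyncio.Queue, q_out: asyncio.Queue):
    # Stand-in for a vision encoder (e.g. a ViT forward pass) on its own GPU pool.
    while True:
        req = await q_in.get()
        req["embeddings"] = f"embeddings({req['image']})"
        await q_out.put(req)
        q_in.task_done()

async def prefill_decode(q_in: asyncio.Queue):
    # Stand-in for an LLM worker handling the Prefill and Decode stages.
    while True:
        req = await q_in.get()
        print(f"request {req['id']}: prefill+decode with {req['embeddings']}")
        q_in.task_done()

async def main():
    encode_q, llm_q = asyncio.Queue(), asyncio.Queue()
    # Scale the pools independently, e.g. four encoders feeding two LLM workers.
    tasks = [asyncio.create_task(encoder(encode_q, llm_q)) for _ in range(4)]
    tasks += [asyncio.create_task(prefill_decode(llm_q)) for _ in range(2)]
    for i in range(8):
        encode_q.put_nowait({"id": i, "image": f"img_{i}.png"})
    await encode_q.join()  # all images encoded and handed off...
    await llm_q.join()     # ...and all requests generated
    for t in tasks:
        t.cancel()

asyncio.run(main())
```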
Sneak Preview: FlashIndexer
This release includes a sneak preview of FlashIndexer. This component is designed to solve latency issues in distributed KV cache management.
When working with large context windows, moving Key-Value (KV) data between GPUs is a slow process. FlashIndexer improves how the system indexes and retrieves these cached tokens. This results in a lower Time to First Token (TTFT). While still a preview, it represents a major step toward making distributed inference feel as fast as local inference.
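NVIDIA has not published FlashIndexer's internals, but distributed KV cache reuse generally rests on block-hash indexing: hash fixed-size token blocks so that identical prefixes produce identical hashes, then map each hash to the workers holding that cached block. The sketch below shows that general technique, not FlashIndexer itself:

```python
import hashlib

BLOCK_SIZE = 16  # tokens per KV block (illustrative; real engines use their own sizes)

def block_hashes(tokens: list[int]) -> list[bytes]:
    """Chain-hash fixed-size token blocks so each hash identifies a full prefix."""
    hashes, prev = [], b""
    for i in range(0, len(tokens) - len(tokens) % BLOCK_SIZE, BLOCK_SIZE):
        prev = hashlib.sha256(prev + str(tokens[i:i + BLOCK_SIZE]).encode()).digest()
        hashes.append(prev)
    return hashes

# The index maps a block hash to the set of workers holding that cached KV block.
index: dict[bytes, set[str]] = {}

def register(worker: str, tokens: list[int]) -> None:
    for h in block_hashes(tokens):
        index.setdefault(h, set()).add(worker)

def longest_cached_prefix(tokens: list[int]) -> tuple[int, set[str]]:
    """Return (matched block count, candidate workers) for a new request."""
    matched, workers = 0, set()
    for h in block_hashes(tokens):
        if h not in index:
            break
        matched, workers = matched + 1, index[h]
    return matched, workers

register("worker-0", list(range(64)))          # worker-0 cached a 64-token prefix
print(longest_cached_prefix(list(range(80))))  # -> (4, {'worker-0'}): 4 blocks reusable
```

Routing a request to a worker that already holds its longest cached prefix is what cuts TTFT: the prefill stage only has to compute the blocks past the match.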
Smart Routing and Load Estimation
Managing traffic across hundreds of GPUs is difficult. Dynamo v0.9.0 introduces a smarter Planner that uses predictive load estimation.
The system uses a Kalman filter to predict the future load of a request based on past performance. It also supports routing hints from the Kubernetes Gateway API Inference Extension (GAIE). This allows the network layer to communicate directly with the inference engine. If a specific GPU group is overloaded, the system can route new requests to idle workers with higher precision.
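In its simplest scalar form, the idea looks like the sketch below: keep a running load estimate per worker, widen its uncertainty a little every tick, and pull it toward each noisy measurement in proportion to the Kalman gain. The parameter values are illustrative, not Dynamo's:

```python
class LoadEstimator:
    """Minimal 1-D Kalman filter for smoothing noisy per-worker load samples."""

    def __init__(self, process_var: float = 1e-2, measurement_var: float = 0.5):
        self.x = 0.0              # estimated load (e.g. active tokens or KV blocks)
        self.p = 1.0              # uncertainty of the estimate
        self.q = process_var      # how fast we expect the true load to drift
        self.r = measurement_var  # how noisy individual measurements are

    def update(self, measured_load: float) -> float:
        self.p += self.q                        # predict: uncertainty grows each tick
        k = self.p / (self.p + self.r)          # Kalman gain: trust in the new sample
        self.x += k * (measured_load - self.x)  # correct the estimate toward it
        self.p *= 1.0 - k                       # uncertainty shrinks after the update
        return self.x

est = LoadEstimator()
for sample in [10, 40, 35, 90, 60]:  # noisy load samples from one worker
    print(round(est.update(sample), 1))
```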
The Technical Stack at a Glance
The v0.9.0 release updates several core components, including the supported backends and their companion libraries, to their latest stable versions.
The inclusion of the dynamo-tokens crate, written in Rust, keeps token handling fast. For data transfer between GPUs, Dynamo continues to leverage NIXL (NVIDIA Inference Xfer Library) for RDMA-based communication.
Key Takeaways
- Infrastructure Decoupling (Goodbye NATS and etcd): The release completes the modernization of the communication architecture. By replacing NATS and etcd with a new Event Plane (using ZMQ and MessagePack) and Kubernetes-native service discovery, the system removes the ‘operational tax’ of managing external clusters.
- Full Multi-Modal Disaggregation (E/P/D Split): Dynamo now supports a complete Encode/Prefill/Decode (E/P/D) split across all three backends (vLLM, SGLang, and TRT-LLM). This allows you to run vision or video encoders on separate GPUs, preventing compute-heavy encoding tasks from bottlenecking the text generation process.
- FlashIndexer Preview for Lower Latency: The sneak preview of FlashIndexer introduces a specialized component to optimize distributed KV cache management. It is designed to make the indexing and retrieval of conversation ‘memory’ significantly faster, further reducing the Time to First Token (TTFT).
- Smarter Scheduling with Kalman Filters: The system now uses predictive load estimation powered by Kalman filters. This allows the Planner to forecast GPU load more accurately and handle traffic spikes proactively, supported by routing hints from the Kubernetes Gateway API Inference Extension (GAIE).
Check out the GitHub release.

