### About NVIDIA Dynamo
NVIDIA Dynamo is an open-source platform for efficient, scalable inference of large language and reasoning models in distributed GPU environments. Our team tackles some of the hardest problems in distributed AI infrastructure, and we are looking for engineers excited to build the next generation of scalable AI systems.
### Role Overview
As a Principal Software Engineer on the Dynamo project, you will tackle complex challenges in distributed inference, including:
- **Dynamo k8s Serving Platform**: Build the Kubernetes deployment and workload management stack for Dynamo to facilitate inference deployments at scale.
- **Scalability & Reliability**: Develop robust, production-grade inference workload management systems that scale from a handful to thousands of GPUs.
- **Disaggregated Serving**: Architect and optimize the separation of prefill and decode phases across distinct GPU clusters to improve throughput and resource utilization.
- **Dynamic GPU Scheduling**: Develop Planner algorithms for real-time allocation and rebalancing of GPU resources based on fluctuating workloads.
- **Intelligent Routing**: Enhance the smart routing system so inference requests are directed to the GPU worker replicas best positioned to serve them.
- **Distributed KV Cache Management**: Innovate in the management and transfer of large KV caches across heterogeneous memory and storage hierarchies.
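As a loose illustration of the routing problem above, here is a minimal Python sketch of a cache-aware router that prefers the worker replica holding the longest cached prefix of the incoming prompt, breaking ties by load. All names, the block-hashing scheme, and the scoring rule are illustrative assumptions for this sketch, not Dynamo's actual algorithm or API.

```python
# Toy KV-cache-aware router sketch (illustrative only, not Dynamo's API).
# Each worker advertises hashes of the prompt-token blocks it has cached;
# the router picks the worker with the longest matching cached prefix,
# breaking ties by fewest active requests.

from dataclasses import dataclass, field

BLOCK_SIZE = 4  # tokens per KV block; value chosen for the example


@dataclass
class Worker:
    name: str
    cached_blocks: set = field(default_factory=set)  # hashes of cached blocks
    active_requests: int = 0


def block_hashes(tokens):
    """Hash each full block together with its preceding prefix, so a hit on
    block i implies the whole prefix up to block i is cached."""
    hashes, prefix = [], ()
    usable = len(tokens) - len(tokens) % BLOCK_SIZE
    for i in range(0, usable, BLOCK_SIZE):
        prefix = prefix + tuple(tokens[i:i + BLOCK_SIZE])
        hashes.append(hash(prefix))
    return hashes


def route(workers, tokens):
    """Return the worker with the most cached prefix blocks; prefer the
    least-loaded worker on ties."""
    req_blocks = block_hashes(tokens)

    def score(worker):
        overlap = 0
        for h in req_blocks:  # prefix blocks must match in order
            if h not in worker.cached_blocks:
                break
            overlap += 1
        return (overlap, -worker.active_requests)

    return max(workers, key=score)
```

A real system would also weigh queue depth, decode capacity, and cache eviction, but the core trade-off the Planner and router navigate, reusing cached prefill work versus balancing load, is visible even in this toy.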
### Responsibilities
- Collaborate on the design and development of the Dynamo Kubernetes stack.
- Introduce new features to the Dynamo Python SDK and Rust Runtime Core Library.
- Design, implement, and optimize distributed inference components in Rust and Python.
- Contribute to open-source repositories, participate in code reviews, and assist with issue triage on GitHub.
- Write clear documentation and contribute to user and developer guides.
### What We Need
We are looking for passionate engineers who are ready to tackle high-impact challenges in AI systems.