NVIDIA Blog · 24 Mar
Advancing Open Source AI, NVIDIA Donates Dynamic Resource Allocation Driver for GPUs to Kubernetes Community
NVIDIA has donated its Dynamic Resource Allocation (DRA) Driver for GPUs to the Cloud Native Computing Foundation (CNCF), announced at KubeCon Europe in London. This moves the driver from vendor governance to full community ownership within the Kubernetes ecosystem.
The donation aims to help developers manage high-performance AI infrastructure with greater transparency and efficiency. By open-sourcing this critical software, NVIDIA encourages wider collaboration and innovation aligned with modern cloud-native practices.
The DRA Driver delivers several key benefits, including smarter GPU sharing through Multi-Process Service (MPS) and Multi-Instance GPU (MIG) technologies. It also supports massive scale with Multi-Node NVLink interconnects, essential for training AI models on Grace Blackwell systems.
Developers gain the flexibility to reconfigure hardware resources on the fly, along with precise controls for requesting specific compute capacity or memory configurations. This replaces workflows that historically demanded static, cluster-wide GPU configuration and significant manual effort.
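With DRA, a workload describes the device it needs in a ResourceClaim rather than requesting an opaque extended resource. A minimal sketch of what such a request can look like is below, assuming a cluster with the DRA feature enabled and the NVIDIA DRA driver installed; the `gpu.nvidia.com` device class name is the one published by NVIDIA's driver, and the pod and template names are illustrative.

```yaml
# Hypothetical example: a template that claims one NVIDIA GPU per pod.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com   # device class from the NVIDIA DRA driver
---
# A pod that references the claim instead of using nvidia.com/gpu limits.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      claims:
      - name: gpu                         # binds the container to the claim below
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
```

Because the claim is a structured object, the scheduler can match it against device attributes (for example, a specific MIG profile or memory size) instead of treating every GPU as interchangeable.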
NVIDIA is collaborating with major industry players including AWS, Broadcom, Canonical, Google Cloud, Microsoft, Nutanix, Red Hat, and SUSE. The partnership underscores open source as foundational to enterprise AI strategy.
Additionally, NVIDIA introduced GPU support for Kata Containers through the CNCF Confidential Containers community. This enables confidential computing for AI workloads with enhanced data protection through stronger isolation.
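In practice, routing a pod into a Kata virtual machine is done with a RuntimeClass. A minimal sketch follows, assuming a cluster where kata-deploy has installed a GPU-enabled Kata runtime; the runtime class name `kata-qemu-nvidia-gpu` matches the one shipped by kata-deploy but should be verified against your installation, and the pod name is illustrative.

```yaml
# Hypothetical example: run a GPU workload inside a Kata Containers VM
# for stronger isolation of model weights and data in use.
apiVersion: v1
kind: Pod
metadata:
  name: confidential-gpu-pod
spec:
  runtimeClassName: kata-qemu-nvidia-gpu  # GPU-enabled Kata runtime from kata-deploy
  containers:
  - name: cuda
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1                 # GPU passed through to the Kata VM
```

The same pattern extends to confidential variants (for example, TDX- or SNP-backed runtime classes) where the hardware supports them.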
The donation is part of NVIDIA's broader open source initiatives, including NVSentinel for GPU fault remediation, AI Cluster Runtime, the NemoClaw reference stack, and the OpenShell runtime for autonomous agents.