NVIDIA announced that it has entered into a definitive agreement to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider, to help customers make more efficient use of their AI computing resources.
Omri Geller, Run:ai co-founder and CEO, said, “Run:ai has been a close collaborator with NVIDIA since 2020, and we share a passion for helping our customers make the most of their infrastructure. We’re thrilled to join NVIDIA and look forward to continuing our journey together.” Read the official announcement of the NVIDIA and Run:ai acquisition here.
Also Read: How NVIDIA and Google Cloud Will Empower Startups With AI Innovation
Why did NVIDIA need to acquire Run:ai?
- Increasingly Complex Workloads: Customer AI deployments are becoming increasingly complex, with workloads distributed across cloud, edge, and on-premises data center infrastructure.
- Sophisticated Scheduling Needs: Managing and orchestrating generative AI, recommender systems, search engines, and other workloads requires sophisticated scheduling to optimize performance at the system level and on the underlying infrastructure.
What are the unique features of Run:ai?
- Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on-premises, in the cloud, or hybrid environments.
- Run:ai has built an open platform on Kubernetes, the orchestration layer for modern AI and cloud infrastructure. It supports all popular Kubernetes variants and integrates with third-party AI tools and frameworks.
- Run:ai customers include some of the world’s largest enterprises across multiple industries, which use the Run:ai platform to manage data-center-scale GPU clusters.
How does Run:ai help AI developers?
- It provides a centralized interface to manage shared compute infrastructure, enabling easier and faster access for complex AI workloads.
- Run:ai also provides functionality to add users, group them into teams, grant access to cluster resources, and control quotas, priorities, and pools, as well as to monitor and report on resource use.
- It can pool GPUs and share computing power — from fractions of GPUs to multiple GPUs or multiple nodes of GPUs running on different clusters — for separate tasks.
- Run:ai enables efficient GPU cluster resource utilization, helping customers gain more from their computing investments.
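The pooling and fractional-sharing behavior described above can be illustrated with a toy allocator. This is a minimal sketch under stated assumptions — the class, method names, and accounting logic here are hypothetical illustrations, not Run:ai's actual API or scheduler:

```python
from dataclasses import dataclass, field

@dataclass
class GPUPool:
    """Toy model of a shared GPU pool granting whole or fractional
    GPU shares to jobs (hypothetical, for illustration only)."""
    total_gpus: float                       # e.g. 2.0 whole GPUs in the pool
    allocations: dict = field(default_factory=dict)

    def available(self) -> float:
        """Capacity not yet granted to any job."""
        return self.total_gpus - sum(self.allocations.values())

    def request(self, job: str, gpus: float) -> bool:
        """Grant a job a share of the pool if capacity allows."""
        if 0 < gpus <= self.available():
            self.allocations[job] = gpus
            return True
        return False                        # job must wait for a release

    def release(self, job: str) -> None:
        """Return a finished job's share to the pool."""
        self.allocations.pop(job, None)

pool = GPUPool(total_gpus=2.0)
pool.request("inference-a", 0.5)            # fractional GPU share
pool.request("training-b", 1.5)             # takes the remainder
print(pool.available())                     # 0.0 — pool fully allocated
print(pool.request("batch-c", 0.25))        # False — no capacity left
```

The point of the sketch is the accounting: because shares are tracked as fractions of pooled capacity rather than whole devices, several small jobs can occupy one physical GPU, which is the utilization gain the bullet points describe.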
Also Read: Georgia Tech and NVIDIA Launch First AI Makerspace Innovation for Students
How will NVIDIA benefit from the Run:ai acquisition?
- NVIDIA will continue to offer Run:ai’s products under the same business model for the immediate future.
- NVIDIA will continue to invest in the Run:ai product roadmap as part of NVIDIA DGX Cloud, an AI platform co-engineered with leading clouds for enterprise developers, offering an integrated, full-stack service optimized for generative AI.
- NVIDIA DGX and DGX Cloud customers will gain access to Run:ai’s capabilities for their AI workloads, particularly for large language model deployments. Run:ai’s solutions are already integrated with NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Base Command, NGC containers, and NVIDIA AI Enterprise software, among other products.
- NVIDIA’s accelerated computing platform and Run:ai’s platform will continue to support a broad ecosystem of third-party solutions, giving customers choice and flexibility.
Also Read: What is NVIDIA Hopper-based Gen AI with the Power of TensorRT-LLM?
With Run:ai, NVIDIA will enable customers to have a single fabric that accesses GPU solutions anywhere. Customers can expect to benefit from better GPU utilization, improved management of GPU infrastructure, and greater flexibility from the open architecture.