Individuals and organisations increasingly rely on GPUs to meet their computational needs, and NVIDIA offers a broad range of GPUs to serve those workloads. Kubernetes has become popular with development teams because it offers a great platform for deploying containerized applications. However, many development teams also have existing Virtual Machine-based workloads that can't be easily containerized.
Introduction
Virtual machines (VMs) are a common way for teams to deploy and run their applications. But as teams move towards microservices-based architectures and DevOps practices, they need a more agile and scalable way to run their applications. Containers can provide that agility and scalability, and they can do so alongside the VMs that teams are already familiar with.
Teams that rely on existing virtual machine-based workloads cannot be expected to containerise their applications overnight. This blog explains how containers can be used alongside VMs to provide the benefits of both technologies.
KubeVirt technology is designed to address that issue. It helps development teams that have adopted Kubernetes but still run Virtual Machine-based workloads by providing a unified platform where developers can build, modify, and deploy applications in both Application Containers and Virtual Machines within a single environment.
This makes it easier for development teams to adopt Kubernetes, and containerizing virtualized workloads can provide several benefits. By bringing virtualized workloads directly into their Kubernetes-based development workflows, teams can decompose them into containers over time while still running the remaining virtualized components, gaining agility without sacrificing operational stability. Note that users who want to use the KubeVirt GPU device plugin from NVIDIA GPU Cloud need an NVIDIA GPU configured for vGPU or GPU pass-through.
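To make the pass-through prerequisite concrete, here is a minimal sketch (not part of the plugin itself) that checks which PCI devices on a Linux node are already bound to the vfio-pci driver. It only assumes the standard sysfs layout; the script name and output format are illustrative.

```python
import os

# Standard sysfs location for devices bound to the vfio-pci driver.
VFIO_DRIVER_PATH = "/sys/bus/pci/drivers/vfio-pci"

def vfio_bound_devices():
    """Return the PCI addresses (e.g. '0000:3b:00.0') currently bound to vfio-pci."""
    if not os.path.isdir(VFIO_DRIVER_PATH):
        return []  # vfio-pci module not loaded on this node
    return [
        entry
        for entry in os.listdir(VFIO_DRIVER_PATH)
        # Device entries are symlinks named after their PCI address (two colons).
        if entry.count(":") == 2 and os.path.islink(os.path.join(VFIO_DRIVER_PATH, entry))
    ]

if __name__ == "__main__":
    devices = vfio_bound_devices()
    if devices:
        print("PCI devices bound to vfio-pci:", ", ".join(devices))
    else:
        print("No devices bound to vfio-pci; configure GPU pass-through or vGPU first.")
```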
Why KubeVirt?
KubeVirt can be viewed as an extension of Kubernetes for managing traditional Virtual Machine workloads alongside container workloads. Previously, teams running both kinds of workloads had to maintain two separate stacks for compute and operations; with KubeVirt, those two stacks are reduced to one.
KubeVirt allows users to work with a single stack and operate on both containers and VMs. It can run on-premises alongside platforms such as oVirt and OpenStack, as well as on public clouds such as Amazon Web Services, GCP, and Microsoft Azure, and it is designed to work with the nodes of a Kubernetes cluster.
The KubeVirt GPU device plugin discovers the devices in its scope and then advertises them to the Kubelet. When a device is allocated, the plugin returns the PCI address of the GPU to Kubernetes.
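The sketch below illustrates this discovery step in simplified form: it finds NVIDIA PCI devices bound to vfio-pci and groups them by PCI device ID. The real plugin advertises each group to the Kubelet as an extended resource and hands back the PCI address of the allocated device; the resource-naming scheme shown here is illustrative only.

```python
import os
from collections import defaultdict

PCI_DEVICES = "/sys/bus/pci/devices"
NVIDIA_VENDOR_ID = "0x10de"  # PCI vendor ID for NVIDIA

def read_attr(pci_addr, attr):
    with open(os.path.join(PCI_DEVICES, pci_addr, attr)) as f:
        return f.read().strip()

def discover_nvidia_vfio_gpus():
    """Map an illustrative resource name -> list of PCI addresses bound to vfio-pci."""
    resources = defaultdict(list)
    for pci_addr in os.listdir(PCI_DEVICES):
        try:
            if read_attr(pci_addr, "vendor") != NVIDIA_VENDOR_ID:
                continue
            # The 'driver' symlink points at the bound driver, if any.
            driver = os.path.basename(
                os.readlink(os.path.join(PCI_DEVICES, pci_addr, "driver"))
            )
        except OSError:
            continue  # attribute missing or no driver bound
        if driver != "vfio-pci":
            continue
        device_id = read_attr(pci_addr, "device")  # PCI device ID, e.g. '0x1eb8'
        resources[f"nvidia.com/{device_id}"].append(pci_addr)
    return dict(resources)

if __name__ == "__main__":
    for name, addrs in discover_nvidia_vfio_gpus().items():
        print(name, "->", addrs)
```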
Features
The device plugin discovers the NVIDIA GPUs that are bound to the VFIO-PCI driver and exposes them to Virtual Machines in pass-through mode. In other words, it finds all such NVIDIA GPUs and vGPUs on the nodes of a Kubernetes cluster and exposes them so that they can be attached to a KubeVirt Virtual Machine.
The newly discovered GPUs and vGPUs are then registered with the Kubelet. When the Kubelet sends an allocation request, the plugin responds with the information needed to attach the device to the corresponding GPU node, virtual machine instance, or pod. Meanwhile, the device plugin regularly performs health checks on the GPUs across the cluster's nodes.
Once the Kubelet has recognised the new GPU or vGPU, it grants the allocation and the device becomes available to the Virtual Machine. Users can then run computationally intensive workloads, graphics applications, and other demanding processes on their GPUs.
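As a hedged example of the end of this flow, the snippet below creates a KubeVirt VirtualMachineInstance that requests one of the GPUs advertised by the device plugin, using the official kubernetes Python client. The GPU resource name and the demo disk image are placeholders; substitute the resource name your cluster actually advertises (visible via `kubectl describe node`).

```python
from kubernetes import client, config

vmi = {
    "apiVersion": "kubevirt.io/v1",
    "kind": "VirtualMachineInstance",
    "metadata": {"name": "gpu-vmi-demo"},
    "spec": {
        "domain": {
            "resources": {"requests": {"memory": "4Gi"}},
            "devices": {
                # Pass-through GPU request served by the KubeVirt GPU device plugin.
                "gpus": [
                    {"name": "gpu1", "deviceName": "nvidia.com/TU104GL_Tesla_T4"}  # placeholder
                ],
                "disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}],
            },
        },
        "volumes": [
            {
                "name": "rootdisk",
                # Small demo image; replace with your own disk or PVC.
                "containerDisk": {"image": "quay.io/kubevirt/cirros-container-disk-demo"},
            }
        ],
    },
}

config.load_kube_config()  # or config.load_incluster_config() inside a pod
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubevirt.io",
    version="v1",
    namespace="default",
    plural="virtualmachineinstances",
    body=vmi,
)
```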
The GPU pass-through architecture consists of several layers. At the bottom sit the physical GPUs. Above them, the VFIO-PCI driver and the NVIDIA vGPU Manager are installed to manage access to the hardware.
The NVIDIA KubeVirt GPU Device Plugin sits above the drivers and the vGPU manager, acting as a mediator between Kubernetes and the drivers. The KubeVirt pod is the topmost layer: it manages the workload and contains the QEMU container to which the GPU is attached.
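To show where the plugin sits in practice, here is a sketch of how it is typically deployed: a DaemonSet on every GPU node that mounts the Kubelet's device-plugin socket directory. The image name, tag, and namespace are illustrative assumptions; use the manifest from NVIDIA's kubevirt-gpu-device-plugin repository for a real deployment.

```python
from kubernetes import client, config

daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="nvidia-kubevirt-gpu-device-plugin"),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={"name": "kubevirt-gpu-dp"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"name": "kubevirt-gpu-dp"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="kubevirt-gpu-device-plugin",
                        image="nvidia/kubevirt-gpu-device-plugin:latest",  # placeholder image/tag
                        security_context=client.V1SecurityContext(privileged=True),
                        volume_mounts=[
                            client.V1VolumeMount(
                                name="device-plugin",
                                # Kubelet's device-plugin registration socket lives here.
                                mount_path="/var/lib/kubelet/device-plugins",
                            )
                        ],
                    )
                ],
                volumes=[
                    client.V1Volume(
                        name="device-plugin",
                        host_path=client.V1HostPathVolumeSource(
                            path="/var/lib/kubelet/device-plugins"
                        ),
                    )
                ],
            ),
        ),
    ),
)

config.load_kube_config()
client.AppsV1Api().create_namespaced_daemon_set(namespace="kube-system", body=daemonset)
```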
KubeVirt features used by NVIDIA
NVIDIA relies on several important KubeVirt features in its VM implementation. KubeVirt provides multi-interface support for VMs, and Multus and SR-IOV support are available through the device plugin. A brief illustrative fragment follows.
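The fragment below sketches the multi-interface support mentioned above: a VirtualMachineInstance with the default pod network plus an SR-IOV-backed secondary interface attached through Multus. The NetworkAttachmentDefinition name ("sriov-net") is a placeholder you would define separately.

```python
# Illustrative spec fragment for a KubeVirt VirtualMachineInstance with two NICs.
multi_nic_fragment = {
    "domain": {
        "devices": {
            "interfaces": [
                {"name": "default", "masquerade": {}},  # standard pod network
                {"name": "sriov-nic", "sriov": {}},     # SR-IOV backed secondary NIC
            ]
        }
    },
    "networks": [
        {"name": "default", "pod": {}},
        {"name": "sriov-nic", "multus": {"networkName": "sriov-net"}},  # placeholder NAD
    ],
}
```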
Conclusion
KubeVirt provides cloud-init support, virtual console access over VNC, and sidecar hooks for tweaking VM parameters. It also offers PVC/PV support, block PV support, and copy-on-write (CoW) disk support. In addition, it supports the features NVIDIA relies on, such as attaching a service account to a VM, node selection, taints and tolerations, and affinity.
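As a short sketch of how the scheduling and cloud-init features above appear in a VirtualMachineInstance spec (the field names are standard KubeVirt v1 API; the selector, toleration, and user-data values are placeholders):

```python
# Illustrative fragment of a VirtualMachineInstance spec.
vmi_spec_fragment = {
    "nodeSelector": {"gpu-model": "tesla-t4"},  # pin the VM to matching GPU nodes
    "tolerations": [
        {"key": "nvidia.com/gpu", "operator": "Exists", "effect": "NoSchedule"}
    ],
    "volumes": [
        {
            "name": "cloudinit",
            # Injects first-boot configuration into the guest; the volume must also
            # be referenced by a disk under domain.devices.disks.
            "cloudInitNoCloud": {
                "userData": "#cloud-config\npackages:\n  - qemu-guest-agent\n"
            },
        }
    ],
}
```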
E2E Cloud provides the flexibility of a local cloud backed by NVIDIA GPUs that can be used for AI-based applications.
For a Free Trial: https://bit.ly/freetrialcloud