
Providing documentation feedback

To report an error or to improve our documentation, log in to your Red Hat Jira account and submit a Jira issue.

About Red Hat OpenShift Virtualization

With Red Hat OpenShift Virtualization, you can bring traditional virtual machines (VMs) into OpenShift Container Platform and run them alongside containers. In OpenShift Virtualization, VMs are native Kubernetes objects that you can manage by using the OpenShift Container Platform web console or the command line.

OpenShift Virtualization is represented by the OpenShift Virtualization icon.

You can use OpenShift Virtualization with either the OVN-Kubernetes or the OpenShift SDN default Container Network Interface (CNI) network provider.

Prepare your cluster for OpenShift Virtualization.

OpenShift Virtualization supported cluster version

OpenShift Virtualization 4.16 is supported for use on OpenShift Container Platform 4.16 clusters. To use the latest z-stream release of OpenShift Virtualization, you must first upgrade to the latest version of OpenShift Container Platform.

Supported guest operating systems

Microsoft Windows SVVP certification

OpenShift Virtualization is certified in Microsoft’s Windows Server Virtualization Validation Program (SVVP) to run Windows Server workloads.

The SVVP certification applies to:

  • Red Hat Enterprise Linux CoreOS workers. In the Microsoft SVVP Catalog, they are named Red Hat OpenShift Container Platform 4 on RHEL CoreOS 9.

  • Intel and AMD CPUs.

Quick starts

Quick start tours are available for several OpenShift Virtualization features. To view the tours, click the Help icon ? in the menu bar on the header of the OpenShift Container Platform web console and then select Quick Starts. You can filter the available tours by entering the keyword virtualization in the Filter field.

New and changed features

This release adds new features and enhancements related to the following components and concepts:

Installation and update

  • After upgrading to OpenShift Virtualization 4.16, data volumes that were previously removed through garbage collection might be recreated. This behavior is expected. You can ignore the recreated data volumes because data volume garbage collection is now disabled.

Virtualization

  • Windows 10 VMs now boot using UEFI with TPM.

Networking

  • You can now access a VM that is connected to the default internal pod network on a stable fully qualified domain name (FQDN) by using headless services.
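
A headless service supplies the DNS record: you create a service with clusterIP: None whose selector matches a label on the VM, and set spec.template.spec.subdomain in the VM to the service name. The following is a minimal sketch; the service name (headless), namespace (vm-project), label (expose: fqdn), and SSH port are assumptions for illustration. The guest then resolves at <vm-name>.headless.vm-project.svc.cluster.local.

    apiVersion: v1
    kind: Service
    metadata:
      name: headless              # assumed name; must match the VM's subdomain field
      namespace: vm-project       # assumed namespace
    spec:
      clusterIP: None             # headless: DNS resolves directly to the pod IP
      selector:
        expose: fqdn              # assumed label; set the same label on the VM template
      ports:
      - protocol: TCP
        port: 22                  # example port (SSH); use the ports your workload needs
        targetPort: 22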

Web console

  • Hot plugging virtual machines is now generally available. If a virtual machine cannot be hot plugged, the condition RestartRequired is applied to the VM. You can view this condition in the Diagnostics tab of the web console.

  • You can now select sysprep options when you create a Microsoft Windows VM from an instance type. Previously, you had to set the sysprep options by customizing the VM after its creation.
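
The sysprep answer files themselves are supplied through a config map with autounattend.xml and unattend.xml keys, which the VM consumes as a sysprep volume. The following is a minimal sketch; the name and namespace are assumptions, and the answer file content is elided.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: sysprep-config        # assumed name; referenced by the VM's sysprep volume
      namespace: vm-project       # assumed namespace
    data:
      autounattend.xml: |         # applied during the Windows installation pass
        <?xml version="1.0" encoding="utf-8"?>
        <unattend xmlns="urn:schemas-microsoft-com:unattend">
          <!-- answer file settings elided -->
        </unattend>
      unattend.xml: |             # applied on first boot after installation
        <?xml version="1.0" encoding="utf-8"?>
        <unattend xmlns="urn:schemas-microsoft-com:unattend">
          <!-- answer file settings elided -->
        </unattend>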

Monitoring

Notable technical changes

  • VMs require a minimum of 1 GiB of allocated memory to enable memory hotplug. If a VM has less than 1 GiB of allocated memory, then memory hotplug is disabled.
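
For illustration, the following fragment of a VirtualMachine spec keeps memory hotplug available. The VM name and the maxGuest ceiling are assumptions, and unrelated fields such as disks and volumes are elided.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: example-vm            # assumed name
    spec:
      template:
        spec:
          domain:
            memory:
              guest: 2Gi          # at least 1 GiB, so memory hotplug stays enabled
              maxGuest: 8Gi       # assumed ceiling for hotplugged memory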

Deprecated and removed features

Deprecated features

Deprecated features are included in the current release and remain supported. However, deprecated features will be removed in a future release and are not recommended for new deployments.

  • The tekton-tasks-operator is deprecated, and Tekton tasks and example pipelines are now deployed by the ssp-operator.

  • The copy-template, modify-vm-template, and create-vm-from-template tasks are deprecated.

  • Support for Windows Server 2012 R2 templates is deprecated.

  • The alerts KubeVirtComponentExceedsRequestedMemory and KubeVirtComponentExceedsRequestedCPU are deprecated. You can safely silence them.

Removed features

Removed features are not supported in the current release.

Technology Preview features

Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. For the support scope of these features, see Technology Preview Features Support Scope on the Red Hat Customer Portal.

  • Cluster admins can now enable CPU resource limits on a namespace in the OpenShift Container Platform web console under Overview → Settings → Preview features.

  • Cluster admins can now use the wasp-agent tool to configure a higher VM workload density in their clusters by overcommitting the amount of memory (RAM) and assigning swap resources to VM workloads.

  • OpenShift Virtualization now supports compatibility with Red Hat OpenShift Data Foundation (ODF) Regional Disaster Recovery.

Known issues

Monitoring

  • The Pod Disruption Budget (PDB) prevents pod disruptions for migratable virtual machine images. If the PDB detects pod disruption, then openshift-monitoring sends a PodDisruptionBudgetAtLimit alert every 60 minutes for virtual machine images that use the LiveMigrate eviction strategy. (CNV-33834)

Nodes

  • Uninstalling OpenShift Virtualization does not remove the feature.node.kubevirt.io node labels created by OpenShift Virtualization. You must remove the labels manually. (CNV-38543)

  • In a heterogeneous cluster with different compute nodes, virtual machines that have HyperV reenlightenment enabled cannot be scheduled on nodes that do not support time stamp counter (TSC) scaling or that do not have the appropriate TSC frequency. (BZ#2151169)

Storage

  • If you use Portworx as your storage solution on AWS and create a VM disk image, the created image might be smaller than expected due to the filesystem overhead being accounted for twice. (CNV-40217)

    • As a workaround, you can manually expand the persistent volume claim (PVC) to increase the available space after the initial provisioning process completes, as shown in the sketch after this list.

  • In some instances, multiple virtual machines can mount the same PVC in read-write mode, which might result in data corruption. (CNV-13500)

    • As a workaround, avoid using a single PVC in read-write mode with multiple VMs.

  • If you clone more than 100 VMs using the csi-clone cloning strategy, then the Ceph CSI might not purge the clones. Manually deleting the clones might also fail. (CNV-23501)

    • As a workaround, you can restart the ceph-mgr to purge the VM clones.
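
Expanding a PVC, as in the Portworx workaround above, means raising the requested storage on the claim; the storage class must allow volume expansion. The following is a minimal sketch, in which the claim name, namespace, storage class, and new size are assumptions.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: vm-disk               # assumed name of the undersized claim
      namespace: vm-project       # assumed namespace
    spec:
      accessModes:
      - ReadWriteOnce
      storageClassName: portworx-sc   # assumed; must set allowVolumeExpansion: true
      resources:
        requests:
          storage: 35Gi           # raised from the original request to add headroom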

Virtualization

  • When adding a virtual Trusted Platform Module (vTPM) device to a Windows VM, the BitLocker Drive Encryption system check passes even if the vTPM device is not persistent. This is because a vTPM device that is not persistent stores and recovers encryption keys using ephemeral storage for the lifetime of the virt-launcher pod. When the VM migrates or is shut down and restarts, the vTPM data is lost. (CNV-36448)

  • OpenShift Virtualization links a service account token in use by a pod to that specific pod. OpenShift Virtualization implements a service account volume by creating a disk image that contains a token. If you migrate a VM, then the service account volume becomes invalid. (CNV-33835)

    • As a workaround, use user accounts rather than service accounts because user account tokens are not bound to a specific pod.

  • With the release of the RHSA-2023:3722 advisory, the TLS Extended Master Secret (EMS) extension (RFC 7627) is mandatory for TLS 1.2 connections on FIPS-enabled Red Hat Enterprise Linux (RHEL) 9 systems. This is in accordance with FIPS-140-3 requirements. TLS 1.3 is not affected.

    Legacy OpenSSL clients that do not support EMS or TLS 1.3 now cannot connect to FIPS servers running on RHEL 9. Similarly, RHEL 9 clients in FIPS mode cannot connect to servers that only support TLS 1.2 without EMS. In practice, this means that these clients cannot connect to servers on RHEL 6, RHEL 7, and non-RHEL legacy operating systems, because the legacy 1.0.x versions of OpenSSL do not support EMS or TLS 1.3. For more information, see TLS Extension "Extended Master Secret" enforced with Red Hat Enterprise Linux 9.2.

    • As a workaround, update legacy OpenSSL clients to a version that supports TLS 1.3, and configure OpenShift Virtualization to use TLS 1.3 with the Modern TLS security profile type for FIPS mode.
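
One way to apply the Modern profile is through the tlsSecurityProfile field of the HyperConverged custom resource, which restricts OpenShift Virtualization components to TLS 1.3. The following is a minimal sketch; the resource name and namespace shown are the usual defaults, but verify them for your deployment.

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged   # default name of the HyperConverged CR
      namespace: openshift-cnv        # default OpenShift Virtualization namespace
    spec:
      tlsSecurityProfile:
        type: Modern                  # Modern permits only TLS 1.3
        modern: {}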

Web console

  • When you first deploy an OpenShift Container Platform cluster, creating VMs from templates or instance types by using the web console fails if you do not have cluster-admin permissions.

    • As a workaround, the cluster administrator must first create a config map to enable other users to use templates and instance types to create VMs. (CNV-38284)

  • When you create a persistent volume claim (PVC) by selecting With Data upload form from the Create PersistentVolumeClaim list in the web console, uploading data to the PVC by using the Upload Data field fails. (CNV-37607)