Xen HVM guest support
=====================


Description
-----------

KVM has support for hosting Xen guests, intercepting Xen hypercalls and event
channel (Xen PV interrupt) delivery. This allows guests which expect to be
run under Xen to be hosted in QEMU under Linux/KVM instead.

Using the split irqchip is mandatory for Xen support.

Setup
-----

Xen mode is enabled by setting the ``xen-version`` property of the KVM
accelerator, for example for Xen 4.17:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split

Additionally, virtual APIC support can be advertised to the guest through the
``xen-vapic`` CPU flag:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split --cpu host,+xen-vapic

When Xen support is enabled, QEMU changes hypervisor identification (CPUID
0x40000000..0x4000000A) to Xen. The KVM identification and features are not
advertised to a Xen guest. If Hyper-V is also enabled, the Xen identification
moves to leaves 0x40000100..0x4000010A.

Properties
----------

The following properties exist on the KVM accelerator object:

``xen-version``
  This property contains the Xen version in ``XENVER_version`` form, with the
  major version in the top 16 bits and the minor version in the low 16 bits.
  Setting this property enables the Xen guest support. If Xen version 4.5 or
  greater is specified, the HVM leaf in Xen CPUID is populated. Xen version
  4.6 enables the vCPU ID in CPUID, and version 4.17 advertises vCPU upcall
  vector support to the guest.

``xen-evtchn-max-pirq``
  Xen PIRQs represent an emulated physical interrupt, either GSI or MSI, which
  can be routed to an event channel instead of to the emulated I/O or local
  APIC. By default, QEMU permits only 256 PIRQs because this allows maximum
  compatibility with 32-bit MSI where the higher bits of the PIRQ# would need
  to be in the upper 64 bits of the MSI message. For guests with large numbers
  of PCI devices (and none which are limited to 32-bit addressing) it may be
  desirable to increase this value.

``xen-gnttab-max-frames``
  Xen grant tables are the means by which a Xen guest grants access to its
  memory for PV back ends (disk, network, etc.). Since QEMU only supports v1
  grant tables which are 8 bytes in size, each page (each frame) of the grant
  table can reference 512 pages of guest memory. The default number of frames
  is 64, allowing for 32768 pages of guest memory to be accessed by PV backends
  through simultaneous grants. For guests with large numbers of PV devices and
  high throughput, it may be desirable to increase this value.
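
Both limits can be raised on the same ``-accel`` option that enables Xen mode.
The following invocation is only a sketch: the values shown are illustrative and
should be sized to the guest's actual device count and throughput requirements:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split,xen-evtchn-max-pirq=1024,xen-gnttab-max-frames=128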

Xen paravirtual devices
-----------------------

The Xen PCI platform device is enabled automatically for a Xen guest. This
allows a guest to unplug all emulated devices, in order to use paravirtual
block and network drivers instead.

Those paravirtual Xen block, network (and console) devices can be created
through the command line, and/or hot-plugged.

To provide a Xen console device, define a character device and then a device
of type ``xen-console`` to connect to it. For the Xen console equivalent of
the handy ``-serial mon:stdio`` option, for example:

.. parsed-literal::
   -chardev stdio,mux=on,id=char0,signal=off -mon char0 \\
   -device xen-console,chardev=char0

The Xen network device is ``xen-net-device``, which becomes the default NIC
model for emulated Xen guests, meaning that just the default NIC provided
by QEMU should automatically work and present a Xen network device to the
guest.

Disks can be configured with '``-drive file=${GUEST_IMAGE},if=xen``' and will
appear to the guest as ``xvda`` onwards.

Under Xen, the boot disk is typically available both via IDE emulation, and
as a PV block device. Guest bootloaders typically use IDE to load the guest
kernel, which then unplugs the IDE and continues with the Xen PV block device.

This configuration can be achieved as follows:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split \\
       -drive file=${GUEST_IMAGE},if=xen \\
       -drive file=${GUEST_IMAGE},file.locking=off,if=ide

VirtIO devices can also be used; Linux guests may need to be dissuaded from
unplugging them by adding '``xen_emul_unplug=never``' on their command line.

Booting Xen PV guests
---------------------

Booting PV guest kernels is possible by using the Xen PV shim (a version of Xen
itself, designed to run inside a Xen HVM guest and provide memory management
services for one guest alone).

The Xen binary is provided as the ``-kernel`` and the guest kernel itself (or
PV Grub image) as the ``-initrd`` image, which actually just means the first
multiboot "module". For example:

.. parsed-literal::

  |qemu_system| --accel kvm,xen-version=0x40011,kernel-irqchip=split \\
       -chardev stdio,id=char0 -device xen-console,chardev=char0 \\
       -display none -m 1G -kernel xen -initrd bzImage \\
       -append "pv-shim console=xen,pv -- console=hvc0 root=/dev/xvda1" \\
       -drive file=${GUEST_IMAGE},if=xen

The Xen image must be built with the ``CONFIG_XEN_GUEST`` and ``CONFIG_PV_SHIM``
options, and as of Xen 4.17, Xen's PV shim mode does not support using a serial
port; it must have a Xen console or it will panic.

The example above provides the guest kernel command line after a separator
(" ``--`` ") on the Xen command line, and does not provide the guest kernel
with an actual initramfs, which would need to be listed as a second multiboot
module. For more complicated alternatives, see the command line
documentation for the ``-initrd`` option.

Host OS requirements
--------------------

The minimal Xen support in the KVM accelerator requires the host to be running
Linux v5.12 or newer. Later versions add optimisations: Linux v5.17 added
acceleration of interrupt delivery via the Xen PIRQ mechanism, and Linux v5.19
accelerated Xen PV timers and inter-processor interrupts (IPIs).
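
As a quick sanity check, the host kernel version and the ``CONFIG_KVM_XEN``
build option (which gates KVM's Xen support) can be inspected from a shell.
This is only a sketch: the path to the kernel config file is
distribution-dependent, and ``/proc/config.gz`` may be available instead.

.. parsed-literal::

   # Sketch: verify the host kernel is v5.12 or newer and was built with
   # KVM's Xen support enabled. The config path below is an assumption.
   uname -r
   grep CONFIG_KVM_XEN /boot/config-$(uname -r)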