'virt' Generic Virtual Platform (``virt``)
==========================================

The ``virt`` board is a platform which does not correspond to any real hardware;
it is designed for use in virtual machines. It is the recommended board type
if you simply want to run a guest such as Linux and do not care about
reproducing the idiosyncrasies and limitations of a particular bit of
real-world hardware.

Supported devices
-----------------

The ``virt`` machine supports the following devices:

* Up to 512 generic RV32GC/RV64GC cores, with optional extensions
* Core Local Interruptor (CLINT)
* Platform-Level Interrupt Controller (PLIC)
* CFI parallel NOR flash memory
* 1 NS16550 compatible UART
* 1 Google Goldfish RTC
* 1 SiFive Test device
* 8 virtio-mmio transport devices
* 1 generic PCIe host bridge
* The fw_cfg device that allows a guest to obtain data from QEMU

The hypervisor extension is enabled for the default CPU, so virtual machines
that use the hypervisor extension can be run without enabling it explicitly
on the command line.

Hardware configuration information
----------------------------------

The ``virt`` machine automatically generates a device tree blob ("dtb")
which it passes to the guest, if there is no ``-dtb`` option. This provides
information about the addresses, interrupt lines and other configuration of
the various devices in the system. Guest software should discover the devices
that are present in the generated DTB.

If users want to provide their own DTB, they can use the ``-dtb`` option.
Such DTBs should meet the following requirements:

* The number of subnodes of the /cpus node should match QEMU's ``-smp`` option
* The /memory reg size should match QEMU's selected ram_size via ``-m``
* It should contain a node for the CLINT device with a compatible string
  "riscv,clint0" if used with OpenSBI BIOS images

Boot options
------------

The ``virt`` machine can boot using the standard ``-kernel`` functionality
to load a Linux kernel, a VxWorks kernel or an S-mode U-Boot bootloader,
with the default OpenSBI firmware image as the ``-bios``. It also supports
the recommended RISC-V bootflow, in which U-Boot SPL (M-mode) loads OpenSBI
fw_dynamic firmware and U-Boot proper (S-mode), using the standard ``-bios``
functionality.

Using flash devices
-------------------

By default, the first flash device (pflash0) is expected to contain
S-mode firmware code. It can be configured as read-only, with the
second flash device (pflash1) available to store configuration data.

For example, booting edk2 looks like:

.. code-block:: bash

   $ qemu-system-riscv64 \
        -blockdev node-name=pflash0,driver=file,read-only=on,filename=<edk2_code> \
        -blockdev node-name=pflash1,driver=file,filename=<edk2_vars> \
        -M virt,pflash0=pflash0,pflash1=pflash1 \
        ... other args ....

For TCG guests only, it is also possible to boot M-mode firmware from
the first flash device (pflash0) by additionally passing ``-bios none``,
as in:

.. code-block:: bash

   $ qemu-system-riscv64 \
        -bios none \
        -blockdev node-name=pflash0,driver=file,read-only=on,filename=<m_mode_code> \
        -M virt,pflash0=pflash0 \
        ... other args ....

Firmware images used for pflash must be exactly 32 MiB in size.
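If a firmware binary is smaller than that, it can be padded to the required
size before use. A minimal sketch using coreutils ``truncate``, assuming
hypothetical image files ``edk2_code.raw`` and ``edk2_vars.raw``:

.. code-block:: bash

   $ truncate -s 32M edk2_code.raw   # pad the firmware code image to 32 MiB
   $ truncate -s 32M edk2_vars.raw   # create or pad the variable store to 32 MiB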
riscv-iommu support
-------------------

The board supports the riscv-iommu-pci device, which can be added with the
following command line:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -device riscv-iommu-pci (...)

Refer to :ref:`riscv-iommu` for more information on how the RISC-V IOMMU
support works.

Machine-specific options
------------------------

The following machine-specific options are supported:

- aclint=[on|off]

  When this option is "on", ACLINT devices will be emulated instead of
  the SiFive CLINT. When not specified, this option is assumed to be "off".
  This option is restricted to the TCG accelerator.

- acpi=[on|off|auto]

  When this option is "on" (which is the default), ACPI tables are generated
  and exposed as firmware tables etc/acpi/rsdp and etc/acpi/tables.

- aia=[none|aplic|aplic-imsic]

  This option allows selecting the interrupt controller defined by the AIA
  (advanced interrupt architecture) specification. "aia=aplic" selects the
  APLIC (advanced platform-level interrupt controller) to handle wired
  interrupts, whereas "aia=aplic-imsic" selects the APLIC and the IMSIC
  (incoming message-signaled interrupt controller) to handle both wired
  interrupts and MSIs (see the example after this list). When not specified,
  this option is assumed to be "none", which selects the SiFive PLIC to
  handle wired interrupts.

- aia-guests=nnn

  The number of per-HART VS-level AIA IMSIC pages to be emulated for a guest
  having AIA IMSIC (i.e. "aia=aplic-imsic" selected). When not specified,
  the default number of per-HART VS-level AIA IMSIC pages is 0.
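For example, a minimal sketch of starting the ``virt`` machine with the APLIC
and IMSIC handling interrupts and two VS-level IMSIC pages per HART (the
remaining arguments are whatever the guest needs):

.. code-block:: bash

   $ qemu-system-riscv64 -M virt,aia=aplic-imsic,aia-guests=2 \
        ... other args ....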
Running Linux kernel
--------------------

The Linux mainline v5.12 release was tested at the time of writing. To build
a Linux mainline kernel that can be booted by the ``virt`` machine in
64-bit mode, simply configure the kernel using the defconfig configuration:

.. code-block:: bash

   $ export ARCH=riscv
   $ export CROSS_COMPILE=riscv64-linux-
   $ make defconfig
   $ make

To boot the newly built Linux kernel in QEMU with the ``virt`` machine:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
        -display none -serial stdio \
        -kernel arch/riscv/boot/Image \
        -initrd /path/to/rootfs.cpio \
        -append "root=/dev/ram"

To build a Linux mainline kernel that can be booted by the ``virt`` machine
in 32-bit mode, use the rv32_defconfig configuration. A patch is required to
fix the 32-bit boot issue for Linux kernel v5.12.

.. code-block:: bash

   $ export ARCH=riscv
   $ export CROSS_COMPILE=riscv64-linux-
   $ curl https://patchwork.kernel.org/project/linux-riscv/patch/20210627135117.28641-1-bmeng.cn@gmail.com/mbox/ > riscv.patch
   $ git am riscv.patch
   $ make rv32_defconfig
   $ make

Replace ``qemu-system-riscv64`` with ``qemu-system-riscv32`` in the command
line above to boot the 32-bit Linux kernel. A rootfs image containing 32-bit
applications must be used for the kernel to boot to user space.

Running U-Boot
--------------

The U-Boot mainline v2021.04 release was tested at the time of writing. To
build an S-mode U-Boot bootloader that can be booted by the ``virt`` machine,
use the qemu-riscv64_smode_defconfig with similar commands as described above
for Linux:

.. code-block:: bash

   $ export CROSS_COMPILE=riscv64-linux-
   $ make qemu-riscv64_smode_defconfig

Boot the 64-bit U-Boot S-mode image directly:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
        -display none -serial stdio \
        -kernel /path/to/u-boot.bin

To test booting U-Boot SPL, which runs in M-mode and in turn loads a FIT
image that bundles OpenSBI fw_dynamic firmware and U-Boot proper (S-mode)
together, build the U-Boot images using qemu-riscv64_spl_defconfig:

.. code-block:: bash

   $ export CROSS_COMPILE=riscv64-linux-
   $ export OPENSBI=/path/to/opensbi-riscv64-generic-fw_dynamic.bin
   $ make qemu-riscv64_spl_defconfig

The minimal QEMU commands to run U-Boot SPL are:

.. code-block:: bash

   $ qemu-system-riscv64 -M virt -smp 4 -m 2G \
        -display none -serial stdio \
        -bios /path/to/u-boot-spl \
        -device loader,file=/path/to/u-boot.itb,addr=0x80200000

To test 32-bit U-Boot images, switch to the qemu-riscv32_smode_defconfig and
qemu-riscv32_spl_defconfig builds, and replace ``qemu-system-riscv64`` with
``qemu-system-riscv32`` in the command lines above to boot the 32-bit U-Boot.

Enabling TPM
------------

A TPM device can be connected to the virt board by following the steps below.

First launch the TPM emulator:

.. code-block:: bash

   $ swtpm socket --tpm2 -t -d --tpmstate dir=/tmp/tpm \
        --ctrl type=unixio,path=swtpm-sock

Then launch QEMU with some additional arguments to link a TPM device to the
backend:

.. code-block:: bash

   $ qemu-system-riscv64 \
        ... other args .... \
        -chardev socket,id=chrtpm,path=swtpm-sock \
        -tpmdev emulator,id=tpm0,chardev=chrtpm \
        -device tpm-tis-device,tpmdev=tpm0

The TPM device can be seen in the memory tree and in the generated device
tree, and should be accessible from the guest software.
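As a quick check, a sketch of verifying the device from inside a booted
guest, assuming the guest Linux kernel was built with TPM TIS driver support
(e.g. ``CONFIG_TCG_TIS``):

.. code-block:: bash

   # Run inside the guest; assumes TPM drivers are enabled in the guest kernel.
   $ dmesg | grep -i tpm   # look for TPM probe messages
   $ ls /dev/tpm*          # character device(s) exposed by the kernel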