=====================
VFIO device Migration
=====================

Migration of a virtual machine involves saving the state of each device that
the guest is running on the source host and restoring this saved state on the
destination host. This document details how saving and restoring of VFIO
devices is done in QEMU.

Migration of VFIO devices consists of two phases: the optional pre-copy phase,
and the stop-and-copy phase. The pre-copy phase is iterative and makes it
possible to accommodate VFIO devices that have a large amount of data to be
transferred. The iterative pre-copy phase allows the guest to continue running
while the VFIO device state is transferred to the destination, which helps to
reduce the total downtime of the VM. VFIO devices opt in to pre-copy support
by reporting the VFIO_MIGRATION_PRE_COPY flag in the
VFIO_DEVICE_FEATURE_MIGRATION ioctl.

When pre-copy is supported, it's possible to further reduce downtime by
enabling the "switchover-ack" migration capability.
The VFIO migration uAPI defines "initial bytes" as part of its pre-copy data
stream and recommends that the initial bytes are sent and loaded on the
destination before stopping the source VM. Enabling this migration capability
guarantees that, and thus can potentially reduce downtime even further.

To support migration of multiple devices that might do P2P transactions between
themselves, the VFIO migration uAPI defines an intermediate P2P quiescent
state. While in the P2P quiescent state, P2P DMA transactions cannot be
initiated by the device, but the device can respond to incoming ones.
Additionally, all outstanding P2P transactions are guaranteed to have been
completed by the time the device enters this state.

All the devices that support P2P migration are first transitioned to the P2P
quiescent state and only then are they stopped or started. This makes migration
safe P2P-wise, since starting and stopping the devices is not done atomically
for all the devices together.

Thus, migration of multiple VFIO devices is allowed only if all the devices
support P2P migration. Migration of a single VFIO device is allowed regardless
of P2P migration support.

A detailed description of the UAPI for VFIO device migration can be found in
the comment for the ``vfio_device_mig_state`` structure in the header file
linux-headers/linux/vfio.h.

VFIO implements the device hooks for the iterative approach as follows:

* A ``save_setup`` function that sets up migration on the source.

* A ``load_setup`` function that sets the VFIO device on the destination in
  _RESUMING state.

* A ``state_pending_estimate`` function that reports an estimate of the
  remaining pre-copy data that the vendor driver has yet to save for the VFIO
  device.

* A ``state_pending_exact`` function that reads pending_bytes from the vendor
  driver, which indicates the amount of data that the vendor driver has yet to
  save for the VFIO device.

* An ``is_active_iterate`` function that indicates ``save_live_iterate`` is
  active only when the VFIO device is in pre-copy states.

* A ``save_live_iterate`` function that reads the VFIO device's data from the
  vendor driver during the iterative pre-copy phase.

* A ``switchover_ack_needed`` function that checks if the VFIO device uses
  the "switchover-ack" migration capability when this capability is enabled.

* A ``save_state`` function to save the device config space if it is present.

* A ``save_live_complete_precopy`` function that sets the VFIO device in
  _STOP_COPY state and iteratively copies the data for the VFIO device until
  the vendor driver indicates that no data remains.

* A ``load_state`` function that loads the config section and the data
  sections that are generated by the save functions above.

* ``cleanup`` functions for both save and load that perform any migration
  related cleanup.


The VFIO migration code uses a VM state change handler to change the VFIO
device state when the VM state changes from running to not-running, and
vice versa.

Similarly, a migration state change handler is used to trigger a transition of
the VFIO device state when certain changes of the migration state occur. For
example, the VFIO device state is transitioned back to _RUNNING in case a
migration failed or was canceled.

System memory dirty pages tracking
----------------------------------

A ``log_global_start`` and ``log_global_stop`` memory listener callback informs
the VFIO dirty tracking module to start and stop dirty page tracking. A
``log_sync`` memory listener callback queries the dirty page bitmap from the
dirty tracking module and marks system memory pages which were DMA-ed by the
VFIO device as dirty. The dirty page bitmap is queried per container.

Currently there are two ways dirty page tracking can be done:
(1) Device dirty tracking:
In this method the device is responsible for logging and reporting its DMAs.
This method can be used only if the device is capable of tracking its DMAs.
Discovering device capability, starting and stopping dirty tracking, and
syncing the dirty bitmaps from the device are done using the DMA logging uAPI.
More info about the uAPI can be found in the comments of the
``vfio_device_feature_dma_logging_control`` and
``vfio_device_feature_dma_logging_report`` structures in the header file
linux-headers/linux/vfio.h.

(2) VFIO IOMMU module:
In this method dirty tracking is done by the IOMMU. However, there is currently
no IOMMU support for dirty page tracking. For this reason, all pages are
perpetually marked dirty, unless the device driver pins pages through external
APIs, in which case only those pinned pages are perpetually marked dirty.

If the above two methods are not supported, all pages are perpetually marked
dirty by QEMU.

By default, dirty pages are tracked during the pre-copy as well as the
stop-and-copy phase, so a page marked as dirty will be copied to the
destination in both phases. Copying dirty pages in the pre-copy phase helps
QEMU predict whether it can achieve its downtime tolerances: if QEMU keeps
finding dirty pages continuously during the pre-copy phase, it is likely to
keep finding dirty pages in the stop-and-copy phase as well, and it can
predict the downtime accordingly.

QEMU also provides a per-device opt-out option ``pre-copy-dirty-page-tracking``
which disables querying the dirty bitmap during the pre-copy phase. If it is
set to off, all dirty pages will be copied to the destination in the
stop-and-copy phase only.
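
For example, assuming a vfio-pci device assigned at a hypothetical host PCI
address, the opt-out option can be set on the QEMU command line like this:

```
# Disable dirty bitmap queries during pre-copy for one assigned device;
# its dirty pages are then only transferred in the stop-and-copy phase.
# (The host address 0000:65:00.0 is a placeholder.)
qemu-system-x86_64 ... \
    -device vfio-pci,host=0000:65:00.0,pre-copy-dirty-page-tracking=off
```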

System memory dirty pages tracking when vIOMMU is enabled
---------------------------------------------------------

With vIOMMU, an IO virtual address range can get unmapped while in the pre-copy
phase of migration. In that case, the unmap ioctl returns any dirty pages in
that range and QEMU reports the corresponding guest physical pages as dirty.
During the stop-and-copy phase, an IOMMU notifier is used to get a callback for
mapped pages, and then the dirty pages bitmap is fetched from the VFIO IOMMU
module for those mapped ranges. If device dirty tracking is enabled with
vIOMMU, live migration will be blocked.

Flow of state changes during Live migration
===========================================

Below is the state change flow during live migration for a VFIO device that
supports both pre-copy and P2P migration. The flow for devices that don't
support them is similar, except that the relevant states for pre-copy and P2P
are skipped.
The values in the parentheses represent the VM state, the migration state, and
the VFIO device state, respectively.

Live migration save path
------------------------

::

                         QEMU normal running state
                         (RUNNING, _NONE, _RUNNING)
                                     |
                   migrate_init spawns migration_thread
           Migration thread then calls each device's .save_setup()
                        (RUNNING, _SETUP, _PRE_COPY)
                                     |
                       (RUNNING, _ACTIVE, _PRE_COPY)
   If device is active, get pending_bytes by .state_pending_{estimate,exact}()
       If total pending_bytes >= threshold_size, call .save_live_iterate()
               Data of VFIO device for pre-copy phase is copied
      Iterate till total pending bytes converge and are less than threshold
                                     |
      On migration completion, the vCPUs and the VFIO device are stopped
            The VFIO device is first put in P2P quiescent state
                  (FINISH_MIGRATE, _ACTIVE, _PRE_COPY_P2P)
                                     |
             Then the VFIO device is put in _STOP_COPY state
                   (FINISH_MIGRATE, _ACTIVE, _STOP_COPY)
        .save_live_complete_precopy() is called for each active device
       For the VFIO device, iterate in .save_live_complete_precopy() until
                             pending data is 0
                                     |
                   (POSTMIGRATE, _COMPLETED, _STOP_COPY)
           Migration thread schedules cleanup bottom half and exits
                                     |
                        .save_cleanup() is called
                     (POSTMIGRATE, _COMPLETED, _STOP)

Live migration resume path
--------------------------

::

            Incoming migration calls .load_setup() for each device
                       (RESTORE_VM, _ACTIVE, _STOP)
                                    |
     For each device, .load_state() is called for that device section data
                     (RESTORE_VM, _ACTIVE, _RESUMING)
                                    |
   At the end, .load_cleanup() is called for each device and vCPUs are started
            The VFIO device is first put in P2P quiescent state
                      (RUNNING, _ACTIVE, _RUNNING_P2P)
                                    |
                        (RUNNING, _NONE, _RUNNING)

Postcopy
========

Postcopy migration is currently not supported for VFIO devices.