===================
Migration framework
===================

QEMU has code to load/save the state of the guest that it is running.
These are two complementary operations.  Saving the state just does
that: it saves the state of each device that the guest is running.
Restoring a guest is the opposite operation: we need to load the
state of each device.

For this to work, QEMU has to be launched with the same arguments both
times; i.e. it can only restore the state into a guest that has the
same devices as the one whose state was saved (this last requirement
can be relaxed a bit, but for now we can consider that the
configuration has to be exactly the same).

Once we are able to save/restore a guest, a new piece of functionality
follows: migration.  This means that a QEMU instance started on one
machine can be "migrated", i.e. moved, to another machine.

Next came "live migration".  This is important because some guests run
with a lot of state (especially RAM), and it can take a while to move
all that state from one machine to another.  Live migration allows the
guest to continue running while the state is transferred; the guest
only has to be stopped while the last part of the state is
transferred.  Typically the time that the guest is unresponsive during
live migration is in the low hundreds of milliseconds (note that this
depends on a lot of things).

.. contents::

Transports
==========

The migration stream is normally just a byte stream that can be passed
over any transport.

- tcp migration: do the migration using tcp sockets
- unix migration: do the migration using unix sockets
- exec migration: do the migration using the stdin/stdout of a process
- fd migration: do the migration using a file descriptor that is
  passed to QEMU.  QEMU doesn't care how this file descriptor is opened.

In addition, support is included for migration using RDMA, which
transports the page data using ``RDMA``, where the hardware takes care of
transporting the pages, and the load on the CPU is much lower.  While the
internals of RDMA migration are a bit different, this isn't really visible
outside the RAM migration code.

All these migration protocols use the same infrastructure to
save/restore device state.  This infrastructure is shared with the
savevm/loadvm functionality.

Common infrastructure
=====================

The files, sockets or fd's that carry the migration stream are abstracted by
the ``QEMUFile`` type (see ``migration/qemu-file.h``).  In most cases this
is connected to a subtype of ``QIOChannel`` (see ``io/``).


Saving the state of one device
==============================

For most devices, the state is saved in a single call to the migration
infrastructure; these are *non-iterative* devices.  The data for these
devices is sent at the end of precopy migration, when the CPUs are paused.
There are also *iterative* devices, which contain a very large amount of
data (e.g. RAM or large tables).  See the iterative device section below.

General advice for device developers
------------------------------------

- The migration state saved should reflect the device being modelled rather
  than the way your implementation works.  That way if you change the
  implementation later the migration stream will stay compatible.  That model
  may include internal state that's not directly visible in a register.

- When saving a migration stream the device code may walk and check
  the state of the device.  These checks might fail in various ways (e.g.
  discovering internal state is corrupt or that the guest has done something
  bad).  Consider carefully before asserting/aborting at this point, since the
  normal response from users is that *migration broke their VM*, because it
  had apparently been running fine until then.  In these error cases, the
  device should log a message indicating the cause of the error, and should
  consider putting the device into an error state, allowing the rest of the
  VM to continue execution.

- The migration might happen at an inconvenient point,
  e.g. right in the middle of the guest reprogramming the device, during
  guest reboot or shutdown, or while the device is waiting for external IO.
  It's strongly preferred that migrations do not fail in this situation,
  since in the cloud environment migrations might happen automatically to
  VMs that the administrator doesn't directly control.

- If you do need to fail a migration, ensure that sufficient information
  is logged to identify what went wrong.

- The destination should treat an incoming migration stream as hostile
  (which we do to varying degrees in the existing code).  Check that offsets
  into buffers and the like can't cause overruns.  Fail the incoming migration
  in the case of a corrupted stream like this.

- Take care with internal device state or behaviour that might become
  migration version dependent.  For example, the order of PCI capabilities
  is required to stay constant across migration.  Another example would
  be that a special case handled by subsections (see below) might become
  much more common if a default behaviour is changed.

- The state of the source should not be changed or destroyed by the
  outgoing migration.  Migrations timing out or being failed by
  higher levels of management, or failures of the destination host, are
  not unusual, and in that case the VM is restarted on the source.
  Note that the management layer can validly revert the migration
  even after the QEMU level of migration has succeeded, as long as it
  does so before starting execution on the destination.

- Buses and devices should be able to explicitly specify addresses when
  instantiated, and management tools should use those.  For example,
  when hot-adding USB devices it's important to specify the ports
  and addresses, since implicit ordering based on the command line order
  may be different on the destination.  This can result in the
  device state being loaded into the wrong device.

VMState
-------

Most device data can be described using the ``VMSTATE`` macros (mostly defined
in ``include/migration/vmstate.h``).

An example (from hw/input/pckbd.c)

.. code:: c

    static const VMStateDescription vmstate_kbd = {
        .name = "pckbd",
        .version_id = 3,
        .minimum_version_id = 3,
        .fields = (const VMStateField[]) {
            VMSTATE_UINT8(write_cmd, KBDState),
            VMSTATE_UINT8(status, KBDState),
            VMSTATE_UINT8(mode, KBDState),
            VMSTATE_UINT8(pending, KBDState),
            VMSTATE_END_OF_LIST()
        }
    };

We are declaring the state with name "pckbd".  The ``version_id`` is
3, and there are 4 uint8_t fields in the KBDState structure.  We
register this ``VMStateDescription`` with one of the following
functions.  The first one will generate a device ``instance_id``
different for each registration.  Use the second one if you already
have an id that is different for each instance of the device:

.. code:: c

    vmstate_register_any(NULL, &vmstate_kbd, s);
    vmstate_register(NULL, instance_id, &vmstate_kbd, s);

For devices that are ``qdev`` based, we can register the device in the class
init function:

.. code:: c

    dc->vmsd = &vmstate_kbd_isa;

The VMState macros take care of ensuring that the device data section
is formatted portably (normally big endian) and make some compile time checks
against the types of the fields in the structures.

VMState macros can include other VMStateDescriptions to store substructures
(see ``VMSTATE_STRUCT_``), arrays (``VMSTATE_ARRAY_``) and variable length
arrays (``VMSTATE_VARRAY_``).  Various other macros exist for special
cases.

Note that the format on the wire is still very raw; i.e. a VMSTATE_UINT32
ends up with a 4 byte big endian representation on the wire; in the future
it might be possible to use a more structured format.

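As an illustration, here is a minimal sketch of a description combining a
substructure and a fixed-size array; the names ``FooState``,
``FooTimerState`` and ``vmstate_foo_timer`` are hypothetical and only serve
to show how the macros fit together:

.. code:: c

    /* Hypothetical device state; the names below are illustrative only. */
    static const VMStateDescription vmstate_foo = {
        .name = "foo",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (const VMStateField[]) {
            /* a substructure described by its own VMStateDescription */
            VMSTATE_STRUCT(timer, FooState, 1, vmstate_foo_timer, FooTimerState),
            /* a fixed-size array of 16 uint32_t registers */
            VMSTATE_UINT32_ARRAY(regs, FooState, 16),
            VMSTATE_END_OF_LIST()
        }
    };
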
Legacy way
----------

This way is going to disappear as soon as all current users are ported to
VMSTATE; converting existing code can be tricky, however, so 'soon' is
relative.

Each device has to register two functions, one to save the state and
another to load the state back.

.. code:: c

    int register_savevm_live(const char *idstr,
                             int instance_id,
                             int version_id,
                             SaveVMHandlers *ops,
                             void *opaque);

Two functions in the ``ops`` structure are the ``save_state``
and ``load_state`` functions.  Notice that ``load_state`` receives a
``version_id`` parameter to know what state format it is receiving.
``save_state`` doesn't have a ``version_id`` parameter because it always
uses the latest version.

Note that because the VMState macros still save the data in a raw
format, in many cases it's possible to replace legacy code
with a carefully constructed VMState description that matches the
byte layout of the existing code.

Changing migration data structures
----------------------------------

When we migrate a device, we save/load the state as a series
of fields.  Sometimes, due to bugs or new functionality, we need to
change the state to store more/different information.  Changing the migration
state saved for a device can break migration compatibility unless
care is taken to use the appropriate techniques.  In general QEMU tries
to maintain forward migration compatibility (i.e. migrating from
QEMU n->n+1) and there are users who benefit from backward compatibility
as well.

Subsections
-----------

The most common structure change is adding new data, e.g. when adding
a newer form of device, or adding state that you previously
forgot to migrate.  This is best solved using a subsection.

A subsection is "like" a device vmstate, but with a particularity: it
has a Boolean function that tells whether its values need to be sent
or not.  If this function returns false, the subsection is not sent.
Subsections have a unique name that is looked for on the receiving
side.

On the receiving side, if we find a subsection for a device that we
don't understand, we just fail the migration.  If we understand all
the subsections, then we load the state successfully.  There's no check
that a subsection is loaded, so a newer QEMU that knows about a subsection
can (with care) load a stream from an older QEMU that didn't send
the subsection.

If the new data is only needed in a rare case, then the subsection
can be made conditional on that case and the migration will still
succeed to older QEMUs in most cases.  This is OK for data that's
critical, but in some use cases it's preferred that the migration
should succeed even with the data missing.  To support this the
subsection can be connected to a device property and from there
to a versioned machine type.

The 'pre_load' and 'post_load' functions on subsections are only
called if the subsection is loaded.

One important note is that the outer post_load() function is called "after"
loading all subsections, because a newer subsection could change the same
value that it uses.  A flag, and the combination of the outer pre_load and
post_load, can be used to detect whether a subsection was loaded, and to
fall back on default behaviour when the subsection isn't present.

Example:

.. code:: c

    static bool ide_drive_pio_state_needed(void *opaque)
    {
        IDEState *s = opaque;

        return ((s->status & DRQ_STAT) != 0)
            || (s->bus->error_status & BM_STATUS_PIO_RETRY);
    }

    const VMStateDescription vmstate_ide_drive_pio_state = {
        .name = "ide_drive/pio_state",
        .version_id = 1,
        .minimum_version_id = 1,
        .pre_save = ide_drive_pio_pre_save,
        .post_load = ide_drive_pio_post_load,
        .needed = ide_drive_pio_state_needed,
        .fields = (const VMStateField[]) {
            VMSTATE_INT32(req_nb_sectors, IDEState),
            VMSTATE_VARRAY_INT32(io_buffer, IDEState, io_buffer_total_len, 1,
                                 vmstate_info_uint8, uint8_t),
            VMSTATE_INT32(cur_io_buffer_offset, IDEState),
            VMSTATE_INT32(cur_io_buffer_len, IDEState),
            VMSTATE_UINT8(end_transfer_fn_idx, IDEState),
            VMSTATE_INT32(elementary_transfer_size, IDEState),
            VMSTATE_INT32(packet_transfer_size, IDEState),
            VMSTATE_END_OF_LIST()
        }
    };

    const VMStateDescription vmstate_ide_drive = {
        .name = "ide_drive",
        .version_id = 3,
        .minimum_version_id = 0,
        .post_load = ide_drive_post_load,
        .fields = (const VMStateField[]) {
            .... several fields ....
            VMSTATE_END_OF_LIST()
        },
        .subsections = (const VMStateDescription * const []) {
            &vmstate_ide_drive_pio_state,
            NULL
        }
    };

Here we have a subsection for the pio state.  We only need to
save/send this state when we are in the middle of a pio operation
(that is what ``ide_drive_pio_state_needed()`` checks).  If DRQ_STAT is
not enabled, the values in those fields are garbage and don't need to
be sent.

Connecting subsections to properties
------------------------------------

Using a condition function that checks a 'property' to determine whether
to send a subsection allows backward migration compatibility when
new subsections are added, especially when combined with versioned
machine types.

For example:

  a) Add a new property using ``DEFINE_PROP_BOOL`` - e.g. support-foo and
     default it to true.
  b) Add an entry to the ``hw_compat_`` array for the previous version that
     sets the property to false.
  c) Add a static bool support_foo function that tests the property.
  d) Add a subsection with a .needed set to the support_foo function.
  e) (potentially) Add an outer pre_load that sets up a default value
     for 'foo' to be used if the subsection isn't loaded.

Now that subsection will not be generated when using an older
machine type and the migration stream will be accepted by older
QEMU versions.

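The following is a rough sketch of steps (a)-(d) for a hypothetical device;
``FooState``, ``support_foo``, ``foo-device`` and the field names are
illustrative, not existing QEMU code, and the ``hw_compat_`` fragment only
shows the shape of a GlobalProperty entry (check ``hw/core/machine.c`` for
the array of your target release):

.. code:: c

    /* (a) boolean property, defaulting to true on new machine types */
    DEFINE_PROP_BOOL("support-foo", FooState, support_foo, true),

    /* (b) hw_compat_ entry (GlobalProperty: driver, property, value) for
     *     the last release that must not send the subsection */
    { "foo-device", "support-foo", "false" },

    /* (c) + (d) the .needed function tests the property */
    static bool foo_support_needed(void *opaque)
    {
        FooState *s = opaque;

        return s->support_foo;
    }

    static const VMStateDescription vmstate_foo_extra = {
        .name = "foo/extra-state",
        .version_id = 1,
        .minimum_version_id = 1,
        .needed = foo_support_needed,
        .fields = (const VMStateField[]) {
            VMSTATE_UINT32(extra_reg, FooState),
            VMSTATE_END_OF_LIST()
        }
    };
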
Not sending existing elements
-----------------------------

Sometimes members of the VMState are no longer needed:

  - removing them will break migration compatibility

  - making them version dependent and bumping the version will break backward
    migration compatibility.

Adding a dummy field into the migration stream is normally the best way to
preserve compatibility.

If the field really does need to be removed then:

  a) Add a new property/compatibility/function in the same way as for
     subsections above.
  b) Replace the VMSTATE macro with the _TEST version of the macro, e.g.:

     ``VMSTATE_UINT32(foo, barstruct)``

     becomes

     ``VMSTATE_UINT32_TEST(foo, barstruct, pre_version_baz)``

     Sometime in the future when we no longer care about the ancient versions
     these can be killed off.  Note that for backward compatibility it's
     important to fill in the structure with data that the destination will
     understand.

Any difference in the predicates on the source and destination will end up
with different fields being enabled and data being loaded into the wrong
fields; for this reason conditional fields like this are very fragile.

Versions
--------

Version numbers are intended for major incompatible changes to the
migration of a device, and using them breaks backward-migration
compatibility; in general most changes can be made by adding subsections
or _TEST macros (see above), which won't break compatibility.

Each version is associated with a series of fields saved.  ``save_state``
always saves the state as the newest version.  ``load_state``, however, is
sometimes able to load state from an older version.

You can see that there are two version fields:

- ``version_id``: the maximum version_id supported by VMState for that device.
- ``minimum_version_id``: the minimum version_id that VMState is able to
  understand for that device.

VMState is able to read versions from minimum_version_id to version_id.

There are *_V* forms of many ``VMSTATE_`` macros that load a field only for
certain versions, e.g.

.. code:: c

    VMSTATE_UINT16_V(ip_id, Slirp, 2),

only loads that field for versions 2 and newer.

Saving state will always create a section with the 'version_id' value
and thus can't be loaded by any older QEMU.

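Putting these together, here is a minimal sketch of a versioned description
for a hypothetical device (``FooState`` and its fields are made-up names):
version 2 streams carry an extra field, but streams from version 1 can still
be loaded.

.. code:: c

    static const VMStateDescription vmstate_foo = {
        .name = "foo",
        .version_id = 2,          /* streams are always saved as version 2 */
        .minimum_version_id = 1,  /* version 1 streams can still be loaded */
        .fields = (const VMStateField[]) {
            VMSTATE_UINT32(ctrl, FooState),
            /* only present in the stream from version 2 onwards */
            VMSTATE_UINT16_V(extra, FooState, 2),
            VMSTATE_END_OF_LIST()
        }
    };

When such a device loads a version 1 stream, ``extra`` is simply not read
from the stream, so a ``post_load`` callback (see below) can be used to give
it a sensible default.
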
Massaging functions
-------------------

Sometimes it is not enough to be able to save the state directly
from one structure; we need to fill in the correct values there.  One
example is when we are using kvm.  Before saving the cpu state, we
need to ask kvm to copy to QEMU the state that it is using.  And the
opposite when we are loading the state: we need a way to tell kvm to
load the state for the cpu that we have just loaded from the QEMUFile.

The functions to do that are inside a vmstate definition, and are called:

- ``int (*pre_load)(void *opaque);``

  This function is called before we load the state of one device.

- ``int (*post_load)(void *opaque, int version_id);``

  This function is called after we load the state of one device.

- ``int (*pre_save)(void *opaque);``

  This function is called before we save the state of one device.

- ``int (*post_save)(void *opaque);``

  This function is called after we save the state of one device
  (even upon failure, unless the call to pre_save returned an error).

Example: you can look at hpet.c, which uses the first three functions
to massage the state that is transferred.

The ``VMSTATE_WITH_TMP`` macro may be useful when the migration
data doesn't match the stored device data well; it allows an
intermediate temporary structure to be populated with migration
data and then transferred to the main structure.

If you use memory API functions that update memory layout outside
initialization (i.e., in response to a guest action), this is a strong
indication that you need to call these functions in a ``post_load`` callback.
Examples of such memory API functions are:

  - memory_region_add_subregion()
  - memory_region_del_subregion()
  - memory_region_set_readonly()
  - memory_region_set_nonvolatile()
  - memory_region_set_enabled()
  - memory_region_set_address()
  - memory_region_set_alias_offset()

Iterative device migration
--------------------------

Some devices, such as RAM, Block storage or certain platform devices,
have large amounts of data that would mean that the CPUs would be
paused for too long if they were sent in one section.  For these
devices an *iterative* approach is taken.

The iterative devices generally don't use VMState macros
(although it may be possible in some cases) and instead use
qemu_put_*/qemu_get_* macros to read/write data to the stream.  Specialist
versions exist for high bandwidth IO.

An iterative device must provide:

 - A ``save_setup`` function that initialises the data structures and
   transmits a first section containing information on the device.  In the
   case of RAM this transmits a list of RAMBlocks and sizes.

 - A ``load_setup`` function that initialises the data structures on the
   destination.

 - A ``state_pending_exact`` function that indicates how much more
   data we must save.  The core migration code will use this to
   determine when to pause the CPUs and complete the migration.

 - A ``state_pending_estimate`` function that indicates how much more
   data we must save.  When the estimated amount is smaller than the
   threshold, we call ``state_pending_exact``.

 - A ``save_live_iterate`` function that sends a chunk of data until
   the point that stream bandwidth limits tell it to stop.  Each call
   generates one section.

 - A ``save_live_complete_precopy`` function that must transmit the
   last section for the device containing any remaining data.

 - A ``load_state`` function used to load sections generated by
   any of the save functions that generate sections.

 - ``cleanup`` functions for both save and load that are called
   at the end of migration.

Note that the contents of the sections for iterative migration tend
to be open-coded by the devices; care should be taken in parsing
the results and structuring the stream to make them easy to validate.

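These handlers are collected in a ``SaveVMHandlers`` structure and registered
with ``register_savevm_live()``.  The outline below is a sketch only: the
``foo_*`` functions and ``foo_state`` are hypothetical, and the exact handler
prototypes should be taken from ``include/migration/register.h`` rather than
from this example.

.. code:: c

    static SaveVMHandlers savevm_foo_handlers = {
        .save_setup = foo_save_setup,                    /* first section describing the device */
        .save_live_iterate = foo_save_iterate,           /* one chunk of data per section */
        .save_live_complete_precopy = foo_save_complete, /* final section with remaining data */
        .state_pending_exact = foo_state_pending_exact,
        .state_pending_estimate = foo_state_pending_estimate,
        .save_cleanup = foo_save_cleanup,
        .load_setup = foo_load_setup,
        .load_state = foo_load_state,
        .load_cleanup = foo_load_cleanup,
    };

    /* registered once, typically from the device's init code */
    register_savevm_live("foo", 0, 1, &savevm_foo_handlers, &foo_state);
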
Device ordering
---------------

There are cases in which the ordering of device loading matters; for
example in some systems where a device may assert an interrupt during loading,
if the interrupt controller is loaded later then it might lose the state.

Some ordering is implicitly provided by the order in which the machine
definition creates devices, however this is somewhat fragile.

The ``MigrationPriority`` enum provides a means of explicitly enforcing
ordering.  Numerically higher priorities are loaded earlier.
The priority is set by setting the ``priority`` field of the top level
``VMStateDescription`` for the device.

Stream structure
================

The stream tries to be word and endian agnostic, allowing migration between
hosts of different characteristics running the same VM.

 - Header

    - Magic
    - Version
    - VM configuration section

       - Machine type
       - Target page bits

 - List of sections
   Each section contains a device, or one iteration of a device save.

    - section type
    - section id
    - ID string (First section of each device)
    - instance id (First section of each device)
    - version id (First section of each device)
    - <device data>
    - Footer mark

 - EOF mark
 - VM Description structure
   Consisting of a JSON description of the contents for analysis only

The ``device data`` in each section consists of the data produced
by the code described above.  For non-iterative devices they have a single
section; iterative devices have an initial and last section and a set
of parts in between.
Note that there is very little checking by the common code of the integrity
of the ``device data`` contents; that's up to the devices themselves.
The ``footer mark`` provides a little bit of protection for the case where
the receiving side reads more or less data than expected.

The ``ID string`` is normally unique, having been formed from a bus name
and device address; PCI devices and storage devices hung off PCI controllers
fit this pattern well.  Some devices are fixed single instances (e.g. "pc-ram").
Others (especially either older devices or system devices which for
some reason don't have a bus concept) make use of the ``instance id``
for otherwise identically named devices.

Return path
-----------

Only a unidirectional stream is required for normal migration, however a
``return path`` can be created when bidirectional communication is desired.
This is primarily used by postcopy, but is also used to return a success
flag to the source at the end of migration.

``qemu_file_get_return_path(QEMUFile* fwdpath)`` gives the ``QEMUFile*`` for
the return path.

  Source side

     Forward path - written by migration thread
     Return path  - opened by main thread, read by return-path thread

  Destination side

     Forward path - read by main thread
     Return path  - opened by main thread, written by main thread AND postcopy
                    thread (protected by rp_mutex)

Dirty limit
===========

The dirty limit, short for dirty page rate upper limit, is a new capability
introduced in the 8.1 QEMU release that uses a new algorithm based on the KVM
dirty ring to throttle down the guest during live migration.

The algorithm framework is as follows:

::

  ------------------------------------------------------------------------------
  main   --------------> throttle thread ------------> PREPARE(1) <--------
  thread  \                                                |               |
           \                                               |               |
            \                                              V               |
             -\                                        CALCULATE(2)        |
               \                                           |               |
                \                                          |               |
                 \                                         V               |
                  \                                   SET PENALTY(3) -------
                   -\                                      |
                     \                                     |
                      \                                    V
                       -> virtual CPU thread -------> ACCEPT PENALTY(4)
  ------------------------------------------------------------------------------

When the qmp command ``qmp_set_vcpu_dirty_limit`` is called for the first time,
the QEMU main thread starts the throttle thread.  The throttle thread, once
launched, executes the loop, which consists of three steps:

 - PREPARE (1)

   The entire work of PREPARE (1) is preparation for the second stage,
   CALCULATE (2), as the name implies.  It involves preparing the dirty
   page rate value and the corresponding upper limit of the VM:
   the dirty page rate is calculated via the KVM dirty ring mechanism,
   which tells QEMU how many dirty pages a virtual CPU has had since the
   last KVM_EXIT_DIRTY_RING_FULL exception, and the dirty page rate upper
   limit is specified by the caller, so it is fetched directly.

 - CALCULATE (2)

   Calculate a suitable sleep period for each virtual CPU, which will be
   used to determine the penalty for the target virtual CPU.  The
   computation must be done carefully in order to reduce the dirty page
   rate progressively down to the upper limit without oscillation.  To
   achieve this, two strategies are provided: the first is to add or
   subtract sleep time based on the ratio of the current dirty page rate
   to the limit, which is used when the current dirty page rate is far
   from the limit; the second is to add or subtract a fixed time when
   the current dirty page rate is close to the limit.

 - SET PENALTY (3)

   Set the sleep time for each virtual CPU that should be penalized based
   on the results of the calculation supplied by step CALCULATE (2).

After completing the three above stages, the throttle thread loops back
to step PREPARE (1) until the dirty limit is reached.

On the other hand, each virtual CPU thread reads the sleep duration and
sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler; that
is ACCEPT PENALTY (4).  Virtual CPUs tied to writing processes will
obviously exit to that path and get penalized, whereas virtual CPUs involved
with read processes will not.

In summary, thanks to the KVM dirty ring technology, the dirty limit
algorithm will restrict virtual CPUs as needed to keep their dirty page
rate inside the limit.  This leads to steadier read performance during
live migration and can aid in improving large guest responsiveness.