/openbmc/qemu/docs/system/

  vm-templating.rst
      1: QEMU VM templating
      4: This document explains how to use VM templating in QEMU.
      6: For now, the focus is on VM memory aspects, and not about how to save and
      7: restore other VM state (i.e., migrate-to-file with ``x-ignore-shared``).
     12: With VM templating, a single template VM serves as the starting point for
     16: Conceptually, the VM state is frozen, to then be used as a basis for new
     18: new VMs are able to read template VM memory; however, any modifications
     19: stay private and don't modify the original template VM or any other
     20: created VM.
     25: When effectively cloning VMs by VM templating, hardware identifiers
    [all …]
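The private-write behaviour described above comes from mapping the saved
template file copy-on-write. A minimal sketch of starting such a clone,
assuming ``qemu-system-x86_64`` and purely illustrative paths and sizes::

    # Sketch only: launch a VM whose RAM is a private (copy-on-write)
    # mapping of a previously saved template file. Path, size and
    # machine type are illustrative assumptions.
    import subprocess

    subprocess.run([
        "qemu-system-x86_64",
        "-machine", "q35,memory-backend=pc.ram",
        "-m", "4G",
        # share=off maps the file MAP_PRIVATE: the clone reads template
        # memory, but its writes never reach the file or other clones.
        "-object", "memory-backend-file,id=pc.ram,size=4G,"
                   "mem-path=/tmp/template.ram,share=off",
    ], check=True)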
  images.rst
     38: VM snapshots
     41: VM snapshots are snapshots of the complete virtual machine including CPU
     43: order to use VM snapshots, you must have at least one non removable and
     47: Use the monitor command ``savevm`` to create a new VM snapshot or
     51: Use ``loadvm`` to restore a VM snapshot and ``delvm`` to remove a VM
     58: ID        TAG                 VM SIZE                DATE       VM CLOCK
     63: A VM snapshot is made of a VM state info (its size is shown in
     64: ``info snapshots``) and a snapshot of every writable disk image. The VM
     74: you can always make VM snapshots, but they are deleted as soon as you
     77: VM snapshots currently have the following known limitations:
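The ``savevm``/``loadvm``/``info snapshots`` monitor commands quoted above
can also be driven programmatically through QMP's ``human-monitor-command``.
A sketch, assuming QEMU listens on ``unix:/tmp/qmp.sock,server=on,wait=off``
and has a qcow2 disk attached::

    # Sketch: create and list VM snapshots via the QMP socket.
    import json
    import socket

    def hmp(f, command):
        """Run one HMP command through QMP's human-monitor-command."""
        f.write(json.dumps({"execute": "human-monitor-command",
                            "arguments": {"command-line": command}}) + "\n")
        f.flush()
        return json.loads(f.readline())

    with socket.socket(socket.AF_UNIX) as s:
        s.connect("/tmp/qmp.sock")
        f = s.makefile("rw")
        f.readline()                                        # QMP greeting
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        f.readline()                                        # {"return": {}}
        hmp(f, "savevm my-snapshot")     # snapshot VM state + disk images
        print(hmp(f, "info snapshots"))  # same table as in the excerpt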
  managed-startup.rst
      4: In system mode emulation, it's possible to create a VM in a paused
      7: to execute VM code but VCPU threads are not executing any code. The VM
     13: code loaded by QEMU in the VM's RAM and with incoming migration
     19: allowing VM code to run.
     22: that affect initial VM creation (like: ``-smp``/``-m``/``-numa`` ...) or
     24: allows pausing QEMU before the initial VM creation, in a "preconfig" state,
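A sketch of that flow, with illustrative socket and machine options: start
frozen with ``-S``, then issue ``cont`` over QMP once the state is ready::

    # Sketch: create the VM paused, then let the VCPUs run via QMP.
    import json
    import socket
    import subprocess

    qemu = subprocess.Popen([
        "qemu-system-x86_64",
        "-S",                                   # freeze VCPUs at startup
        "-qmp", "unix:/tmp/qmp.sock,server=on,wait=on",
    ])

    with socket.socket(socket.AF_UNIX) as s:
        s.connect("/tmp/qmp.sock")
        f = s.makefile("rw")
        f.readline()                            # QMP greeting
        for cmd in ("qmp_capabilities",         # negotiate capabilities
                    "query-status",             # reports the paused state
                    "cont"):                    # allow VM code to run
            f.write(json.dumps({"execute": cmd}) + "\n")
            f.flush()
            print(f.readline().rstrip())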
/openbmc/linux/Documentation/translations/zh_CN/mm/

  page_migration.rst (matched lines translated from Chinese)
    129: To overcome this problem, the VM supports non-LRU page migration, which
         provides generic functions for non-LRU movable pages, while during migration
    137: What the VM expects from the driver's isolate_page() function is that it
         returns *true* if the driver isolated the page successfully.
    138: Once true is returned, the VM marks the page PG_isolated, so that concurrent
         isolation from multiple CPUs skips the
    141: Once a page is isolated successfully, the VM uses the page.lru fields, so the
         driver must not expect the values of those fields to be preserved.
    150: ..., the VM retries page migration for a short time, because the VM treats
         -EAGAIN as a "temporary migration failure". If any error other than
    151: -EAGAIN is returned, the VM gives up on migrating the page without retrying.
    157: If migration fails on an isolated page, the VM should return the isolated
         page to the driver, so the VM calls the driver's ... with the isolated page
    170: It takes an address_space argument to register the migration-family
         functions the VM will call. To be precise,
    171: PG_movable is not a real flag of struct page. Instead, the VM reuses the low
         bits of page->mapping
    180: To test for non-LRU movable pages, the VM provides the __PageMovable()
         function. However, it does not guarantee to identify
    [all …]
/openbmc/linux/Documentation/virt/

  ne_overview.rst
     14: For example, an application that processes sensitive data and runs in a VM,
     15: can be separated from other applications running in the same VM. This
     16: application then runs in a separate VM than the primary VM, namely an enclave.
     17: It runs alongside the VM that spawned it. This setup matches low latency
     24: carved out of the primary VM. Each enclave is mapped to a process running in the
     25: primary VM, that communicates with the NE kernel driver via an ioctl interface.
     30: VM guest that uses the provided ioctl interface of the NE driver to spawn an
     31: enclave VM (that's 2 below).
     33: There is a NE emulated PCI device exposed to the primary VM. The driver for this
     39: hypervisor running on the host where the primary VM is running. The Nitro
    [all …]
/openbmc/linux/Documentation/virt/acrn/

  introduction.rst
      7: hardware. It has a privileged management VM, called Service VM, to manage User
     10: ACRN userspace is an application running in the Service VM that emulates
     11: devices for a User VM based on command line configurations. ACRN Hypervisor
     12: Service Module (HSM) is a kernel module in the Service VM which provides
     19: Service VM                User VM
     35: ACRN userspace allocates memory for the User VM, configures and initializes the
     36: devices used by the User VM, loads the virtual bootloader, initializes the
     37: virtual CPU state and handles I/O request accesses from the User VM. It uses
  io-request.rst
      6: An I/O request of a User VM, which is constructed by the hypervisor, is
     14: For each User VM, there is a shared 4-KByte memory region used for I/O requests
     15: communication between the hypervisor and Service VM. An I/O request is a
     18: VM. ACRN userspace in the Service VM first allocates a 4-KByte page and passes
     26: An I/O client is responsible for handling User VM I/O requests whose accessed
     28: User VM. There is a special client associated with each User VM, called the
     31: VM.
     39: | Service VM |
     88: state when a trapped I/O access happens in a User VM.
     90: the Service VM.
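A toy model of the request life cycle those lines sketch (state names and
fields are illustrative assumptions, not ACRN's actual definitions)::

    # Toy model: an I/O client in the Service VM consuming one request
    # from the shared page. Client objects are illustrative duck types
    # providing claims(addr) and emulate(req).
    from dataclasses import dataclass
    from enum import Enum, auto

    class State(Enum):
        FREE = auto()        # slot unused
        PENDING = auto()     # hypervisor recorded a trapped I/O access
        PROCESSING = auto()  # claimed by an I/O client
        COMPLETE = auto()    # result written back; the vCPU may resume

    @dataclass
    class IoRequest:
        addr: int
        state: State = State.PENDING

    def dispatch(req, clients, fallback):
        """Hand a pending request to the client covering its address."""
        req.state = State.PROCESSING
        handler = next((c for c in clients if c.claims(req.addr)), fallback)
        handler.emulate(req)           # device model produces the result
        req.state = State.COMPLETE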
/openbmc/qemu/docs/devel/migration/

  CPR.rst
      5: VM is migrated to a new QEMU instance on the same host. It is
      7: that run the VM, such as QEMU or even the host kernel. At this time,
     15: CPR unconditionally stops VM execution before memory is saved, and
     21: In this mode, QEMU stops the VM, and writes VM state to the migration
     32: software before restarting QEMU and resuming the VM. Further, if
     50: to be saved in place. Otherwise, after QEMU stops the VM, all guest
     67: * If the VM was running when the outgoing ``migrate`` command was
     68:   issued, then QEMU automatically resumes VM execution.
     79: VM status: running
     84: VM status: paused (postmigrate)
    [all …]
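The outgoing half of that sequence maps onto two QMP commands; a sketch of
the payloads (the file path is illustrative), sent over a QMP socket the
same way as in the snapshot example above::

    # Save VM state to a file for a same-host restart; x-ignore-shared
    # leaves guest RAM in place instead of copying it into the file.
    cpr_sequence = [
        {"execute": "migrate-set-capabilities",
         "arguments": {"capabilities": [
             {"capability": "x-ignore-shared", "state": True}]}},
        {"execute": "migrate",
         "arguments": {"uri": "file:/tmp/vm.state"}},
    ]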
/openbmc/linux/net/iucv/

  Kconfig
      5: prompt "IUCV support (S390 - z/VM only)"
      8: under VM or VIF. If you run on z/VM, say "Y" to enable a fast
      9: communication link between VM guests.
     14: prompt "AF_IUCV Socket support (S390 - z/VM and HiperSockets transport)"
     17: based on z/VM inter-user communication vehicle or based on
/openbmc/qemu/docs/system/i386/

  nitro-enclave.rst
     11: for cryptographic attestation. The parent instance VM always has CID 3 while
     12: the enclave VM gets a dynamic CID. Enclaves use an EIF (`Enclave Image Format`_)
     45: Running a nitro-enclave VM
     50: VM to the host machine and the forward-listen (port numbers separated by '+') is used
     51: for forwarding connections from the host machine to the enclave VM.
     58: Now run the necessary applications on the host machine so that the nitro-enclave VM
     59: applications' vsock communication works. For example, the nitro-enclave VM's init
     61: parent VM know that it booted expecting a heartbeat (0xB7) response. So you must run
     63: after it receives the heartbeat for enclave VM to boot successfully. You should run all
     65: VM for successful communication with the enclave VM.
    [all …]
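The heartbeat side of that handshake is a plain vsock service. A sketch of
a host-side responder, assuming port 9000 (the port number is an
assumption; use whatever the enclave's init process expects)::

    # Sketch: accept the enclave init process's 0xB7 heartbeat and echo
    # it back so the enclave continues booting.
    import socket

    HEARTBEAT_PORT = 9000                    # assumed port

    with socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM) as s:
        s.bind((socket.VMADDR_CID_ANY, HEARTBEAT_PORT))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            beat = conn.recv(1)              # expect the 0xB7 byte
            if beat == b"\xb7":
                conn.sendall(beat)           # echo the heartbeat back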
/openbmc/openbmc/poky/meta/recipes-core/images/build-appliance-image/

  README_VirtualBox_Toaster.txt
      5: Toaster is launched via the command in VM:
     18: Find out your VM network IP address:
     26: Launch the Toaster web server in VM:
     37: Find out your VM network IP address:
     45: When using NAT network, the VM web server can be accessed using
     64: Now we can launch the Toaster web server in VM:
  README_VirtualBox_Guest_Additions.txt
      9: Make sure VM is configured with an Optical Drive.
     12: Build Appliance VM:
     14: 1. Boot VM, select root "Terminal" instead of the default "Terminal <2>"
     16: 2. Insert Guest additions CD into VM optical drive:
     17:    VM menu "Devices"->"Optical Drives"-> Select "VBoxGuestAdditions<version>.iso"
     57: Guest VM: create mount point for the shared folder, i.e.:
/openbmc/linux/Documentation/virt/kvm/s390/

  s390-pv-dump.rst
     10: Dumping a VM is an essential tool for debugging problems inside
     11: it. This is especially true when a protected VM runs into trouble as
     15: However when dumping a protected VM we need to maintain its
     16: confidentiality until the dump is in the hands of the VM owner who
     19: The confidentiality of the VM dump is ensured by the Ultravisor who
     22: Communication Key which is the key that's used to encrypt VM data in a
     34: and extracts dump keys with which the VM dump data will be encrypted.
     38: Currently there are two types of data that can be gathered from a VM:
/openbmc/qemu/tests/qemu-iotests/

  261.out
    lines 16, 24, 34, 41, 49, 69, 76, 84, 96, 103: VM state size: 0
    [all …]
  218
     77: with iotests.VM() as vm:
     91: with iotests.VM() as vm:
    109: with iotests.VM() as vm:
    125: with iotests.VM() as vm:
    141: with iotests.VM() as vm, \
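The rough shape of such a test, assuming the in-tree qemu-iotests framework
(method names follow tests/qemu-iotests/iotests.py; consult it for the
authoritative API)::

    # Sketch of the iotests.VM context-manager pattern matched above.
    import iotests

    with iotests.VM() as vm:
        vm.add_drive('/tmp/test.qcow2')      # illustrative image path
        vm.launch()                          # start the QEMU process
        result = vm.qmp('query-status')
        iotests.log(result['return']['status'])
    # leaving the block shuts the VM down even if the test raises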
  286.out
      7: (snapshot ID) (snapshot name) (VM state size value) (VM state size unit) (snapshot date) (snapshot …
/openbmc/linux/drivers/s390/char/

  Kconfig
    144: prompt "Support for the z/VM recording system services (VM only)"
    148: by the z/VM recording system services, eg. from *LOGREC, *ACCOUNT or
    154: prompt "Support for the z/VM CP interface"
    159: program on z/VM
    162: int "Memory in MiB reserved for z/VM CP interface"
    166: Specify the default amount of memory in MiB reserved for the z/VM CP
    173: prompt "API for reading z/VM monitor service records"
    176: Character device driver for reading z/VM monitor service records
    180: prompt "API for writing z/VM monitor service records"
    183: Character device driver for writing z/VM monitor service records
    [all …]
/openbmc/linux/drivers/s390/net/

  Kconfig
     22: It also supports virtual CTCs when running under VM.
     31: prompt "IUCV network device support (VM only)"
     35: vehicle networking under VM or VIF. It enables a fast communication
     36: link between VM guests. Using ifconfig a point-to-point connection
     38: running on the other VM guest. To compile as a module, choose M.
     43: prompt "IUCV special message support (VM only)"
     47: from other VM guest systems.
     51: prompt "Deliver IUCV special messages as uevents (VM only)"
     66: HiperSockets interfaces and z/VM virtual NICs for Guest LAN and
/openbmc/linux/Documentation/networking/

  net_failover.rst
     24: datapath. It also enables hypervisor controlled live migration of a VM with
     72: Booting a VM with the above configuration will result in the following 3
     73: interfaces created in the VM:
     92: device; and on the first boot, the VM might end up with both 'failover' device
     94: This will result in lack of connectivity to the VM. So some tweaks might be
    113: Live Migration of a VM with SR-IOV VF & virtio-net in STANDBY mode
    121: the source hypervisor. Note: It is assumed that the VM is connected to a
    123: device to the VM. This is not the VF that was passthrough'd to the VM (seen in
    143: TAP_IF=vmtap01    # virtio-net interface in the VM.
    152: # Remove the VF that was passthrough'd to the VM.
    [all …]
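The datapath switch before migration (line 152's comment) can be scripted
on the source hypervisor; a sketch assuming libvirt, with a hypothetical
domain name and device XML::

    # Sketch: detach the passthrough'd VF so traffic fails over to the
    # virtio-net standby interface before live migration starts.
    import subprocess

    DOMAIN = "vm01"        # hypothetical libvirt domain
    VF_XML = "vf01.xml"    # device XML of the VF assigned to the VM

    subprocess.run(["virsh", "detach-device", DOMAIN, VF_XML], check=True)
    # ... live migration now proceeds over the virtio-net datapath ...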
/openbmc/qemu/qapi/

  run-state.json
      6: # = VM run state
     12: # An enumeration of VM run states.
     36: # @restore-vm: guest is paused to restore VM state
     40: # @save-vm: guest is paused to save the VM state
     52: # @colo: guest is paused to save/restore VM state under colo
     53: #     checkpoint, VM can not get into this state unless colo
    105: # Information about VM run state
    120: # Query the run status of the VM
    122: # Returns: @StatusInfo reflecting the VM
    334: # @reset: Reset the VM
    [all …]
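The query side of this schema is a single round trip; the shape of the
exchange (reply values are illustrative)::

    # query-status returns a StatusInfo object whose "status" field is
    # one of the RunState values excerpted above.
    request = {"execute": "query-status"}
    reply = {"return": {"status": "running",
                        "singlestep": False,
                        "running": True}}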
/openbmc/qemu/tests/qemu-iotests/tests/

  backing-file-invalidation
     40: vm_s: Optional[iotests.VM] = None
     41: vm_d: Optional[iotests.VM] = None
    102: self.vm_s = iotests.VM(path_suffix='a') \
    104: self.vm_d = iotests.VM(path_suffix='b') \
/openbmc/linux/Documentation/gpu/rfc/

  i915_vm_bind.rst
      9: specified address space (VM). These mappings (also referred to as persistent
     18: User has to opt-in for VM_BIND mode of binding for an address space (VM)
     19: during VM creation time via I915_VM_CREATE_FLAGS_USE_VM_BIND extension.
     38: submissions on that VM and will not be in the working set for currently running
     43: A VM in VM_BIND mode will not support older execbuf mode of binding.
     56: works with execbuf3 ioctl for submission. All BOs mapped on that VM (through
     82: dma-resv fence list of all shared BOs mapped on the VM.
     85: is private to a specified VM via I915_GEM_CREATE_EXT_VM_PRIVATE flag during
     86: BO creation. Unlike Shared BOs, these VM private BOs can only be mapped on
     87: the VM they are private to and can't be dma-buf exported.
    [all …]
/openbmc/qemu/docs/

  igd-assign.txt
     19: the VM firmware.
     24: graphics device in the VM[1], as such QEMU does not facilitate any sort
     25: of remote graphics to the VM in this mode. A connected physical monitor
     29: * IGD must be given address 02.0 on the PCI root bus in the VM
     32: * The VM firmware must support specific fw_cfg enablers for IGD
     33: * The VM machine type must support a PCI host bridge at 00.0 (standard)
     34: * The VM machine type must provide or allow to be created a special
     45: has been assigned to a VM. It's therefore generally recommended to prevent
     50: device to vfio drivers and then managed='no' set in the VM xml to prevent
     89: where the VM device address is not expressly specified.
    [all …]
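The 02.0 address requirement translates to an explicit ``addr=`` on the
vfio-pci device. A sketch, assuming legacy-mode assignment on an i440fx
machine and that the host IGD sits at 00:02.0 (other required options
elided)::

    # Sketch: place the assigned IGD at slot 02.0 on the VM's root bus.
    import subprocess

    subprocess.run([
        "qemu-system-x86_64",
        "-machine", "pc",                    # PCI host bridge at 00.0
        "-device", "vfio-pci,host=0000:00:02.0,addr=02.0",
    ], check=True)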
  COLO-FT.txt
     13: Virtual machine (VM) replication is a well known technique for providing
     18: Both primary VM (PVM) and secondary VM (SVM) run in parallel. They receive the
     21: immediately. Otherwise, a VM checkpoint (on demand) is conducted.
     47: | Primary VM |   +-----------+-----------+   +-----------+------------+   |Secondary VM|
     54: | |   | | VM Checkpoint +-------------->+ VM Checkpoint | |   | |
     71: | VM Monitor |   |   |   |   |   | VM Monitor |
     94: When primary VM writes data into image, the colo disk manager captures this data
     95: and sends it to secondary VM's which makes sure the context of secondary VM's
     96: image is consistent with the context of primary VM 's image.
    101: to make sure the state of VM in Secondary side is always consistent with VM in
    [all …]
/openbmc/linux/drivers/virt/acrn/

  Kconfig
     10: a privileged management VM, called Service VM, to manage User
     12: under ACRN as a User VM.
|