data copies by bypassing the host networking stack. In particular, a TCP-based
migration, under certain types of memory-bound workloads, may take a more
over Converged Ethernet) as well as InfiniBand-based. This implementation of
of RDMA migration may in fact be harmful to co-located VMs or other
relocate the entire footprint of the virtual machine. If so, then the
For example, if you have an 8GB RAM virtual machine, but only 1GB
bulk-phase round of the migration and can be enabled for extremely
high-performance RDMA hardware using the following command:
$ migrate_set_capability rdma-pin-all on # disabled by default
Note: for very large virtual machines (hundreds of GBs), pinning
*all* of the memory of your virtual machine in the kernel is very expensive
$ migrate_set_parameter max-bandwidth 40g # or whatever is the MAX of your RDMA device
qemu ..... -incoming rdma:host:port
$ migrate -d rdma:host:port
Using a 40gbps InfiniBand link performing a worst-case stress test,
using an 8GB RAM virtual machine:
$ apt-get install stress
$ stress --vm-bytes 7500M --vm 1 --vm-keep
1. rdma-pin-all disabled total time: approximately 7.5 seconds @ 9.5 Gbps
2. rdma-pin-all enabled total time: approximately 4 seconds @ 26 Gbps
These numbers would of course scale up to whatever size virtual machine
the bulk round and does not need to be re-registered during the successive
as follows (migration-rdma.c):
The maximum number of repeats is hard-coded to 4096. This is a conservative
3. Ready (control-channel is available)
4. QEMU File (for sending non-live device state)
After ram block exchange is completed, we have two protocol-level
functions, responsible for communicating control-channel commands
5. Verify that the command-type and version received match the ones we expected.
hold the rkey needed to perform RDMA. Note that the virtual address
described above all use the aforementioned two functions to do the hard work:
a description of each RAMBlock on the server side as well as the virtual addresses
3. The QEMUFile interfaces also call these functions (described below)
when transmitting non-live state, such as device state, or to send
at connection-setup time before any InfiniBand traffic is generated.
Finally: Negotiation happens with the Flags field: If the primary-VM
will return a zero-bit for that flag and the primary-VM will understand
capability on the primary-VM side.
QEMUFileRDMA introduces a couple of new functions:
These two functions are very short and simply use the protocol
described above to deliver bytes without changing the upper-level
to hold on to the bytes received from the control-channel's SEND
Each time we receive a complete "QEMU File" control-channel
asking for a new SEND message to re-fill the buffer.
At the beginning of the migration (migration-rdma.c),
a list of all the RAMBlocks, their offsets and lengths, virtual
addresses, and possibly pre-registered RDMA keys in case dynamic
page registration was disabled on the server side; otherwise no keys are included.
Pages are migrated in "chunks" (hard-coded to 1 Megabyte right now).
Error-handling:
After cleanup, the Virtual Machine is returned to normal
socket is broken during a non-RDMA based migration.
1. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
3. Also, some form of balloon-device usage tracking would also
4. Use LRU to provide more fine-grained direction of UNREGISTER
5. Expose UNREGISTER support to the user by way of workload-specific