COarse-grained LOck-stepping Virtual Machines for Non-stop Service
----------------------------------------
Copyright (c) 2016 Intel Corporation
Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
Copyright (c) 2016 Fujitsu, Corp.

This work is licensed under the terms of the GNU GPL, version 2 or later.
See the COPYING file in the top-level directory.

This document gives an overview of COLO's design and how to use it.

== Background ==
Virtual machine (VM) replication is a well-known technique for providing
application-agnostic software-implemented hardware fault tolerance,
also known as "non-stop service".

COLO (COarse-grained LOck-stepping) is a high availability solution.
The primary VM (PVM) and the secondary VM (SVM) run in parallel: they receive
the same requests from the client and generate responses in parallel too.
If the response packets from the PVM and the SVM are identical, they are
released immediately. Otherwise, an on-demand VM checkpoint is performed.

== Architecture ==

The architecture of COLO is shown in the diagram below.
It consists of a pair of networked physical nodes:
the primary node running the PVM, and the secondary node running the SVM
to maintain a valid replica of the PVM.
The PVM and the SVM execute in parallel and generate output of response
packets for client requests according to the application semantics.

The incoming packets from the client or external network are received by the
primary node, and then forwarded to the secondary node, so that both the PVM
and the SVM are stimulated with the same requests.

COLO receives the outbound packets from both the PVM and SVM and compares
them before allowing the output to be sent to clients.

The SVM is qualified as a valid replica of the PVM as long as it generates
identical responses to all client requests. Once differences in the outputs
are detected between the PVM and SVM, COLO withholds transmission of the
outbound packets until it has successfully synchronized the PVM state to
the SVM.
   Primary Node                                                             Secondary Node
+------------+  +-----------------------+       +------------------------+  +------------+
|            |  |       HeartBeat       +<----->+       HeartBeat        |  |            |
| Primary VM |  +-----------+-----------+       +-----------+------------+  |Secondary VM|
|            |              |                               |               |            |
|            |  +-----------|-----------+       +-----------|------------+  |            |
|            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
|            |  |       |Failover|      |       |      |Failover|        |  |            |
|            |  |       +--------+      |       |      +--------+        |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|            |  |   | VM Checkpoint +-------------->+ VM Checkpoint |    |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|Requests<--------------------------\ /-----------------\ /--------------------->Requests|
|            |  |                   ^ ^ |       |        | |             |  |            |
|Responses+---------------------\ /-|-|------------\ /-------------------------+Responses|
|            |  |               | | | | |       |  | |  | |              |  |            |
|            |  | +-----------+ | | | | |       |  | |  | | +----------+ |  |            |
|            |  | | COLO disk | | | | | |       |  | |  | | | COLO disk| |  |            |
|            |  | |  Manager  +---------------------------->| Manager  | |  |            |
|            |  | ++----------+ v v | | |       |  | v  v | +---------++ |  |            |
|            |  |  |+-----------+-+-+-++|       | ++-+--+-+---------+  | |  |            |
|            |  |  ||    COLO Proxy    ||       | |   COLO Proxy    |  | |  |            |
|            |  |  || (compare packet  ||       | |(adjust sequence |  | |  |            |
|            |  |  ||and mirror packet)||       | |    and ACK)     |  | |  |            |
|            |  |  |+------------+---+-+|       | +-----------------+  | |  |            |
+------------+  +-----------------------+       +------------------------+  +------------+
+------------+     |            |   |                                 |     +------------+
| VM Monitor |     |            |   |                                 |     | VM Monitor |
+------------+     |            |   |                                 |     +------------+
+---------------------------------------+       +----------------------------------------+
|   Kernel          |            |   |  |       |   Kernel            |                   |
+---------------------------------------+       +----------------------------------------+
                   |            |   |                                 |
    +--------------v+  +--------v---+--+        +------------------+  +v-------------+
    |   Storage     |  |External Network|       | External Network |  |   Storage    |
    +---------------+  +----------------+       +------------------+  +--------------+


== Components introduction ==

The architecture diagram above shows several components; their functions are
described below.

HeartBeat:
Runs on both the primary and secondary nodes, to periodically check platform
availability. When the primary node suffers a hardware fail-stop failure, the
heartbeat stops responding and the secondary node triggers a failover as soon
as it detects the absence.

COLO disk Manager:
When the primary VM writes data to its image, the COLO disk manager captures
the data and sends it to the secondary node, which makes sure the content of
the secondary VM's image stays consistent with the content of the primary
VM's image. For more details, please refer to docs/block-replication.txt.

Checkpoint/Failover Controller:
Modifies the save/restore flow to realize continuous migration, making sure
the state of the VM on the secondary side is always consistent with the VM
on the primary side.

COLO Proxy:
Delivers packets to the Primary and Secondary, compares the responses from
both sides, and then decides whether to start a checkpoint according to some
rules. Please refer to docs/colo-proxy.txt for more information.

Note:
HeartBeat has not been implemented yet, so you need to trigger the failover
process manually with the 'x-colo-lost-heartbeat' command.
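For example, once HeartBeat (or whatever external watchdog you provide in its
place) decides that the peer node is gone, failover is kicked off on the
surviving QEMU's QMP monitor with:

{"execute": "x-colo-lost-heartbeat"}

The complete failover sequences are shown step by step in the sections below.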
== COLO operation status ==

+-----------------+
|                 |
|   Start COLO    |
|                 |
+--------+--------+
         |
         |  Main qmp command:
         |  migrate-set-capabilities with x-colo
         |  migrate
         |
         v
+--------+--------+
|                 |
|  COLO running   |
|                 |
+--------+--------+
         |
         |  Main qmp command:
         |  x-colo-lost-heartbeat
         |  or
         |  some error happens
         v
+--------+--------+
|                 |  send qmp event:
|  COLO failover  |  COLO_EXIT
|                 |
+-----------------+

COLO uses QMP commands to switch and report its operation status.
The diagram above shows only the main QMP commands; the details are given in
the test procedure below.

== Test procedure ==
Note: Here we are running both instances on the same host for testing;
change the IP addresses if you want to run it on two hosts. Initially
127.0.0.1 is the Primary Host and 127.0.0.2 is the Secondary Host.

== Startup qemu ==
1. Primary:
Note: Initially, $imagefolder/primary.qcow2 needs to be copied to all hosts.
You don't need to change any IPs here, because 0.0.0.0 listens on any
interface. The chardevs with 127.0.0.1 addresses loop back to the local QEMU
instance.

# imagefolder="/mnt/vms/colo-test-primary"

# qemu-system-x86_64 -enable-kvm -cpu qemu64,kvmclock=on -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name primary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=mirror0,host=0.0.0.0,port=9003,server=on,wait=off \
   -chardev socket,id=compare1,host=0.0.0.0,port=9004,server=on,wait=on \
   -chardev socket,id=compare0,host=127.0.0.1,port=9001,server=on,wait=off \
   -chardev socket,id=compare0-0,host=127.0.0.1,port=9001 \
   -chardev socket,id=compare_out,host=127.0.0.1,port=9005,server=on,wait=off \
   -chardev socket,id=compare_out0,host=127.0.0.1,port=9005 \
   -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0 \
   -object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out \
   -object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0 \
   -object iothread,id=iothread1 \
   -object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,\
outdev=compare_out0,iothread=iothread1 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0.file.filename=$imagefolder/primary.qcow2,children.0.driver=qcow2 -S

2. Secondary:
Note: The active and hidden images need to be created only once, and their
size should be the same as primary.qcow2. Again, you don't need to change
any IPs here, except for the $primary_ip variable.
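You can double-check the virtual size to use for these images by inspecting
the primary image first; this is a standard qemu-img invocation, and the path
assumes the primary image folder from step 1:

# qemu-img info /mnt/vms/colo-test-primary/primary.qcow2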
# imagefolder="/mnt/vms/colo-test-secondary"
# primary_ip=127.0.0.1

# qemu-img create -f qcow2 $imagefolder/secondary-active.qcow2 10G

# qemu-img create -f qcow2 $imagefolder/secondary-hidden.qcow2 10G

# qemu-system-x86_64 -enable-kvm -cpu qemu64,kvmclock=on -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name secondary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect-ms=1000 \
   -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect-ms=1000 \
   -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
   -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
   -object filter-rewriter,id=rew0,netdev=hn0,queue=all \
   -drive if=none,id=parent0,file.filename=$imagefolder/primary.qcow2,driver=qcow2 \
   -drive if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,\
top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,\
file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,\
file.backing.backing=parent0 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0=childs0 \
   -incoming tcp:0.0.0.0:9998


3. On the Secondary VM's QEMU monitor, issue the commands:
{"execute":"qmp_capabilities"}
{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "nbd-server-start", "arguments": {"addr": {"type": "inet", "data": {"host": "0.0.0.0", "port": "9999"} } } }
{"execute": "nbd-server-add", "arguments": {"device": "parent0", "writable": true } }

Note:
 a. The QMP commands nbd-server-start and nbd-server-add must be run
    before the migrate QMP command is run on the primary QEMU.
 b. The active disk, the hidden disk and the NBD target should all have
    the same length.
 c. It is better to put the active disk and the hidden disk in a ramdisk;
    they will be merged into the parent disk on failover.

4. On the Primary VM's QEMU monitor, issue the commands:
{"execute":"qmp_capabilities"}
{"execute": "human-monitor-command", "arguments": {"command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{"parent": "colo-disk0", "node": "replication0" } }
{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments": {"uri": "tcp:127.0.0.2:9998" } }

Note:
 a. There should be only one NBD client for each primary disk.
 b. These QMP commands must be run after the QMP commands on the
    secondary QEMU above.

5. After the above steps, whenever you make changes to the PVM, the SVM will
be kept in sync. You can issue the command
'{ "execute": "migrate-set-parameters" , "arguments":{ "x-checkpoint-delay": 2000 } }'
to change the idle checkpoint period (in milliseconds).

6. Failover test
You can kill one of the VMs and fail over to the surviving VM:

If you killed the Secondary, then follow "Primary Failover". After that,
if you want to resume the replication, follow "Primary resume replication".

If you killed the Primary, then follow "Secondary Failover". After that,
if you want to resume the replication, follow "Secondary resume replication".
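For a quick test on a single host, "killing" a VM can simply mean terminating
its QEMU process. As a sketch, one way is to match on the -name option passed
in the startup commands above (the pattern is an assumption; adjust it to
your setup, or just use kill with the PID):

# pkill -9 -f 'qemu-system-x86_64.*-name secondary'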
== Primary Failover ==
The Secondary has died; resume on the Primary:

{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "child": "children.1"} }
{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_del replication0" } }
{"execute": "object-del", "arguments":{ "id": "comp0" } }
{"execute": "object-del", "arguments":{ "id": "iothread1" } }
{"execute": "object-del", "arguments":{ "id": "m0" } }
{"execute": "object-del", "arguments":{ "id": "redire0" } }
{"execute": "object-del", "arguments":{ "id": "redire1" } }
{"execute": "x-colo-lost-heartbeat" }

== Secondary Failover ==
The Primary has died; resume on the Secondary and prepare to become the new
Primary:

{"execute": "nbd-server-stop"}
{"execute": "x-colo-lost-heartbeat"}

{"execute": "object-del", "arguments":{ "id": "f2" } }
{"execute": "object-del", "arguments":{ "id": "f1" } }
{"execute": "chardev-remove", "arguments":{ "id": "red1" } }
{"execute": "chardev-remove", "arguments":{ "id": "red0" } }

{"execute": "chardev-add", "arguments":{ "id": "mirror0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "0.0.0.0", "port": "9003" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare1", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "0.0.0.0", "port": "9004" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9001" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare0-0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9001" } }, "server": false } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare_out", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9005" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare_out0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9005" } }, "server": false } } } }
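After either failover, you can check that the surviving VM has left COLO mode
before you resume replication, for example with query-colo-status (available
in recent QEMU versions); QEMU also emits the COLO_EXIT QMP event shown in
the operation status diagram above:

{"execute": "query-colo-status"}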
== Primary resume replication ==
Resume replication after the new Secondary is up.

Start the new Secondary (steps 2 and 3 above), then on the Primary:
{"execute": "drive-mirror", "arguments":{ "device": "colo-disk0", "job-id": "resync", "target": "nbd://127.0.0.2:9999/parent0", "mode": "existing", "format": "raw", "sync": "full"} }

Wait until the disk is synced, then:
{"execute": "stop"}
{"execute": "block-job-cancel", "arguments":{ "device": "resync"} }

{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "node": "replication0" } }

{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror", "id": "m0", "netdev": "hn0", "queue": "tx", "outdev": "mirror0" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire0", "netdev": "hn0", "queue": "rx", "indev": "compare_out" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire1", "netdev": "hn0", "queue": "rx", "outdev": "compare0" } }
{"execute": "object-add", "arguments":{ "qom-type": "iothread", "id": "iothread1" } }
{"execute": "object-add", "arguments":{ "qom-type": "colo-compare", "id": "comp0", "primary_in": "compare0-0", "secondary_in": "compare1", "outdev": "compare_out0", "iothread": "iothread1" } }

{"execute": "migrate-set-capabilities", "arguments":{ "capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments":{ "uri": "tcp:127.0.0.2:9998" } }

Note:
If this Primary was previously a Secondary, then we need to insert the
filters before the filter-rewriter by using the
"insert": "before", "position": "id=rew0" options. See below.
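"Wait until the disk is synced" in this and the next section can be checked
with a generic QMP block-job query rather than by guessing: the mirror job is
synced once its "ready" field turns true (a BLOCK_JOB_READY event is emitted
as well):

{"execute": "query-block-jobs"}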
== Secondary resume replication ==
Become the Primary and resume replication after the new Secondary is up. Note
that now 127.0.0.1 is the Secondary and 127.0.0.2 is the Primary.

Start the new Secondary (steps 2 and 3 above, but with primary_ip=127.0.0.2),
then on the old Secondary:
{"execute": "drive-mirror", "arguments":{ "device": "colo-disk0", "job-id": "resync", "target": "nbd://127.0.0.1:9999/parent0", "mode": "existing", "format": "raw", "sync": "full"} }

Wait until the disk is synced, then:
{"execute": "stop"}
{"execute": "block-job-cancel", "arguments":{ "device": "resync" } }

{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.1,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "node": "replication0" } }

{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror", "id": "m0", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "tx", "outdev": "mirror0" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire0", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "rx", "indev": "compare_out" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire1", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "rx", "outdev": "compare0" } }
{"execute": "object-add", "arguments":{ "qom-type": "iothread", "id": "iothread1" } }
{"execute": "object-add", "arguments":{ "qom-type": "colo-compare", "id": "comp0", "primary_in": "compare0-0", "secondary_in": "compare1", "outdev": "compare_out0", "iothread": "iothread1" } }

{"execute": "migrate-set-capabilities", "arguments":{ "capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments":{ "uri": "tcp:127.0.0.1:9998" } }

== TODO ==
1. Support shared storage.
2. Develop the heartbeat part.
3. Reduce the VM's downtime during checkpoints.