COarse-grained LOck-stepping Virtual Machines for Non-stop Service
----------------------------------------
Copyright (c) 2016 Intel Corporation
Copyright (c) 2016 HUAWEI TECHNOLOGIES CO., LTD.
Copyright (c) 2016 Fujitsu, Corp.

This work is licensed under the terms of the GNU GPL, version 2 or later.
See the COPYING file in the top-level directory.

This document gives an overview of COLO's design and how to use it.

== Background ==
Virtual machine (VM) replication is a well known technique for providing
application-agnostic software-implemented hardware fault tolerance,
also known as "non-stop service".

COLO (COarse-grained LOck-stepping) is a high availability solution.
Both the primary VM (PVM) and the secondary VM (SVM) run in parallel. They
receive the same requests from the client and generate responses in parallel.
If the response packets from the PVM and SVM are identical, they are released
immediately. Otherwise, a VM checkpoint is conducted on demand.

== Architecture ==

The architecture of COLO is shown in the diagram below.
It consists of a pair of networked physical nodes:
the primary node running the PVM, and the secondary node running the SVM
to maintain a valid replica of the PVM.
The PVM and SVM execute in parallel and generate output response packets for
client requests according to the application semantics.

The incoming packets from the client or external network are received by the
primary node, and then forwarded to the secondary node, so that both the PVM
and the SVM are stimulated with the same requests.

COLO receives the outbound packets from both the PVM and SVM and compares them
before allowing the output to be sent to clients.

The SVM is qualified as a valid replica of the PVM as long as it generates
identical responses to all client requests. Once differences between the
outputs of the PVM and SVM are detected, COLO withholds transmission of the
outbound packets until it has successfully synchronized the PVM state to the
SVM.

  Primary Node                                                            Secondary Node
+------------+  +-----------------------+       +------------------------+  +------------+
|            |  |       HeartBeat       +<----->+       HeartBeat        |  |            |
| Primary VM |  +-----------+-----------+       +-----------+------------+  |Secondary VM|
|            |              |                               |               |            |
|            |  +-----------|-----------+       +-----------|------------+  |            |
|            |  |QEMU   +---v----+      |       |QEMU  +----v---+        |  |            |
|            |  |       |Failover|      |       |      |Failover|        |  |            |
|            |  |       +--------+      |       |      +--------+        |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|            |  |   | VM Checkpoint +-------------->+ VM Checkpoint |    |  |            |
|            |  |   +---------------+   |       |   +---------------+    |  |            |
|Requests<--------------------------\ /-----------------\ /--------------------->Requests|
|            |  |                   ^ ^ |       |       | |              |  |            |
|Responses+---------------------\ /-|-|------------\ /-------------------------+Responses|
|            |  |               | | | | |       |  | |  | |              |  |            |
|            |  | +-----------+ | | | | |       |  | |  | | +----------+ |  |            |
|            |  | | COLO disk | | | | | |       |  | |  | | | COLO disk| |  |            |
|            |  | |   Manager +---------------------------->| Manager  | |  |            |
|            |  | ++----------+ v v | | |       |  | v  v | +---------++ |  |            |
|            |  |  |+-----------+-+-+-++|       | ++-+--+-+---------+ |  |  |            |
|            |  |  ||   COLO Proxy     ||       | |   COLO Proxy    | |  |  |            |
|            |  |  || (compare packet  ||       | |(adjust sequence | |  |  |            |
|            |  |  ||and mirror packet)||       | |    and ACK)     | |  |  |            |
|            |  |  |+------------+---+-+|       | +-----------------+ |  |  |            |
+------------+  +-----------------------+       +------------------------+  +------------+
+------------+     |             |   |                                |     +------------+
| VM Monitor |     |             |   |                                |     | VM Monitor |
+------------+     |             |   |                                |     +------------+
+---------------------------------------+       +----------------------------------------+
|   Kernel         |             |   |  |       |   Kernel            |                  |
+---------------------------------------+       +----------------------------------------+
                   |             |   |                                |
    +--------------v+  +---------v---+--+       +------------------+ +v-------------+
    |   Storage     |  |External Network|       | External Network | |   Storage    |
    +---------------+  +----------------+       +------------------+ +--------------+


== Components introduction ==

The architecture diagram above shows several components. Their functions
are described below.

HeartBeat:
Runs on both the primary and secondary nodes to periodically check platform
availability. When the primary node suffers a hardware fail-stop failure,
the heartbeat stops responding and the secondary node triggers a failover
as soon as it detects the absence.

COLO disk Manager:
When the primary VM writes data to its image, the COLO disk manager captures
the data and sends it to the secondary node, making sure that the content of
the secondary VM's image is consistent with the content of the primary VM's
image.
For more details, please refer to docs/block-replication.txt.

Checkpoint/Failover Controller:
Modifies the save/restore flow to realize continuous migration, making sure
that the state of the VM on the secondary side is always consistent with the
VM on the primary side.

COLO Proxy:
Delivers packets to the Primary and Secondary, compares the responses from
both sides, and then decides whether to start a checkpoint according to its
rules.
Please refer to docs/colo-proxy.txt for more information.

Note:
HeartBeat has not been implemented yet, so you need to trigger the failover
process manually with the 'x-colo-lost-heartbeat' command.
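
For example, to trigger failover manually, issue the following on the QMP
monitor of the surviving instance (see the failover sections below for the
full sequence):

{"execute": "x-colo-lost-heartbeat"}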

== COLO operation status ==

+-----------------+
|                 |
|    Start COLO   |
|                 |
+--------+--------+
         |
         |  Main qmp command:
         |  migrate-set-capabilities with x-colo
         |  migrate
         |
         v
+--------+--------+
|                 |
|  COLO running   |
|                 |
+--------+--------+
         |
         |  Main qmp command:
         |  x-colo-lost-heartbeat
         |  or
         |  some error happened
         v
+--------+--------+
|                 |  send qmp event:
|  COLO failover  |  COLO_EXIT
|                 |
+-----------------+

COLO uses QMP commands to switch between and report these operation states.
The diagram shows only the main QMP commands; the details are given in the
test procedure below.
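
When failover happens, QEMU emits the COLO_EXIT event on the QMP channel.
It should look roughly like this (the exact timestamp and data fields
depend on the QEMU version and on which side emits it):

{"timestamp": {"seconds": 1543224108, "microseconds": 112001}, "event": "COLO_EXIT", "data": {"mode": "primary", "reason": "request"}}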

== Test procedure ==
Note: Here we are running both instances on the same host for testing;
change the IP addresses if you want to run it on two hosts. Initially,
127.0.0.1 is the Primary Host and 127.0.0.2 is the Secondary Host.
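
The tap netdevs below use /usr/lib/qemu/qemu-bridge-helper to attach the
guests to a host bridge. A minimal host-side sketch, assuming a bridge
named br0 (the helper's default) and default distro paths:

# ip link add name br0 type bridge
# ip link set br0 up
# echo 'allow br0' >> /etc/qemu/bridge.conf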

== Startup qemu ==
1. Primary:
Note: Initially, $imagefolder/primary.qcow2 needs to be copied to all hosts.
You don't need to change any IPs here, because 0.0.0.0 listens on all
interfaces. The chardevs with 127.0.0.1 IPs loop back to the local QEMU
instance.

# imagefolder="/mnt/vms/colo-test-primary"

# qemu-system-x86_64 -enable-kvm -cpu qemu64,kvmclock=on -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name primary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=mirror0,host=0.0.0.0,port=9003,server=on,wait=off \
   -chardev socket,id=compare1,host=0.0.0.0,port=9004,server=on,wait=on \
   -chardev socket,id=compare0,host=127.0.0.1,port=9001,server=on,wait=off \
   -chardev socket,id=compare0-0,host=127.0.0.1,port=9001 \
   -chardev socket,id=compare_out,host=127.0.0.1,port=9005,server=on,wait=off \
   -chardev socket,id=compare_out0,host=127.0.0.1,port=9005 \
   -object filter-mirror,id=m0,netdev=hn0,queue=tx,outdev=mirror0 \
   -object filter-redirector,netdev=hn0,id=redire0,queue=rx,indev=compare_out \
   -object filter-redirector,netdev=hn0,id=redire1,queue=rx,outdev=compare0 \
   -object iothread,id=iothread1 \
   -object colo-compare,id=comp0,primary_in=compare0-0,secondary_in=compare1,\
outdev=compare_out0,iothread=iothread1 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0.file.filename=$imagefolder/primary.qcow2,children.0.driver=qcow2 -S
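
Since QEMU is started with -qmp stdio, it prints a greeting on stdout and
accepts other commands only after capabilities negotiation. The first
exchange should look roughly like this (the version fields vary):

{"QMP": {"version": {...}, "capabilities": []}}
{"execute": "qmp_capabilities"}
{"return": {}}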

2. Secondary:
Note: The active and hidden images need to be created only once, and their
size should be the same as primary.qcow2. Again, you don't need to change
any IPs here, except for the $primary_ip variable.

# imagefolder="/mnt/vms/colo-test-secondary"
# primary_ip=127.0.0.1

# qemu-img create -f qcow2 $imagefolder/secondary-active.qcow2 10G

# qemu-img create -f qcow2 $imagefolder/secondary-hidden.qcow2 10G

# qemu-system-x86_64 -enable-kvm -cpu qemu64,kvmclock=on -m 512 -smp 1 -qmp stdio \
   -device piix3-usb-uhci -device usb-tablet -name secondary \
   -netdev tap,id=hn0,vhost=off,helper=/usr/lib/qemu/qemu-bridge-helper \
   -device rtl8139,id=e0,netdev=hn0 \
   -chardev socket,id=red0,host=$primary_ip,port=9003,reconnect=1 \
   -chardev socket,id=red1,host=$primary_ip,port=9004,reconnect=1 \
   -object filter-redirector,id=f1,netdev=hn0,queue=tx,indev=red0 \
   -object filter-redirector,id=f2,netdev=hn0,queue=rx,outdev=red1 \
   -object filter-rewriter,id=rew0,netdev=hn0,queue=all \
   -drive if=none,id=parent0,file.filename=$imagefolder/primary.qcow2,driver=qcow2 \
   -drive if=none,id=childs0,driver=replication,mode=secondary,file.driver=qcow2,\
top-id=colo-disk0,file.file.filename=$imagefolder/secondary-active.qcow2,\
file.backing.driver=qcow2,file.backing.file.filename=$imagefolder/secondary-hidden.qcow2,\
file.backing.backing=parent0 \
   -drive if=ide,id=colo-disk0,driver=quorum,read-pattern=fifo,vote-threshold=1,\
children.0=childs0 \
   -incoming tcp:0.0.0.0:9998
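
The -drive options above stack up the following block-node tree on the
secondary (a conceptual sketch; see docs/block-replication.txt for the
exact semantics of each layer):

  colo-disk0 (quorum, vote-threshold=1)
    -> childs0 (replication, mode=secondary)
       -> secondary-active.qcow2        (the SVM's own writes)
          -> secondary-hidden.qcow2     (COW buffer, backing file)
             -> parent0: primary.qcow2  (receives the PVM's writes via NBD)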


3. On the Secondary VM's QEMU monitor, issue the commands:
{"execute":"qmp_capabilities"}
{"execute": "nbd-server-start", "arguments": {"addr": {"type": "inet", "data": {"host": "0.0.0.0", "port": "9999"} } } }
{"execute": "nbd-server-add", "arguments": {"device": "parent0", "writable": true } }

Note:
  a. The QMP commands nbd-server-start and nbd-server-add must be run
     before the migrate QMP command is run on the primary QEMU.
  b. The active disk, hidden disk and NBD target should all have the
     same length.
  c. It is better to put the active disk and hidden disk on a ramdisk;
     they will be merged into the parent disk on failover.
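
For example, one possible way to keep the two images on a ramdisk (a sketch
assuming a tmpfs mount point of /mnt/ramdisk; adjust the file.file.filename
and file.backing.file.filename options in step 2 accordingly):

# mount -t tmpfs -o size=2G tmpfs /mnt/ramdisk
# qemu-img create -f qcow2 /mnt/ramdisk/secondary-active.qcow2 10G
# qemu-img create -f qcow2 /mnt/ramdisk/secondary-hidden.qcow2 10G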

4. On the Primary VM's QEMU monitor, issue the commands:
{"execute":"qmp_capabilities"}
{"execute": "human-monitor-command", "arguments": {"command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{"parent": "colo-disk0", "node": "replication0" } }
{"execute": "migrate-set-capabilities", "arguments": {"capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments": {"uri": "tcp:127.0.0.2:9998" } }

  Note:
  a. There should be only one NBD client for each primary disk.
  b. These QMP commands must be run after the QMP commands have been run
     on the secondary QEMU.
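
If your QEMU version provides the query-colo-status command, you can use it
to verify that COLO is running once migrate has returned:

{"execute": "query-colo-status"}
{"return": {"mode": "primary", "reason": "none"}}

(The exact fields of the return value vary between QEMU versions.)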

5. After the above steps, whenever you make changes to the PVM, the SVM will
be kept in sync. You can issue the command
'{ "execute": "migrate-set-parameters" , "arguments":{ "x-checkpoint-delay": 2000 } }'
to change the idle checkpoint period (given in milliseconds).

6. Failover test
You can kill one of the VMs and fail over to the surviving VM:

If you killed the Secondary, then follow "Primary Failover". After that,
if you want to resume replication, follow "Primary resume replication".

If you killed the Primary, then follow "Secondary Failover". After that,
if you want to resume replication, follow "Secondary resume replication".

== Primary Failover ==
The Secondary has died; resume on the Primary:

{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "child": "children.1"} }
{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_del replication0" } }
{"execute": "object-del", "arguments":{ "id": "comp0" } }
{"execute": "object-del", "arguments":{ "id": "iothread1" } }
{"execute": "object-del", "arguments":{ "id": "m0" } }
{"execute": "object-del", "arguments":{ "id": "redire0" } }
{"execute": "object-del", "arguments":{ "id": "redire1" } }
{"execute": "x-colo-lost-heartbeat" }

== Secondary Failover ==
The Primary has died; resume on the Secondary and prepare it to become the
new Primary:

{"execute": "nbd-server-stop"}
{"execute": "x-colo-lost-heartbeat"}

{"execute": "object-del", "arguments":{ "id": "f2" } }
{"execute": "object-del", "arguments":{ "id": "f1" } }
{"execute": "chardev-remove", "arguments":{ "id": "red1" } }
{"execute": "chardev-remove", "arguments":{ "id": "red0" } }

{"execute": "chardev-add", "arguments":{ "id": "mirror0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "0.0.0.0", "port": "9003" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare1", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "0.0.0.0", "port": "9004" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9001" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare0-0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9001" } }, "server": false } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare_out", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9005" } }, "server": true } } } }
{"execute": "chardev-add", "arguments":{ "id": "compare_out0", "backend": {"type": "socket", "data": {"addr": { "type": "inet", "data": { "host": "127.0.0.1", "port": "9005" } }, "server": false } } } }
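
Note:
The chardev-add commands above recreate the comparison sockets that a
Primary normally sets up on its command line (compare the chardev options
in step 1); the matching filter, iothread and colo-compare objects are
added later, in "Secondary resume replication" below.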

== Primary resume replication ==
Resume replication after the new Secondary is up.

Start the new Secondary (Steps 2 and 3 above), then on the Primary:
{"execute": "drive-mirror", "arguments":{ "device": "colo-disk0", "job-id": "resync", "target": "nbd://127.0.0.2:9999/parent0", "mode": "existing", "format": "raw", "sync": "full"} }
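
One way to tell when the mirror job has caught up is to poll:

{"execute": "query-block-jobs"}

until the job named "resync" reports "ready": true; QEMU also emits a
BLOCK_JOB_READY event at that point.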

Wait until the disk is synced, then:
{"execute": "stop"}
{"execute": "block-job-cancel", "arguments":{ "device": "resync"} }

{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.2,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "node": "replication0" } }

{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror", "id": "m0", "netdev": "hn0", "queue": "tx", "outdev": "mirror0" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire0", "netdev": "hn0", "queue": "rx", "indev": "compare_out" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire1", "netdev": "hn0", "queue": "rx", "outdev": "compare0" } }
{"execute": "object-add", "arguments":{ "qom-type": "iothread", "id": "iothread1" } }
{"execute": "object-add", "arguments":{ "qom-type": "colo-compare", "id": "comp0", "primary_in": "compare0-0", "secondary_in": "compare1", "outdev": "compare_out0", "iothread": "iothread1" } }

{"execute": "migrate-set-capabilities", "arguments":{ "capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments":{ "uri": "tcp:127.0.0.2:9998" } }

Note:
If this Primary was previously a Secondary, then the filters need to be
inserted before the filter-rewriter by adding the
"insert": "before", "position": "id=rew0" options. See below.

== Secondary resume replication ==
Become the Primary and resume replication after the new Secondary is up. Note
that now 127.0.0.1 is the Secondary and 127.0.0.2 is the Primary.

Start the new Secondary (Steps 2 and 3 above, but with primary_ip=127.0.0.2),
then on the old Secondary:
{"execute": "drive-mirror", "arguments":{ "device": "colo-disk0", "job-id": "resync", "target": "nbd://127.0.0.1:9999/parent0", "mode": "existing", "format": "raw", "sync": "full"} }

Wait until the disk is synced, then:
{"execute": "stop"}
{"execute": "block-job-cancel", "arguments":{ "device": "resync" } }

{"execute": "human-monitor-command", "arguments":{ "command-line": "drive_add -n buddy driver=replication,mode=primary,file.driver=nbd,file.host=127.0.0.1,file.port=9999,file.export=parent0,node-name=replication0"}}
{"execute": "x-blockdev-change", "arguments":{ "parent": "colo-disk0", "node": "replication0" } }

{"execute": "object-add", "arguments":{ "qom-type": "filter-mirror", "id": "m0", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "tx", "outdev": "mirror0" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire0", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "rx", "indev": "compare_out" } }
{"execute": "object-add", "arguments":{ "qom-type": "filter-redirector", "id": "redire1", "insert": "before", "position": "id=rew0", "netdev": "hn0", "queue": "rx", "outdev": "compare0" } }
{"execute": "object-add", "arguments":{ "qom-type": "iothread", "id": "iothread1" } }
{"execute": "object-add", "arguments":{ "qom-type": "colo-compare", "id": "comp0", "primary_in": "compare0-0", "secondary_in": "compare1", "outdev": "compare_out0", "iothread": "iothread1" } }

{"execute": "migrate-set-capabilities", "arguments":{ "capabilities": [ {"capability": "x-colo", "state": true } ] } }
{"execute": "migrate", "arguments":{ "uri": "tcp:127.0.0.1:9998" } }

== TODO ==
1. Support shared storage.
2. Develop the heartbeat part.
3. Reduce the VM's downtime during checkpoints.