/openbmc/qemu/tests/migration/guestperf/ |
H A D | progress.py | 81 downtime, argument 92 self._downtime = downtime 107 "downtime": self._downtime, 124 data["downtime"],
|
H A D | scenario.py | 24 downtime=500, argument 40 self._downtime = downtime # milliseconds 72 "downtime": self._downtime, 97 data["downtime"],
|
H A D | shell.py | 112 parser.add_argument("--downtime", dest="downtime", default=500, type=int) 147 downtime=args.downtime,
|
H A D | engine.py | 101 info.get("downtime", 0), 102 info.get("expected-downtime", 0),
|
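The guestperf hits above show one downtime knob flowing from the CLI (`shell.py`, `--downtime`, default 500) into `Scenario` (stored in milliseconds) and out through serialization. A minimal sketch of that wiring, with a simplified `Scenario` stand-in rather than the real class:

```python
import argparse

# Mirrors the shell.py hit: an integer --downtime option defaulting to 500 ms.
parser = argparse.ArgumentParser(description="guestperf-style migration options")
parser.add_argument("--downtime", dest="downtime", default=500, type=int)

args = parser.parse_args(["--downtime", "250"])
print(args.downtime)  # 250


class Scenario:
    """Simplified stand-in for tests/migration/guestperf/scenario.py."""

    def __init__(self, downtime=500):
        self._downtime = downtime  # milliseconds

    def serialize(self):
        # As in the scenario.py hit, the value travels under the "downtime" key.
        return {"downtime": self._downtime}


print(Scenario(downtime=args.downtime).serialize())  # {'downtime': 250}
```

The real `Scenario` carries many more parameters; only the downtime path is shown here.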
/openbmc/qemu/docs/devel/migration/ |
H A D | vfio.rst | 15 helps to reduce the total downtime of the VM. VFIO devices opt-in to pre-copy 19 When pre-copy is supported, it's possible to further reduce downtime by 24 guarantee that and thus, can potentially reduce downtime even further. 124 achieve its downtime tolerances. If QEMU during pre-copy phase keeps finding 126 it is likely to find dirty pages and can predict the downtime accordingly.
|
/openbmc/qemu/qapi/ |
H A D | migration.json | 52 # @downtime-bytes: The number of bytes sent while the guest is paused 73 'precopy-bytes': 'uint64', 'downtime-bytes': 'uint64', 208 # @downtime: only present when migration finishes correctly total 209 # downtime in milliseconds for the guest. (since 1.3) 211 # @expected-downtime: only present while migration is active expected 212 # downtime in milliseconds for the guest in last walk of the dirty 267 '*expected-downtime': 'int', 268 '*downtime': 'int', 304 # "downtime":12345, 332 # "expected-downtime":12345, [all …]
|
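The migration.json hits document that `@downtime` is only present once migration finishes correctly, while `@expected-downtime` is only present while migration is active; the engine.py hits above read both defensively with `.get(..., 0)`. A sketch of that pattern against QMP-shaped reply dicts (the sample values mirror the `12345` examples in the schema docs):

```python
def read_downtimes(info):
    """Read optional MigrationInfo fields the way engine.py does:
    absent keys fall back to 0 instead of raising KeyError."""
    return info.get("downtime", 0), info.get("expected-downtime", 0)


# Hypothetical query-migrate replies shaped like the migration.json examples:
active = {"status": "active", "expected-downtime": 12345}
done = {"status": "completed", "downtime": 12345}

print(read_downtimes(active))  # (0, 12345)
print(read_downtimes(done))    # (12345, 0)
```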
/openbmc/qemu/tests/qemu-iotests/tests/ |
H A D | migrate-bitmaps-postcopy-test | 186 downtime = event_dist(event_stop, event_resume) 189 assert downtime * 10 < postcopy_time 191 print('downtime:', downtime)
|
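The bitmaps postcopy iotest above measures downtime as the distance between the STOP and RESUME events and asserts it is well under a tenth of the postcopy phase. A sketch of a plausible `event_dist`, assuming the standard QMP event timestamp shape (`{'seconds': ..., 'microseconds': ...}`); the event dicts and `postcopy_time` value here are illustrative, not from the test:

```python
def event_time(event):
    # Standard QMP events carry {'timestamp': {'seconds': ..., 'microseconds': ...}}.
    ts = event['timestamp']
    return ts['seconds'] + ts['microseconds'] / 1e6


def event_dist(a, b):
    """Seconds elapsed between two QMP events."""
    return event_time(b) - event_time(a)


event_stop = {'event': 'STOP',
              'timestamp': {'seconds': 100, 'microseconds': 250000}}
event_resume = {'event': 'RESUME',
                'timestamp': {'seconds': 100, 'microseconds': 750000}}

downtime = event_dist(event_stop, event_resume)
print('downtime:', downtime)  # downtime: 0.5
postcopy_time = 6.0  # hypothetical; the test requires downtime * 10 < postcopy_time
assert downtime * 10 < postcopy_time
```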
/openbmc/qemu/docs/ |
H A D | multi-thread-compression.txt | 24 about 70% in a typical case. In addition to this, the VM downtime can be 80 downtime(msec): | 100 | 27 100 downtime(msec): | 337 | 173
|
H A D | rdma.txt | 106 Here is a brief summary of total migration time and downtime using RDMA: 115 2. Downtime (stop time) varies between 15 and 100 milliseconds. 130 migration *downtime*. This is because, without this feature, all of the
|
H A D | xbzrle.txt | 5 of VM downtime and the total live-migration time of Virtual machines.
|
/openbmc/linux/fs/xfs/ |
H A D | Kconfig | 136 filesystem downtime by supplementing xfs_repair. The key 170 filesystem downtime by fixing minor problems before they cause the
|
/openbmc/qemu/migration/ |
H A D | trace-events | 50 …, const char *idstr, uint32_t instance_id, int64_t downtime) "type=%s idstr=%s instance_id=%d down… 51 …, const char *idstr, uint32_t instance_id, int64_t downtime) "type=%s idstr=%s instance_id=%d down…
|
H A D | dirtyrate.h | 35 * Lower limit relates to the smallest realistic downtime it
|
H A D | migration-hmp-cmds.c | 82 monitor_printf(mon, "expected downtime: %" PRIu64 " ms\n", in hmp_info_migrate() 86 monitor_printf(mon, "downtime: %" PRIu64 " ms\n", in hmp_info_migrate() 87 info->downtime); in hmp_info_migrate() 132 monitor_printf(mon, "downtime ram: %" PRIu64 " kbytes\n", in hmp_info_migrate()
|
H A D | migration-stats.h | 39 * the guest memory. We use that to calculate the downtime. As
|
H A D | migration.h | 310 * this threshold; it's calculated from the requested downtime and 354 int64_t downtime; member
|
H A D | migration.c | 114 trace_vmstate_downtime_checkpoint("src-downtime-start"); in migration_downtime_start() 123 * If downtime already set, should mean that postcopy already set it, in migration_downtime_end() 124 * then that should be the real downtime already. in migration_downtime_end() 126 if (!s->downtime) { in migration_downtime_end() 127 s->downtime = now - s->downtime_start; in migration_downtime_end() 130 trace_vmstate_downtime_checkpoint("src-downtime-end"); in migration_downtime_end() 1164 info->downtime = s->downtime; in populate_time_info() 1686 s->downtime = 0; in migrate_init() 2636 * used for getting a better measurement of downtime at the source. in postcopy_start()
|
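The migration.c hits show the source-side bookkeeping: `migration_downtime_start()` records a start timestamp, and `migration_downtime_end()` computes `now - downtime_start` only if `downtime` is still zero, because postcopy may already have stored the real value. A Python sketch of that guard, with a simplified stand-in for `MigrationState`:

```python
import time


class MigrationState:
    """Simplified stand-in for the MigrationState fields in the hits above."""

    def __init__(self):
        self.downtime = 0        # ms; 0 means "not yet set" (see migrate_init)
        self.downtime_start = 0  # ms timestamp


def _now_ms():
    return time.monotonic_ns() // 1_000_000


def migration_downtime_start(s):
    s.downtime_start = _now_ms()


def migration_downtime_end(s):
    # If downtime is already set, postcopy set it and that is the real
    # downtime; only compute it here for the precopy case.
    if not s.downtime:
        s.downtime = _now_ms() - s.downtime_start


s = MigrationState()
migration_downtime_start(s)
migration_downtime_end(s)
print(s.downtime)
```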
/openbmc/linux/Documentation/ABI/testing/ |
H A D | debugfs-driver-qat | 21 but minimizes the customer’s system downtime. Also, if there are
|
/openbmc/linux/Documentation/networking/devlink/ |
H A D | devlink-reload.rst | 43 include reset or downtime as needed to perform the actions.
|
/openbmc/openbmc/poky/bitbake/lib/bb/fetch2/ |
H A D | README | 10 iii) allow work to continue even with downtime upstream
|
/openbmc/qemu/tests/qtest/ |
H A D | test-hmp.c | 49 "migrate_set_parameter downtime-limit 1",
|
H A D | migration-test.c | 459 /* Can't converge with 1ms downtime + 3 mbs bandwidth limit */ in migrate_ensure_non_converge() 461 migrate_set_parameter_int(who, "downtime-limit", 1); in migrate_ensure_non_converge() 466 /* Should converge with 30s downtime + 1 gbs bandwidth limit */ in migrate_ensure_converge() 468 migrate_set_parameter_int(who, "downtime-limit", 30 * 1000); in migrate_ensure_converge() 481 * low value, with tiny max downtime too. This basically 2659 * migration is not interesting for us here. Thus, set huge downtime for in do_test_validate_uuid() 2662 migrate_set_parameter_int(from, "downtime-limit", 1000000); in do_test_validate_uuid() 3731 migrate_set_parameter_int(from, "downtime-limit", downtime_limit); in test_migrate_dirty_limit()
|
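The migration-test.c hits drive convergence through the `downtime-limit` parameter: 1 ms with a 3 mb/s bandwidth cap cannot converge, while 30 s with 1 gb/s can. A sketch of the `migrate-set-parameters` QMP payloads those helpers issue, built as plain dicts rather than sent to a live qtest connection:

```python
def set_parameter_int(name, value):
    """Build the QMP command migrate_set_parameter_int() would send."""
    return {"execute": "migrate-set-parameters",
            "arguments": {name: value}}


# Values mirror the migrate_ensure_non_converge()/migrate_ensure_converge() hits:
non_converge = set_parameter_int("downtime-limit", 1)        # 1 ms: can't converge
converge = set_parameter_int("downtime-limit", 30 * 1000)    # 30 s: converges

print(converge["arguments"])  # {'downtime-limit': 30000}
```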
/openbmc/qemu/tests/qemu-iotests/ |
H A D | 194 | 79 # was migrated during downtime (and no data to migrate in postcopy
|
/openbmc/linux/fs/xfs/libxfs/ |
H A D | xfs_health.h | 15 * some downtime for repairs. Until then, we would also like to avoid abrupt
|
/openbmc/linux/Documentation/devicetree/bindings/net/wireless/ |
H A D | mediatek,mt76.yaml | 115 Background radar/CAC detection allows to avoid the CAC downtime
|