ad74751f | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_skip_filters() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_skip_filters() need to hold a reader lock for the graph because it calls bdrv_filter_child(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-9-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
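The same annotation pattern recurs throughout this series, so here is a minimal sketch of what it amounts to in practice. The caller shown is hypothetical, and the prototype is a simplification of the patch rather than a literal quote; the guard macro is the one QEMU provides for main-loop code outside coroutine context.

    /* Declaration side: GRAPH_RDLOCK lets clang's Thread Safety Analysis
     * demand that the graph reader lock is held at every call site. */
    BlockDriverState * GRAPH_RDLOCK bdrv_skip_filters(BlockDriverState *bs);

    /* Call site (sketch): a caller that does not already hold the lock
     * takes it around the graph access. */
    static void example_query(BlockDriverState *bs)   /* hypothetical */
    {
        GRAPH_RDLOCK_GUARD_MAINLOOP();
        BlockDriverState *base = bdrv_skip_filters(bs);
        /* ... use base while the reader lock is held ... */
        (void)base;
    }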
430da832 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_skip_implicit_filters() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_skip_implicit_filters() need to hold a reader lock for the graph because it calls bdrv_filter_child(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-8-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
372b69f5 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_filter_or_cow_bs() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_filter_or_cow_bs() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_child(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-7-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
f3bbc53d | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark block_job_add_bdrv() GRAPH_WRLOCK
Instead of taking the writer lock internally, require callers to already hold it when calling block_job_add_bdrv(). These callers will typically already hold the graph lock once the locking work is completed, which means that they can't call functions that take it internally.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-6-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
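As a hedged sketch of what the GRAPH_WRLOCK conversion changes for callers (the prototype below is a simplification, not the literal patch):

    /* Declaration side: the function no longer takes the writer lock
     * itself; clang's Thread Safety Analysis now requires callers to
     * hold it already. */
    int GRAPH_WRLOCK block_job_add_bdrv(BlockJob *job, const char *name,
                                        BlockDriverState *bs,
                                        uint64_t perm, uint64_t shared_perm,
                                        Error **errp);

    /* Call sites consequently wrap the call in an explicit
     * bdrv_graph_wrlock()/bdrv_graph_wrunlock() section instead of
     * relying on the callee to lock internally. */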
03b9eaca | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_root_attach_child() GRAPH_WRLOCK
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_root_attach_child(). These callers will typically already hold the graph lock once the locking work is completed, which means that they can't call functions that take it internally.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-5-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
f5a3a270 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_filter_bs() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_filter_bs() need to hold a reader lock for the graph because it calls bdrv_filter_child(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-4-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
06717986 | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_has_zero_init() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_has_zero_init() need to hold a reader lock for the graph because it calls bdrv_filter_bs(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-3-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
221caadc | 27-Oct-2023 |
Kevin Wolf <kwolf@redhat.com> |
block: Mark bdrv_probe_blocksizes() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_probe_blocksizes() need to hold a reader lock for the graph because it calls bdrv_filter_bs(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com> Message-ID: <20231027155333.420094-2-kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
80aaef96 | 06-Nov-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
Merge tag 'pull-block-2023-11-06' of https://gitlab.com/hreitz/qemu into staging
Block patches:
- One patch to make qcow2's discard-no-unref option better do what it is supposed to do (i.e. prevent fragmentation)
- Two fixes for zoned requests
# -----BEGIN PGP SIGNATURE----- # # iQJGBAABCAAwFiEEy2LXoO44KeRfAE00ofpA0JgBnN8FAmVJHbgSHGhyZWl0ekBy # ZWRoYXQuY29tAAoJEKH6QNCYAZzfLn4QAKxuUYZaXirv6K4U2tW4aAJtc5uESdwv # WYhG7YU7MleBGCY0fRoih5thrPrzRLC8o1QhbRcA36+/PAZf4BYrJEfqLUdzuN5x # 6Vb1n3NRUzPD1+VfL/B9hVZhFbtTOUZuxPGEqCoHAmqBaeKuYRT1bLZbtRtPVLSk # 5eTMiyrpRMlBWc7O71eGKLqU4k0vAznwHBGf2Z93qWAsKcRZCwbAWYa7Q6rJ9jJ8 # 1jNsQuAk0p74/uGEpFhoEVrFEcV6pMbI4+jB9i0t9YYxT0tLIdIX1VUx+AHJfItk # IF2stB6SFOaAy2W3Fn+0oJvz40aMLzg9VjEeTpGmdlKC67ZTYa6Obwzy5WNLPIap # k7VUheUEe8qoKUtxQNxGLR/HKEJSFXyhU0lgAGxE1gl2xc1QFFFsrimpwFd3d37j # 3PwfhjARHonf4ZXgsvtIjb7nG9seMZYO7Vht0OztJyW8c2XN5OFVPir9xLbd9VUg # wZNGB8jAsHgj77+S/mRIwpP+laKL8wB7zYZ1mgFI98QJIYqL8tGdV/IiUhLljHzc # XAmwekOhBMMbgHhliBy9zDuTy59+zZ0FoxZPn/JvBjqBAkEnz9EbhHxi2imQg+1d # XSoLbx1X1yEbepWz8mCGiveLIPkt+3qMJuuQF76nURaA+nm3tCl/nKca6QLnVKzU # 2QtPWS0qRmwd # =5w7S # -----END PGP SIGNATURE----- # gpg: Signature made Tue 07 Nov 2023 01:09:12 HKT # gpg: using RSA key CB62D7A0EE3829E45F004D34A1FA40D098019CDF # gpg: issuer "hreitz@redhat.com" # gpg: Good signature from "Hanna Reitz <hreitz@redhat.com>" [unknown] # gpg: WARNING: The key's User ID is not certified with a trusted signature! # gpg: There is no indication that the signature belongs to the owner. # Primary key fingerprint: CB62 D7A0 EE38 29E4 5F00 4D34 A1FA 40D0 9801 9CDF
* tag 'pull-block-2023-11-06' of https://gitlab.com/hreitz/qemu:
  file-posix: fix over-writing of returning zone_append offset
  block/file-posix: fix update_zones_wp() caller
  qcow2: keep reference on zeroize with discard-no-unref enabled
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
f6b174ff | 06-Nov-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
Merge tag 'pull-target-arm-20231106' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* hw/arm/virt: fix PMU IRQ registration
* hw/arm/virt: Report correct register sizes in ACPI DBG2/SPCR tables
* hw/i386/intel_iommu: vtd_slpte_nonzero_rsvd(): assert no overflow
* util/filemonitor-inotify: qemu_file_monitor_watch(): assert no overflow
* mc146818rtc: rtc_set_time(): initialize tm to zeroes
* block/nvme: nvme_process_completion() fix bound for cid
* hw/core/loader: gunzip(): initialize z_stream
* io/channel-socket: qio_channel_socket_flush(): improve msg validation
* hw/arm/vexpress-a9: Remove useless mapping of RAM at address 0
* target/arm: Fix A64 LDRA immediate decode
# -----BEGIN PGP SIGNATURE----- # # iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmVJBtUZHHBldGVyLm1h # eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3qYTEACYqLV57JezgRFXzMEwKX3l # 9IYbFje+lGemobdJOEHhRvXjCNb+5TwhEfQasri0FBzokw16S3WOOF7roGb6YOU1 # od1SGiS2AbrmiazlBpamVO8z0WAEgbnXIoQa/3xKAGPJXszD2zK+06KnXS5xuCuD # nHojzIx7Gv4HEIs4huY39/YL2HMaxrqvXC8IAu51eqY+TPnETT+WI3HxlZ2OMIsn # 1Jnn+FeZfA1bhKx4JsD9MyHM1ovbjOwYkHOlzjU6fmTFFPGKRy0nxnjMNCBcXHQ+ # unemc/9BhEFup76tkX+JIlSBrPre5Mnh93DsGKSapwKPKq+fQhUDmzXY2r3OvQZX # ryxO4PJkCNTM1wZU6GeEDPWVfhgBKHUMv+tr9Mf9iBlyXRsmXLSEl7AFUUaFlgAL # dSMyiAaUlfvGa7Gtta9eFAJ/GeaiuJu2CYq6lvtRrNIHflLm3gVCef8gmwM5Eqxm # 3PNzEoabKyQQfz69j9RCLpoutMBq1sg2IzxW8UjAFupugcIABjLf0Sl11qA0/B89 # YX67B0ynQD9ajI2GS8ULid/tvEiJVgdZ2Ua3U3xpG54vKG1/54EUiCP8TtoIuoMy # bKg8AU9EIPN962PxoAwS+bSSdCu7/zBjVpg4T/zIzWRdgSjRsE21Swu5Ca934ng5 # VpVUuiwtI/zvHgqaiORu+w== # =UbqJ # -----END PGP SIGNATURE----- # gpg: Signature made Mon 06 Nov 2023 23:31:33 HKT # gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE # gpg: issuer "peter.maydell@linaro.org" # gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full] # gpg: aka "Peter Maydell <pmaydell@gmail.com>" [full] # gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full] # gpg: aka "Peter Maydell <peter@archaic.org.uk>" [unknown] # Primary key fingerprint: E1A5 C593 CD41 9DE2 8E83 15CF 3C25 25ED 1436 0CDE
* tag 'pull-target-arm-20231106' of https://git.linaro.org/people/pmaydell/qemu-arm:
  target/arm: Fix A64 LDRA immediate decode
  hw/arm/vexpress-a9: Remove useless mapping of RAM at address 0
  io/channel-socket: qio_channel_socket_flush(): improve msg validation
  hw/core/loader: gunzip(): initialize z_stream
  block/nvme: nvme_process_completion() fix bound for cid
  mc146818rtc: rtc_set_time(): initialize tm to zeroes
  util/filemonitor-inotify: qemu_file_monitor_watch(): assert no overflow
  hw/i386/intel_iommu: vtd_slpte_nonzero_rsvd(): assert no overflow
  tests/qtest/bios-tables-test: Update virt SPCR and DBG2 golden references
  hw/arm/virt: Report correct register sizes in ACPI DBG2/SPCR tables.
  tests/qtest/bios-tables-test: Allow changes to virt SPCR and DBG2
  hw/arm/virt: fix PMU IRQ registration
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
ad4feaca | 30-Oct-2023 |
Naohiro Aota <nao.aota@gmail.com> |
file-posix: fix over-writing of returning zone_append offset
raw_co_zone_append() sets "s->offset", where "s" is a "BDRVRawState *". This pointer is used later in raw_co_prw() to save the block address where the data is written.
When multiple IOs are in flight at the same time, a later IO's raw_co_zone_append() call overwrites a former IO's offset address before raw_co_prw() completes. As a result, the former zone append IO returns the initial value (= the start address of the writing zone) instead of the proper address.
Fix the issue by passing the offset pointer to raw_co_prw() instead of passing it through s->offset. Also, remove "offset" from BDRVRawState as it is no longer used.
Fixes: 4751d09adcc3 ("block: introduce zone append write for zoned devices") Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com> Message-Id: <20231030073853.2601162-1-naohiro.aota@wdc.com> Reviewed-by: Sam Li <faithilikerun@gmail.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
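The shape of the fix, as a hedged sketch (the parameter names and the exact prototype are illustrative, not the literal file-posix patch):

    /* Before: the per-request result pointer was parked in shared state,
     * so a second in-flight zone append could overwrite it:
     *
     *     s->offset = offset;                  // raw_co_zone_append()
     *     ...
     *     *s->offset = written_sector_addr;    // raw_co_prw(), possibly
     *                                          // for a different request
     *
     * After: the pointer travels with the request itself, so each I/O
     * reports its result through its own pointer. */
    return raw_co_prw(bs, offset, len, qiov, QEMU_AIO_ZONE_APPEND);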
10b9e080 | 24-Aug-2023 |
Sam Li <faithilikerun@gmail.com> |
block/file-posix: fix update_zones_wp() caller
When a zoned request fails, only the wp of the target zones needs to be updated, so that in-flight writes on the other zones are not disrupted. The wp is updated successfully after the request completes.
Fix the callers to pass the right offset and nr_zones.
Signed-off-by: Sam Li <faithilikerun@gmail.com> Message-Id: <20230825040556.4217-1-faithilikerun@gmail.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> [hreitz: Rebased and fixed comment spelling] Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
b2b10904 | 03-Oct-2023 |
Jean-Louis Dupond <jean-louis@dupond.be> |
qcow2: keep reference on zeroize with discard-no-unref enabled
When the discard-no-unref flag is enabled, we keep the reference for normal discard requests. But when a discard is executed on a snapshot/qcow2 image with backing, the discards are saved as zero clusters in the snapshot image.
When committing the snapshot to the backing file, it is not discard_in_l2_slice that gets called but zero_in_l2_slice, which did not have any logic to keep the reference when discard-no-unref is enabled.
Therefore, add logic to the zero_in_l2_slice call to keep the reference on commit.
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/1621 Signed-off-by: Jean-Louis Dupond <jean-louis@dupond.be> Message-Id: <20231003125236.216473-2-jean-louis@dupond.be> [hreitz: Made the documentation change more verbose, as discussed on-list] Signed-off-by: Hanna Czenczek <hreitz@redhat.com>
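The gist of the added logic, as a hedged sketch: the helper names follow qcow2's L2-entry accessors, but the control flow is simplified and the flag variable is illustrative rather than the literal patch.

    uint64_t entry = get_l2_entry(s, l2_slice, i);

    if (keep_reference) {
        /* discard-no-unref: keep the host cluster mapping and only mark
         * the guest cluster as reading zeroes, so the existing allocation
         * is preserved and the image does not fragment */
        set_l2_entry(s, l2_slice, i, entry | QCOW_OFLAG_ZERO);
    } else {
        /* default behaviour: drop the mapping and free the host cluster */
        set_l2_entry(s, l2_slice, i, QCOW_OFLAG_ZERO);
        /* ... free the cluster as before ... */
    }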
cc8fb0c3 | 06-Nov-2023 |
Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> |
block/nvme: nvme_process_completion() fix bound for cid
NVMeQueuePair::reqs has length NVME_NUM_REQS, which is one less than NVME_QUEUE_SIZE.
Fixes: 1086e95da17050 ("block/nvme: switch to a NVMeRequest freelist") Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Maksim Davydov <davydov-max@yandex-team.ru> Message-id: 20231017125941.810461-5-vsementsov@yandex-team.ru Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
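In other words, the sanity check on the command identifier in nvme_process_completion() must bound cid by the request array size rather than the queue depth. A hedged sketch of the check (the warning text is illustrative):

    /* cid is 1-based and indexes into q->reqs[NVME_NUM_REQS] */
    if (cid == 0 || cid > NVME_NUM_REQS) {    /* previously: > NVME_QUEUE_SIZE */
        warn_report("NVMe: unexpected cid %d", cid);
        continue;
    }
    NVMeRequest *req = &q->reqs[cid - 1];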
721da039 | 26-Oct-2023 |
Cédric Le Goater <clg@redhat.com> |
util/uuid: Add UUID_STR_LEN definition
qemu_uuid_unparse() includes a trailing NUL when writing the uuid string, so the buffer size should be UUID_FMT_LEN + 1 bytes. Add a define for this size and use it where required.
Cc: Fam Zheng <fam@euphon.net> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: "Denis V. Lunev" <den@openvz.org> Signed-off-by: Cédric Le Goater <clg@redhat.com>
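The resulting definition and intended usage look roughly like this (a minimal sketch; the local variables are illustrative):

    #define UUID_STR_LEN (UUID_FMT_LEN + 1)   /* 36 format characters + trailing NUL */

    QemuUUID uuid;
    char uuid_str[UUID_STR_LEN];

    qemu_uuid_generate(&uuid);
    qemu_uuid_unparse(&uuid, uuid_str);       /* buffer has room for the trailing NUL */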
e0ee3a8f | 25-Oct-2023 |
Steve Sistare <steven.sistare@oracle.com> |
cpr: relax blockdev migration blockers
Some blockdevs block migration because they do not support sharing across hosts and/or do not support dirty bitmaps. These prohibitions do not apply if the old and new qemu processes do not run concurrently, and if new qemu starts on the same host as old, which is the case for cpr. Narrow the scope of these blockers so they only apply to normal mode. They will not block cpr modes when they are added in subsequent patches.
No functional change until a new mode is added.
Signed-off-by: Steve Sistare <steven.sistare@oracle.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com> Message-ID: <1698263069-406971-4-git-send-email-steven.sistare@oracle.com>
76cb2f24 | 31-Oct-2023 |
Fiona Ebner <f.ebner@proxmox.com> |
mirror: return mirror-specific information upon query
To start out, only actively-synced is returned.
For example, this is useful for jobs that started out in background mode and switched to active mode. Once actively-synced is true, it's clear that the mode switch has been completed. Note that completion of the switch might happen much earlier, e.g. if the switch happens before the job is ready, once all background operations have finished. It's assumed that whether the disks are actively-synced or not is more interesting than whether the mode switch completed. That information can still be added if required in the future.
In presence of an iothread, the actively_synced member is now shared between the iothread and the main thread, so turn accesses to it atomic.
Requires to adapt the output for iotest 109.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Message-ID: <20231031135431.393137-10-f.ebner@proxmox.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
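The atomic-access part follows the usual pattern for a flag shared between an iothread and the main thread; a hedged sketch (not the literal mirror code, and the QAPI field path is an assumption):

    /* mirror job, possibly running in an iothread: */
    qatomic_set(&s->actively_synced, true);

    /* query-block-jobs handler in the main thread: */
    info->u.mirror.actively_synced = qatomic_read(&s->actively_synced);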
d67c54d0 | 31-Oct-2023 |
Fiona Ebner <f.ebner@proxmox.com> |
qapi/block-core: use JobType for BlockJobInfo's type
This is in preparation for turning BlockJobInfo into a union with @type as the discriminator, which requires @type to be an enum. Even without that requirement, it's nicer to have an enum instead of a str here.
No functional change is intended.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Reviewed-by: Markus Armbruster <armbru@redhat.com> Message-ID: <20231031135431.393137-7-f.ebner@proxmox.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2d400d15 | 31-Oct-2023 |
Fiona Ebner <f.ebner@proxmox.com> |
mirror: implement mirror_change method
which allows switching the @copy-mode from 'background' to 'write-blocking'.
This is useful for management applications, so they can start out in background mode to avoid limiting guest write speed and switch to active mode when certain criteria are fulfilled.
In presence of an iothread, the copy_mode member is now shared between the iothread and the main thread, so turn accesses to it atomic.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Message-ID: <20231031135431.393137-6-f.ebner@proxmox.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
7b32ad22 | 31-Oct-2023 |
Fiona Ebner <f.ebner@proxmox.com> |
block/mirror: determine copy_to_target only once
This is in preparation for allowing the copy_mode to be changed via QMP. When running in an iothread, copy_mode could be changed from the main thread in between the read in bdrv_mirror_top_pwritev() and the read in bdrv_mirror_top_do_write(), so the two could end up disagreeing about whether copy_to_target is true or false. Avoid that scenario by determining copy_to_target only once and passing it to bdrv_mirror_top_do_write() as an argument.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-ID: <20231031135431.393137-5-f.ebner@proxmox.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
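Sketched with illustrative names (the do_write signature below is an assumption, not the actual mirror prototype), the pattern is simply: read the mode once, then hand the decision down so both layers agree on it.

    bool copy_to_target = s->job->copy_mode == MIRROR_COPY_MODE_WRITE_BLOCKING;

    /* the pwritev wrapper and do_write now see the same value, even if
     * copy_mode is changed concurrently from the main thread */
    return bdrv_mirror_top_do_write(bs, MIRROR_METHOD_COPY, copy_to_target,
                                    offset, bytes, qiov, flags);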
058cfca5 | 31-Oct-2023 |
Fiona Ebner <f.ebner@proxmox.com> |
block/mirror: move dirty bitmap to filter
This is in preparation for allowing a switch to active mode without draining. Initialization of the bitmap in mirror_dirty_init() still happens with the original/backing BlockDriverState, which should be fine, because the mirror top has the same length.
Suggested-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Message-ID: <20231031135431.393137-4-f.ebner@proxmox.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
c45d0e1a | 31-Oct-2023 |
Fiona Ebner <f.ebner@proxmox.com> |
block/mirror: set actively_synced even after the job is ready
This is in preparation for allowing a switch from background to active mode, and it ensures that setting actively_synced will not be missed when the switch happens after the job is ready.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com> Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru> Message-ID: <20231031135431.393137-3-f.ebner@proxmox.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
84d61e5f | 13-Sep-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
virtio: use defer_call() in virtio_irqfd_notify()
virtio-blk and virtio-scsi invoke virtio_irqfd_notify() to send Used Buffer Notifications from an IOThread. This involves an eventfd write(2) syscall. Calling this repeatedly when completing multiple I/O requests in a row is wasteful.
Use the defer_call() API to batch together virtio_irqfd_notify() calls made during thread pool (aio=threads), Linux AIO (aio=native), and io_uring (aio=io_uring) completion processing.
Behavior is unchanged for emulated devices that do not use defer_call_begin()/defer_call_end() since defer_call() immediately invokes the callback when called outside a defer_call_begin()/defer_call_end() region.
With fio rw=randread bs=4k iodepth=64 numjobs=8, IOPS increases by ~9% with a single IOThread and 8 vCPUs. With iodepth=1 it decreases by ~1%, but this could be noise. Detailed performance data and configuration specifics are available here: https://gitlab.com/stefanha/virt-playbooks/-/tree/blk_io_plug-irqfd
This duplicates the BH that virtio-blk uses for batching. The next commit will remove it.
Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230913200045.1024233-4-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
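The shape of the change in the completion paths, as a hedged sketch (the loop and helper names are illustrative, not the actual AIO backend code):

    defer_call_begin();
    for (int i = 0; i < num_completed; i++) {
        /* completing a request ends up in virtio_irqfd_notify(), whose
         * eventfd write is now deferred instead of issued immediately */
        complete_request(completed[i]);
    }
    defer_call_end();   /* the deferred notification fires once, here */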
433fcea4 | 13-Sep-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
util/defer-call: move defer_call() to util/
The networking subsystem may wish to use defer_call(), so move the code to util/ where it can be reused.
As a reminder of what defer_call() does:
This API defers a function call within a defer_call_begin()/defer_call_end() section, allowing multiple calls to batch up. This is a performance optimization that is used in the block layer to submit several I/O requests at once instead of individually:
   defer_call_begin();          <-- start of section
   ...
   defer_call(my_func, my_obj); <-- deferred my_func(my_obj) call
   defer_call(my_func, my_obj); <-- another
   defer_call(my_func, my_obj); <-- another
   ...
   defer_call_end();            <-- end of section, my_func(my_obj) is called once
Suggested-by: Ilya Maximets <i.maximets@ovn.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230913200045.1024233-3-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
ccee48aa | 13-Sep-2023 |
Stefan Hajnoczi <stefanha@redhat.com> |
block: rename blk_io_plug_call() API to defer_call()
Prepare to move the blk_io_plug_call() API out of the block layer so that other subsystems can use this deferred call mechanism. Rename it to defer_call() but leave the code in block/plug.c.
The next commit will move the code out of the block layer.
Suggested-by: Ilya Maximets <i.maximets@ovn.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Paul Durrant <paul@xen.org> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Message-ID: <20230913200045.1024233-2-stefanha@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>