53e8868d | 26-May-2023 | Paolo Bonzini <pbonzini@redhat.com>

meson: remove OS definitions from config_targetos
CONFIG_DARWIN, CONFIG_LINUX and CONFIG_BSD are used in some rules, but only CONFIG_LINUX has substantial use. Convert them all to if...endif.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

23c983c8 | 05-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>

block: remove outdated AioContext locking comments
The AioContext lock no longer exists.
There is one noteworthy change:
- * More specifically, these functions use BDRV_POLL_WHILE(bs), which
- * requires the caller to be either in the main thread and hold
- * the BlockdriverState (bs) AioContext lock, or directly in the
- * home thread that runs the bs AioContext. Calling them from
- * another thread in another AioContext would cause deadlocks.
+ * More specifically, these functions use BDRV_POLL_WHILE(bs), which requires
+ * the caller to be either in the main thread or directly in the home thread
+ * that runs the bs AioContext. Calling them from another thread in another
+ * AioContext would cause deadlocks.
I am not sure whether deadlocks are still possible. Maybe they have just moved to the fine-grained locks that have replaced the AioContext. Since I am not sure if the deadlocks are gone, I have kept the substance unchanged and just removed mention of the AioContext.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Message-ID: <20231205182011.1976568-15-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
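
A minimal sketch of the calling pattern the comment describes, assuming QEMU's usual completion-flag idiom (start_async_op() is a hypothetical helper, not a real API):

    bool done = false;

    start_async_op(bs, &done);   /* hypothetical: completes asynchronously */
    BDRV_POLL_WHILE(bs, !done);  /* poll events until the operation finishes */

Per the comment, this may only run in the main thread or in the home thread of the bs AioContext; anything else risks the deadlocks mentioned above.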

b49f4755 | 05-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>

block: remove AioContext locking
This is the big patch that removes aio_context_acquire()/aio_context_release() from the block layer and affected block layer users.
There isn't a clean way to split this patch and the reviewers are likely the same group of people, so I decided to do it in one patch.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Message-ID: <20231205182011.1976568-7-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
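
A hedged before/after sketch of the pattern this patch removes, with bdrv_flush() standing in for any block layer call:

    /* Before: block layer calls were wrapped in the AioContext lock. */
    AioContext *ctx = bdrv_get_aio_context(bs);

    aio_context_acquire(ctx);
    bdrv_flush(bs);
    aio_context_release(ctx);

    /* After: the acquire/release pair is dropped; the block layer relies on
     * the graph lock and other fine-grained locking instead. */
    bdrv_flush(bs);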

6bc30f19 | 05-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>

graph-lock: remove AioContext locking
Stop acquiring/releasing the AioContext lock in bdrv_graph_wrlock()/bdrv_graph_wrunlock() since the lock no longer has any effect.
The distinction between bdrv_graph_wrunlock() and bdrv_graph_wrunlock_ctx() becomes meaningless and they can be collapsed into one function.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231205182011.1976568-6-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
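
An illustrative before/after for the collapse described here (prototypes sketched from the commit message, not copied from the tree):

    /* Before: callers chose an unlock variant depending on whether the
     * BlockDriverState was still alive. */
    bdrv_graph_wrunlock(bs);
    bdrv_graph_wrunlock_ctx(ctx);

    /* After: with no AioContext lock to re-acquire, one parameterless
     * function suffices. */
    bdrv_graph_wrunlock();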

d3007d34 | 01-Dec-2023 | Kevin Wolf <kwolf@redhat.com>

block: Fix crash when loading snapshot on inactive node
bdrv_is_read_only() only checks if the node is configured to be read-only eventually, but even if it returns false, writing to the node may not be permitted at the moment (because it's inactive).
bdrv_is_writable() checks that the node can be written to right now, and this is what the snapshot operations really need.
Change bdrv_can_snapshot() to use bdrv_is_writable() to fix crashes like the following:
$ ./qemu-system-x86_64 -hda /tmp/test.qcow2 -loadvm foo -incoming defer
qemu-system-x86_64: ../block/io.c:1990: int bdrv_co_write_req_prepare(BdrvChild *, int64_t, int64_t, BdrvTrackedRequest *, int): Assertion `!(bs->open_flags & BDRV_O_INACTIVE)' failed.
The resulting error message after this patch isn't perfect yet, but at least it doesn't crash any more:
$ ./qemu-system-x86_64 -hda /tmp/test.qcow2 -loadvm foo -incoming defer
qemu-system-x86_64: Device 'ide0-hd0' is writable but does not support snapshots
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231201142520.32255-2-kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
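
A hedged sketch of the check; the real bdrv_can_snapshot() performs additional driver checks that are elided here:

    bool can_snapshot(BlockDriverState *bs)   /* simplified stand-in */
    {
        /* bdrv_is_writable() is true only if writing is permitted right now,
         * so inactive nodes are rejected here instead of failing later on
         * the BDRV_O_INACTIVE assertion; bdrv_is_read_only() was not enough. */
        if (!bdrv_is_writable(bs)) {
            return false;
        }
        return true;   /* driver snapshot support checks omitted */
    }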

cd0c0db0 | 14-Sep-2023 | Stefan Hajnoczi <stefanha@redhat.com>

block/file-posix: set up Linux AIO and io_uring in the current thread
The file-posix block driver currently only sets up Linux AIO and io_uring in the BDS's AioContext. In the multi-queue block layer we must be able to submit I/O requests in AioContexts that do not have Linux AIO and io_uring set up yet since any thread can call into the block driver.
Set up Linux AIO and io_uring for the current AioContext during request submission. We lose the ability to return an error from .bdrv_file_open() when Linux AIO and io_uring setup fails (e.g. due to resource limits). Instead the user only gets warnings and we fall back to aio=threads. This is still better than a fatal error after startup.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-ID: <20230914140101.1065008-2-stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
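
A hedged sketch of the lazy setup at submission time; it follows the shape of QEMU's aio_setup_linux_aio(), but the surrounding state (s->use_linux_aio) is illustrative:

    AioContext *ctx = qemu_get_current_aio_context();
    Error *local_err = NULL;

    if (s->use_linux_aio && !aio_setup_linux_aio(ctx, &local_err)) {
        /* No longer a fatal .bdrv_file_open() error: warn and fall back
         * to the thread pool (aio=threads). */
        warn_report_err(local_err);
        s->use_linux_aio = false;
    }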

411132c9 | 27-Nov-2023 | Kevin Wolf <kwolf@redhat.com>

export/vhost-user-blk: Fix consecutive drains
The vhost-user-blk export implements AioContext switches in its drain implementation. This means that on drain_begin, it detaches the server from its AioContext and on drain_end, attaches it again and schedules the server->co_trip coroutine in the updated AioContext.
However, nothing guarantees that server->co_trip is even safe to be scheduled. Not only is it unclear that the coroutine is actually in a state where it can be reentered externally without causing problems, but with two consecutive drains, it is possible that the scheduled coroutine didn't have a chance yet to run and trying to schedule an already scheduled coroutine a second time crashes with an assertion failure.
Following the model of NBD, this commit makes the vhost-user-blk export shut down server->co_trip during drain so that resuming the export means creating and scheduling a new coroutine, which is always safe.
There is one exception: If the drain call didn't poll (for example, this happens in the context of bdrv_graph_wrlock()), then the coroutine didn't have a chance to shut down. However, in this case the AioContext can't have changed; changing the AioContext always involves a polling drain. So in this case we can simply assert that the AioContext is unchanged and just leave the coroutine running or wake it up if it has yielded to wait for the AioContext to be attached again.
Fixes: e1054cd4aad03a493a5d1cded7508f7c348205bf
Fixes: https://issues.redhat.com/browse/RHEL-1708
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231127115755.22846-1-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
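
An illustrative sketch of the drain_end logic described above; the wait_idle flag and field layout are simplified stand-ins for the real vhost-user-server code:

    if (!server->co_trip) {
        /* A polling drain shut the coroutine down: resuming means creating
         * and scheduling a fresh coroutine, which is always safe. */
        server->co_trip = qemu_coroutine_create(vu_client_trip, server);
        aio_co_schedule(server->ctx, server->co_trip);
    } else if (server->wait_idle) {
        /* The drain didn't poll, so the AioContext cannot have changed:
         * wake the still-existing coroutine if it yielded. */
        aio_co_wake(server->co_trip);
    }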

9fb7b350 | 24-Nov-2023 | Fam Zheng <fam@euphon.net>

vmdk: Don't corrupt desc file in vmdk_write_cid
If the text description file is larger than DESC_SIZE, we force the last byte in the buffer to be 0 and write it out.
This results in corruption.
Try to allocate a big buffer in this case.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1923
Signed-off-by: Fam Zheng <fam@euphon.net>
Message-ID: <20231124115654.3239137-1-fam@euphon.net>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
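
A hedged sketch of the approach: detect that the description does not fit and grow the buffer instead of forcing a NUL into the last byte (offsets and error handling simplified from block/vmdk.c):

    char *desc = g_malloc0(DESC_SIZE + 1);
    int ret = bdrv_co_pread(bs->file, s->desc_offset << 9, DESC_SIZE, desc, 0);

    if (ret >= 0 && !memchr(desc, '\0', DESC_SIZE)) {
        /* Description larger than DESC_SIZE: re-read into a buffer sized
         * from the actual file length instead of truncating it. */
    }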

1dbc7d34 | 15-Nov-2023 | Kevin Wolf <kwolf@redhat.com>

stream: Fix AioContext locking during bdrv_graph_wrlock()
In stream_prepare(), we need to temporarily drop the AioContext lock that job_prepare_locked() took for us while calling the graph write lock functions which can poll.
All block nodes related to this block job are in the same AioContext, so we can pass any of them to bdrv_graph_wrlock()/bdrv_graph_wrunlock(). Unfortunately, the one that we picked is base, which can be NULL - and in this case the AioContext lock is not released and deadlocks can occur.
Fix this by passing s->target_bs, which is never NULL.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-4-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
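
The change itself, sketched (bdrv_graph_wrlock() still took a BlockDriverState argument at this point in the series):

    bdrv_graph_wrlock(base);          /* before: with base == NULL, no
                                       * AioContext lock gets dropped */
    bdrv_graph_wrlock(s->target_bs);  /* after: target_bs is never NULL */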

6bc0bcc8 | 15-Nov-2023 | Kevin Wolf <kwolf@redhat.com>

block: Fix deadlocks in bdrv_graph_wrunlock()
bdrv_graph_wrunlock() calls aio_poll(), which may run callbacks that have a nested event loop. Nested event loops can depend on other iothreads making progress, so in order to allow them to make progress it must not hold the AioContext lock of another thread while calling aio_poll().
This introduces a @bs parameter to bdrv_graph_wrunlock() whose AioContext is temporarily dropped (which matches bdrv_graph_wrlock()), and a bdrv_graph_wrunlock_ctx() that can be used if the BlockDriverState doesn't necessarily exist any more when unlocking.
This also requires a change to bdrv_schedule_unref(), which was relying on the incorrectly taken lock. It needs to take the lock itself now. While this is a separate bug, it can't be fixed in a separate patch because otherwise the intermediate state would either deadlock or try to release a lock that we don't even hold.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-3-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
[kwolf: Fixed up bdrv_schedule_unref()]
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
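
A hedged usage sketch of the resulting API (simplified; the _ctx variant is for callers where the BDS may be gone once the graph change is done):

    bdrv_graph_wrlock(bs);
    /* ... graph manipulation that may delete bs ... */
    bdrv_graph_wrunlock_ctx(ctx);   /* pass the AioContext instead of bs */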

bb092d6d | 15-Nov-2023 | Kevin Wolf <kwolf@redhat.com>

block: Fix bdrv_graph_wrlock() call in blk_remove_bs()
While not all callers of blk_remove_bs() are correct in this respect, the assumption in the function is that callers hold the AioContext lock of the BlockBackend (this is required by the drain calls in it).
In order to avoid deadlock in the nested event loop, bdrv_graph_wrlock() has then to be called with the root BlockDriverState as its parameter instead of NULL, so that this AioContext lock is temporarily dropped.
Fixes: https://issues.redhat.com/browse/RHEL-1761
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231115172012.112727-2-kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
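
A simplified sketch of the fix in blk_remove_bs(); variable handling is condensed from the real function:

    BlockDriverState *bs = blk_bs(blk);

    bdrv_graph_wrlock(bs);   /* was bdrv_graph_wrlock(NULL): passing the root
                              * BDS lets its AioContext lock be dropped while
                              * the nested event loop runs */
    bdrv_root_unref_child(blk->root);
    bdrv_graph_wrunlock();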

6ab4f1c9 | 23-Oct-2023 | Thomas Huth <thuth@redhat.com>

block/snapshot: Fix compiler warning with -Wshadow=local
No need to declare a new variable in the inner code block here, we can re-use the "ret" variable that has been declared at the beginning of the function. With this change, the code can now be successfully compiled with -Wshadow=local again.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20231023175038.111607-1-thuth@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
[Commit message tweaked]
Signed-off-by: Markus Armbruster <armbru@redhat.com>
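
An illustrative example of the warning class being fixed (not the actual QEMU code):

    int ret = 0;

    if (cond) {
        int ret = do_thing();   /* -Wshadow=local: shadows the outer 'ret' */
    }

    /* Fixed: re-use the variable declared at the beginning of the function. */
    if (cond) {
        ret = do_thing();
    }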

1f051dcb | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Protect bs->file with graph_lock
Almost all functions that access bs->file already take the graph lock now. Add locking to the remaining users and finally annotate the struct field itself as protected by the graph lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-25-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
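
The field annotation presumably takes this shape, using QEMU's clang Thread Safety Analysis macros (surrounding fields omitted):

    struct BlockDriverState {
        /* Protected by the graph lock: readers need at least the reader
         * lock, writers the writer lock. */
        BdrvChild * GRAPH_RDLOCK_PTR file;
    };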

a4b740db | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Take graph lock for most of .bdrv_open
Most implementations of .bdrv_open first open their file child (which is an operation that internally takes the write lock and therefore we shouldn't hold the graph lock while calling it), and afterwards many operations that require holding the graph lock, e.g. for accessing bs->file.
This changes block drivers that follow this pattern to take the graph lock after opening the child node.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-24-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
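
A hedged sketch of the resulting .bdrv_open pattern; the driver and its header check are hypothetical, the guard macro is QEMU's GRAPH_RDLOCK_GUARD_MAINLOOP():

    static int example_open(BlockDriverState *bs, QDict *options, int flags,
                            Error **errp)
    {
        int ret;

        /* Opening the child takes the write lock internally, so the graph
         * lock must not be held yet. */
        ret = bdrv_open_file_child(NULL, options, "file", bs, errp);
        if (ret < 0) {
            return ret;
        }

        /* Only now lock the graph for the code that accesses bs->file. */
        GRAPH_RDLOCK_GUARD_MAINLOOP();
        return example_read_header(bs, errp);   /* hypothetical */
    }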

65ff757d | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

vhdx: Take locks for accessing bs->file
This updates the vhdx code to add GRAPH_RDLOCK annotations for all places that read bs->file.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-23-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

8f897341 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

qcow2: Take locks for accessing bs->file
This updates the qcow2 code to add GRAPH_RDLOCK annotations for all places that read bs->file.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-22-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

79a55866 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Add missing GRAPH_RDLOCK annotations
This adds GRAPH_RDLOCK to some driver callbacks that are already called with the graph lock held, and which will need the annotation because they access bs->file, but don't have it yet.
This also covers a few callbacks that were not marked GRAPH_RDLOCK before, but where updating BlockDriver is trivially possible.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-21-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
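
The shape of such an annotation on a driver callback, as a hedged sketch (driver and callback are hypothetical stand-ins):

    static int coroutine_fn GRAPH_RDLOCK
    example_co_flush(BlockDriverState *bs)
    {
        /* The annotation lets the compiler verify that every caller holds
         * the graph reader lock before bs->file is dereferenced. */
        return bdrv_co_flush(bs->file->bs);
    }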

e2dd2737 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Introduce bdrv_co_change_backing_file()
bdrv_change_backing_file() is called both inside and outside coroutine context. This makes it difficult for it to take the graph lock internally. It also means that driver implementations need to be able to run outside of coroutines, too. Switch it to the usual model with a coroutine based implementation and a co_wrapper instead. The new function is marked GRAPH_RDLOCK.
As the co_wrapper now runs the function in the AioContext of the node (as it should always have done), this is not GLOBAL_STATE_CODE() any more.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-20-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
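
A hedged sketch of QEMU's co_wrapper convention as applied here; the parameter lists are illustrative, not the exact prototypes:

    int coroutine_fn GRAPH_RDLOCK
    bdrv_co_change_backing_file(BlockDriverState *bs, const char *backing_file,
                                const char *backing_fmt, bool require);

    /* The generated wrapper is callable outside coroutine context and runs
     * the coroutine in the node's AioContext: */
    int co_wrapper_bdrv_rdlock
    bdrv_change_backing_file(BlockDriverState *bs, const char *backing_file,
                             const char *backing_fmt, bool require);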

244b26d2 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

blkverify: Add locking for request_fn
This is either bdrv_co_preadv() or bdrv_co_pwritev() which both need to have the graph locked. Annotate the function pointer accordingly and add locking to its callers.
This shouldn't actually have resulted in a bug because the graph lock is already held by blkverify_co_prwv(), which waits for the coroutines to terminate. Annotate with GRAPH_RDLOCK as well to make this clearer.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-19-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
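
Annotating the function pointer presumably follows QEMU's GRAPH_RDLOCK_PTR convention, roughly like this sketch (typedef name and parameters are illustrative):

    typedef int coroutine_fn GRAPH_RDLOCK_PTR (*request_fn)(
        BdrvChild *child, int64_t offset, int64_t bytes,
        QEMUIOVector *qiov, BdrvRequestFlags flags);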

004915a9 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Protect bs->backing with graph_lock
Almost all functions that access bs->backing already take the graph lock now. Add locking to the remaining users and finally annotate the struct field itself as protected by the graph lock.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-18-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

ccd6a379 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_replace_node() GRAPH_WRLOCK
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_replace_node(). Its callers may already want to hold the graph lock and so wouldn't be able to call functions that take it internally.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-17-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
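
A hedged caller-side sketch with the signatures as of this point in the series (bdrv_graph_wrlock() still took a BDS argument, NULL being allowed):

    bdrv_graph_wrlock(NULL);
    bdrv_replace_node(from_bs, to_bs, errp);   /* now asserts the writer lock
                                                * instead of taking it */
    bdrv_graph_wrunlock();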

d0f9fd94 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_set_backing_hd_drained() GRAPH_WRLOCK
Instead of taking the writer lock internally, require callers to already hold it when calling bdrv_set_backing_hd_drained(). Basically everything in the function needs the lock and its callers may already want to hold the graph lock and so wouldn't be able to call functions that take it internally.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-14-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

78a9c76e | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_cow_child() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_cow_child() need to hold a reader lock for the graph because it accesses bs->backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-13-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

79bb7627 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_chain_contains() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_chain_contains() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_bs(), which accesses bs->file/backing.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-11-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

9275fc72 | 27-Oct-2023 | Kevin Wolf <kwolf@redhat.com>

block: Mark bdrv_(un)freeze_backing_chain() and callers GRAPH_RDLOCK
This adds GRAPH_RDLOCK annotations to declare that callers of bdrv_(un)freeze_backing_chain() need to hold a reader lock for the graph because it calls bdrv_filter_or_cow_child(), which accesses bs->file/backing.
Use the opportunity to make bdrv_is_backing_chain_frozen() static, it has no external callers.
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
Message-ID: <20231027155333.420094-10-kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>