Revision tags: v9.2.0, v9.1.2, v9.1.1, v9.1.0

# 6370d13c | 21-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>
Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging
Block layer patches
- virtio-blk: Multiqueue support (configurable iothread per queue)
- Made NBD export and hw/scsi thread-safe without AioContext lock
- Fix crash when loading snapshot on inactive node
* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (33 commits)
  virtio-blk: add iothread-vq-mapping parameter
  qdev: add IOThreadVirtQueueMappingList property type
  qdev-properties: alias all object class properties
  string-output-visitor: show structs as "<omitted>"
  block-coroutine-wrapper: use qemu_get_current_aio_context()
  block: remove outdated AioContext locking comments
  job: remove outdated AioContext locking comments
  scsi: remove outdated AioContext lock comment
  docs: remove AioContext lock from IOThread docs
  aio: remove aio_context_acquire()/aio_context_release() API
  aio-wait: draw equivalence between AIO_WAIT_WHILE() and AIO_WAIT_WHILE_UNLOCKED()
  scsi: remove AioContext locking
  block: remove bdrv_co_lock()
  block: remove AioContext locking
  graph-lock: remove AioContext locking
  aio: make aio_context_acquire()/aio_context_release() a no-op
  tests: remove aio_context_acquire() tests
  scsi: assert that callbacks run in the correct AioContext
  virtio-scsi: replace AioContext lock with tmf_bh_lock
  dma-helpers: don't lock AioContext in dma_blk_cb()
  ...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

# b49f4755 | 05-Dec-2023 | Stefan Hajnoczi <stefanha@redhat.com>
block: remove AioContext locking
This is the big patch that removes aio_context_acquire()/aio_context_release() from the block layer and affected block layer users.
There isn't a clean way to split this patch and the reviewers are likely the same group of people, so I decided to do it in one patch.
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
Message-ID: <20231205182011.1976568-7-stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
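To picture what this removal means for callers, here is a minimal sketch. The helper functions are hypothetical; only the calls named in the commit (aio_context_acquire()/aio_context_release(), bdrv_get_aio_context(), a block-layer call such as bdrv_flush()) are real QEMU APIs, and the code is not taken from the patch itself.

    /* Sketch only: hypothetical caller, before this series.  Block-layer
     * calls had to be wrapped in the node's AioContext lock. */
    static int flush_node_old(BlockDriverState *bs)
    {
        AioContext *ctx = bdrv_get_aio_context(bs);
        int ret;

        aio_context_acquire(ctx);
        ret = bdrv_flush(bs);
        aio_context_release(ctx);
        return ret;
    }

    /* After this series the acquire/release pair is simply dropped. */
    static int flush_node_new(BlockDriverState *bs)
    {
        return bdrv_flush(bs);
    }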

# 6c9ae1ce | 31-Oct-2023 | Stefan Hajnoczi <stefanha@redhat.com>
Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging
Block layer patches
- virtio-blk: use blk_io_plug_call() instead of notification BH
- mirror: allow switching from background to active mode
- qemu-img rebase: add compression support
- Fix locking in media change monitor commands
- Fix a few blockjob-related deadlocks when using iothread
* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (27 commits)
  iotests: add test for changing mirror's copy_mode
  mirror: return mirror-specific information upon query
  blockjob: query driver-specific info via a new 'query' driver method
  qapi/block-core: turn BlockJobInfo into a union
  qapi/block-core: use JobType for BlockJobInfo's type
  mirror: implement mirror_change method
  block/mirror: determine copy_to_target only once
  block/mirror: move dirty bitmap to filter
  block/mirror: set actively_synced even after the job is ready
  blockjob: introduce block-job-change QMP command
  virtio-blk: remove batch notification BH
  virtio: use defer_call() in virtio_irqfd_notify()
  util/defer-call: move defer_call() to util/
  block: rename blk_io_plug_call() API to defer_call()
  blockdev: mirror: avoid potential deadlock when using iothread
  block: avoid potential deadlock during bdrv_graph_wrlock() in bdrv_close()
  blockjob: drop AioContext lock before calling bdrv_graph_wrlock()
  iotests: Test media change with iothreads
  block: Fix locking in media change monitor commands
  iotests: add tests for "qemu-img rebase" with compression
  ...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

# 61a3a5a7 | 31-Oct-2023 | Fiona Ebner <f.ebner@proxmox.com>
blockjob: introduce block-job-change QMP command
which will allow changing job-type-specific options after job creation.
In the JobVerbTable, the same allow bits as for set-speed are used, because set-speed can be considered an existing change command.
Signed-off-by: Fiona Ebner <f.ebner@proxmox.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20231031135431.393137-2-f.ebner@proxmox.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

Revision tags: v8.0.0, v7.2.0

# d5ab9490 | 30-Oct-2022 | Stefan Hajnoczi <stefanha@redhat.com>
Merge tag 'for-upstream' of https://repo.or.cz/qemu/kevin into staging
Block layer patches
- Cleanup bs->backing and bs->file handling
- Refactor bdrv_try_set_aio_context using transactions
- Changes for improved coroutine_fn consistency
- vhost-user-blk: fix the resize crash
- io_uring: Use of io_uring_register_ring_fd() led to breakage, revert
- vvfat: Fix some problems with r/w mode
- Code cleanup
- MAINTAINERS: Fold "Block QAPI, monitor, ..." into "Block layer core"
* tag 'for-upstream' of https://repo.or.cz/qemu/kevin: (58 commits)
  block/block-backend: blk_set_enable_write_cache is IO_CODE
  monitor: switch to *_co_* functions
  vmdk: switch to *_co_* functions
  vhdx: switch to *_co_* functions
  vdi: switch to *_co_* functions
  qed: switch to *_co_* functions
  qcow2: switch to *_co_* functions
  qcow: switch to *_co_* functions
  parallels: switch to *_co_* functions
  mirror: switch to *_co_* functions
  block: switch to *_co_* functions
  commit: switch to *_co_* functions
  vmdk: manually add more coroutine_fn annotations
  qcow2: manually add more coroutine_fn annotations
  qcow: manually add more coroutine_fn annotations
  blkdebug: add missing coroutine_fn annotation for indirect-called functions
  qcow2: add coroutine_fn annotation for indirect-called functions
  block: add missing coroutine_fn annotation to BlockDriverState callbacks
  coroutine-io: add missing coroutine_fn annotation to prototypes
  coroutine-lock: add missing coroutine_fn annotation to prototypes
  ...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

# 142e6907 | 25-Oct-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
block: remove bdrv_try_set_aio_context and replace it with bdrv_try_change_aio_context
No functional change intended.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20221025084952.2139888-11-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# 7fa24b8d | 12-Oct-2022 | Stefan Hajnoczi <stefanha@redhat.com>
Merge tag 'for-upstream' of git://repo.or.cz/qemu/kevin into staging
Block layer patches
- job: replace AioContext lock with job_mutex
- Fixes to make coroutine_fn annotations more accurate
- QAPI schema: Fix incorrect example
- Code cleanup
* tag 'for-upstream' of git://repo.or.cz/qemu/kevin: (50 commits)
  file-posix: Remove unused s->discard_zeroes
  job: remove unused functions
  blockjob: remove unused functions
  block_job_query: remove atomic read
  job.c: enable job lock/unlock and remove Aiocontext locks
  job.h: categorize JobDriver callbacks that need the AioContext lock
  blockjob: protect iostatus field in BlockJob struct
  blockjob: rename notifier callbacks as _locked
  blockjob.h: categorize fields in struct BlockJob
  jobs: protect job.aio_context with BQL and job_mutex
  job: detect change of aiocontext within job coroutine
  jobs: group together API calls under the same job lock
  block/mirror.c: use of job helpers in drivers
  jobs: use job locks also in the unit tests
  jobs: add job lock in find_* functions
  blockjob: introduce block_job _locked() APIs
  job: move and update comments from blockjob.c
  job.c: add job_lock/unlock while keeping job.h intact
  aio-wait.h: introduce AIO_WAIT_WHILE_UNLOCKED
  job.c: API functions not used outside should be static
  ...
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

# 9bd4d3c2 | 26-Sep-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job: remove unused functions
These public functions are not used anywhere, thus can be dropped. Also, since this is the final job API that doesn't use AioContext lock and replaces it with job_lock, adjust all remaining function documentation to clearly specify if the job lock is taken or not.
Also document the locking requirements for a few functions where the second version is not removed.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220926093214.506243-22-eesposit@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# 6f592e5a | 26-Sep-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job.c: enable job lock/unlock and remove Aiocontext locks
Change the job_{lock/unlock} and macros to use job_mutex.
Now that they are not nop anymore, remove the aiocontext to avoid deadlocks.
Therefore:
- when possible, remove the aiocontext lock/unlock pair completely
- if it is also used by some other function, reduce the locking section as much as possible, leaving the job API outside
- change AIO_WAIT_WHILE to AIO_WAIT_WHILE_UNLOCKED, since we are not using the aiocontext lock anymore
The only functions that still need the aiocontext lock are:
- the JobDriver callbacks, already documented in job.h
- job_cancel_sync() in replication.c, which is called with the aio_context lock taken; but now the job uses AIO_WAIT_WHILE_UNLOCKED, so we need to release the lock
Reduce the locking section to only cover the callback invocation and document the functions that take the AioContext lock, to avoid taking it twice.
Also remove real_job_{lock/unlock}, as they are replaced by the public functions.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220926093214.506243-19-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
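A rough sketch of the conversion described above. The wait helpers are hypothetical; only the macro names (AIO_WAIT_WHILE / AIO_WAIT_WHILE_UNLOCKED) and the locking rule come from the commit.

    /* Before this patch: the caller holds the AioContext lock and
     * AIO_WAIT_WHILE() drops/reacquires it while polling. */
    static void wait_for_job_old(Job *job, AioContext *ctx)
    {
        aio_context_acquire(ctx);
        AIO_WAIT_WHILE(ctx, !job_is_completed(job));
        aio_context_release(ctx);
    }

    /* After this patch: the job API is called without the AioContext
     * lock, so the unlocked variant of the macro is used instead. */
    static void wait_for_job_new(Job *job, AioContext *ctx)
    {
        AIO_WAIT_WHILE_UNLOCKED(ctx, !job_is_completed(job));
    }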

# 3ed4f708 | 26-Sep-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
jobs: protect job.aio_context with BQL and job_mutex
In order to make it thread safe, implement a "fake rwlock", where we allow reads under BQL *or* job_mutex held, but writes only under BQL *and* job_mutex.
The only write we have is in child_job_set_aio_ctx, which always happens under drain (so the job is paused). For this reason, introduce job_set_aio_context and make sure that the context is set under BQL, job_mutex and drain. Also make sure all other places where the aiocontext is read are protected.
The reads in commit.c and mirror.c are actually safe, because always done under BQL.
Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220926093214.506243-14-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
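The "fake rwlock" rule above can be pictured with a small sketch. The accessor names are illustrative (hence the _sketch suffix); only the rule itself — read under BQL *or* job_mutex, write under BQL *and* job_mutex while the job is paused — is taken from the commit.

    /* Reader: the caller must hold either the BQL or job_mutex. */
    static AioContext *job_get_aio_context_sketch(Job *job)
    {
        return job->aio_context;
    }

    /* Writer: BQL *and* job_mutex, and only while the job is paused
     * (the single writer runs under drain). */
    static void job_set_aio_context_sketch(Job *job, AioContext *ctx)
    {
        assert(qemu_in_main_thread());   /* BQL held */
        JOB_LOCK_GUARD();                /* still a nop at this stage */
        assert(job->pause_count > 0);
        job->aio_context = ctx;
    }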

# ef02dac2 | 26-Sep-2022 | Paolo Bonzini <pbonzini@redhat.com>
job: detect change of aiocontext within job coroutine
We want to make sure access of job->aio_context is always done under either BQL or job_mutex. The problem is that using aio_co_enter(job->aiocontext, job->co) in job_start and job_enter_cond makes the coroutine immediately resume, so we can't hold the job lock. And caching it is not safe either, as it might change.
job_start is under BQL, so it can freely read job->aiocontext, but job_enter_cond is not. We want to avoid reading job->aio_context in job_enter_cond, therefore:
1) use aio_co_wake(), since it doesn't take an aiocontext as argument but uses job->co->ctx
2) detect a possible discrepancy between job->co->ctx and job->aio_context by checking, right after the coroutine resumes from yielding, whether job->aio_context has changed. If so, reschedule the coroutine to the new context.
Calling bdrv_try_set_aio_context() will issue the following calls (simplified):
* in terms of bdrv callbacks: .drained_begin -> .set_aio_context -> .drained_end
* in terms of child_job functions: child_job_drained_begin -> child_job_set_aio_context -> child_job_drained_end
* in terms of job functions: job_pause_locked -> job_set_aio_context -> job_resume_locked
We can see that after setting the new aio_context, job_resume_locked calls again job_enter_cond, which then invokes aio_co_wake(). But while job->aiocontext has been set in job_set_aio_context, job->co->ctx has not changed, so the coroutine would be entering in the wrong aiocontext.
Using aio_co_schedule in job_resume_locked() might seem as a valid alternative, but the problem is that the bh resuming the coroutine is not scheduled immediately, and if in the meanwhile another bdrv_try_set_aio_context() is run (see test_propagate_mirror() in test-block-iothread.c), we would have the first schedule in the wrong aiocontext, and the second set of drains won't even manage to schedule the coroutine, as job->busy would still be true from the previous job_resume_locked().
The solution is to stick with aio_co_wake() and detect every time the coroutine resumes back from yielding if job->aio_context has changed. If so, we can reschedule it to the new context.
Check for the aiocontext change in job_do_yield_locked because:
1) aio_co_reschedule_self must be called from the running coroutine
2) since child_job_set_aio_context allows changing the aiocontext only while the job is paused, this is the exact place where the coroutine resumes, before running JobDriver's code.
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220926093214.506243-13-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
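The detection described above boils down to one check right after the yield. A simplified sketch (the real job_do_yield_locked() also handles timers and the job lock, which are omitted here, so the function name carries a _sketch suffix):

    static void coroutine_fn job_do_yield_sketch(Job *job)
    {
        qemu_coroutine_yield();

        /* We were woken with aio_co_wake(), i.e. in job->co->ctx.  If
         * job->aio_context changed while the job was paused, move the
         * coroutine there before running any JobDriver code. */
        if (qemu_get_current_aio_context() != job->aio_context) {
            aio_co_reschedule_self(job->aio_context);
        }
    }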

# bf61c583 | 26-Sep-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job: move and update comments from blockjob.c
This comment applies more to job than to blockjob; it was left in blockjob because in the past the whole job logic was implemented there.
Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*.
No functional change intended.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-Id: <20220926093214.506243-7-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# afe1e8a7 | 26-Sep-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job.c: add job_lock/unlock while keeping job.h intact
With "intact" we mean that all job.h functions implicitly take the lock. Therefore API callers are unmodified.
This means that:
- many static functions that will always be called with the job lock held become _locked, and call _locked functions
- all public functions take the lock internally if needed, and call _locked functions
- all public functions called internally by other functions in job.c will have a _locked counterpart (sometimes public), to avoid deadlocks (job lock already taken). These functions are not used for now.
- some public functions called only from external files (not job.c) do not have a _locked() counterpart and take the lock inside. Others won't need the lock at all because they use fields that are only set at initialization and never modified.
job_{lock/unlock} is independent from real_job_{lock/unlock}.
Note: at this stage, job_{lock/unlock} and job lock guard macros are *nop*
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20220926093214.506243-6-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
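The wrapper pattern this commit describes looks roughly like the following sketch. job_is_busy() is a made-up example rather than an actual job.h function; only the pattern (public wrapper takes the lock, static _locked variant assumes it) comes from the commit.

    /* Internal variant: called with the job mutex held. */
    static bool job_is_busy_locked(Job *job)
    {
        return job->busy;
    }

    /* Public variant: takes the lock itself, so existing callers in
     * other files stay unchanged.  (The lock is still a nop here.) */
    bool job_is_busy(Job *job)
    {
        JOB_LOCK_GUARD();
        return job_is_busy_locked(job);
    }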

# 544f4d52 | 26-Sep-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job.c: API functions not used outside should be static
job_event_* functions can all be static, as they are not used outside job.c.
Same applies for job_txn_add_job().
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20220926093214.506243-4-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# 55c5a25a | 26-Sep-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job.c: make job_mutex and job_lock/unlock() public
job mutex will be used to protect the job struct elements and list, replacing AioContext locks.
Right now use a shared lock for all jobs, in order to keep things simple. Once the AioContext lock is gone, we can introduce per-job locks.
To simplify the switch from aiocontext to job lock, introduce *nop* lock/unlock functions and macros. We want to always call job_lock/unlock outside the AioContext locks, and not vice-versa, otherwise we might get a deadlock. This is not straightforward to do, and that's why we start with nop functions. Once everything is protected by job_lock/unlock, we can change the nop into an actual mutex and remove the aiocontext lock.
Since job_mutex is already being used, add static real_job_{lock/unlock} for the existing usage.
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-Id: <20220926093214.506243-2-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
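A sketch of the first step described above: the lock is published as a pair of no-op functions wrapping a QemuMutex, to be turned into real locking only once every caller has been converted. The function names follow the commit text; the bodies are illustrative, not the literal patch.

    QemuMutex job_mutex;

    void job_lock(void)
    {
        /* nop for now; switch to qemu_mutex_lock(&job_mutex) once all
         * callers are converted and the AioContext lock is gone. */
    }

    void job_unlock(void)
    {
        /* nop for now; switch to qemu_mutex_unlock(&job_mutex). */
    }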

# 06753a07 | 22-Sep-2022 | Paolo Bonzini <pbonzini@redhat.com>
job: add missing coroutine_fn annotations
Callers of coroutine_fn must be coroutine_fn themselves, or the call must be within "if (qemu_in_coroutine())". Apply coroutine_fn to functions where this holds.
Reviewed-by: Alberto Faria <afaria@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20220922084924.201610-22-pbonzini@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
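The rule stated above, as a small sketch. The three functions are hypothetical; bdrv_co_flush(), bdrv_flush() and qemu_in_coroutine() are existing QEMU APIs used here purely for illustration.

    /* A coroutine_fn callee ... */
    static int coroutine_fn do_flush_co(BlockDriverState *bs)
    {
        return bdrv_co_flush(bs);
    }

    /* ... may be called from another coroutine_fn ... */
    static int coroutine_fn caller_co(BlockDriverState *bs)
    {
        return do_flush_co(bs);
    }

    /* ... or behind a qemu_in_coroutine() check in a mixed caller. */
    static int caller_mixed(BlockDriverState *bs)
    {
        if (qemu_in_coroutine()) {
            return do_flush_co(bs);
        }
        return bdrv_flush(bs);
    }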

Revision tags: v7.0.0

# d7e2fe4a | 05-Mar-2022 | Peter Maydell <peter.maydell@linaro.org>
Merge remote-tracking branch 'remotes/kwolf-gitlab/tags/for-upstream' into staging
Block layer patches
- qemu-storage-daemon: Add --daemonize
- Fix x-blockdev-amend and block node activation code which incorrectly executed code in the iothread that must run in the main thread.
- Add macros for coroutine-safe TLS variables (required for correctness with LTO)
- Fix crashes with concurrent I/O and bdrv_refresh_limits()
- Split block APIs in global state and I/O
- iotests: Don't refuse to run at all without GNU sed, just skip tests that need it
* remotes/kwolf-gitlab/tags/for-upstream: (50 commits)
  block/amend: Keep strong reference to BDS
  block/amend: Always call .bdrv_amend_clean()
  tests/qemu-iotests: Rework the checks and spots using GNU sed
  iotests/graph-changes-while-io: New test
  iotests: Allow using QMP with the QSD
  block: Make bdrv_refresh_limits() non-recursive
  job.h: assertions in the callers of JobDriver function pointers
  job.h: split function pointers in JobDriver
  block-backend-common.h: split function pointers in BlockDevOps
  block_int-common.h: assertions in the callers of BdrvChildClass function pointers
  block_int-common.h: split function pointers in BdrvChildClass
  block_int-common.h: assertions in the callers of BlockDriver function pointers
  block_int-common.h: split function pointers in BlockDriver
  block/coroutines: I/O and "I/O or GS" API
  block/copy-before-write.h: global state API + assertions
  include/block/snapshot: global state API + assertions
  assertions for blockdev.h global state API
  include/sysemu/blockdev.h: global state API
  assertions for blockjob.h global state API
  include/block/blockjob.h: global state API
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# c70b8031 | 03-Mar-2022 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job.h: assertions in the callers of JobDriver function pointers
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20220303151616.325444-32-eesposit@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>

# d5a9f352 | 29-Dec-2021 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-jobs-2021-12-29' of https://src.openvz.org/scm/~vsementsov/qemu into staging
Jobs patches:
- small fix of job_create()
- refactoring: drop BlockJob.blk field
* tag 'pull-jobs-2021-12-29' of https://src.openvz.org/scm/~vsementsov/qemu:
  blockjob: drop BlockJob.blk field
  test-bdrv-drain: don't use BlockJob.blk
  block/stream: add own blk
  test-blockjob-txn: don't abuse job->blk
  blockjob: implement and use block_job_get_aio_context
  job.c: add missing notifier initialization
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Revision tags: v6.2.0

# 252f4091 | 03-Nov-2021 | Emanuele Giuseppe Esposito <eesposit@redhat.com>
job.c: add missing notifier initialization
It seems that on_idle list is not properly initialized like the other notifiers.
Fixes: 34dc97b9a0e ("blockjob: Wake up BDS when job becomes idle")
Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
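The fix amounts to one missing notifier_list_init() call alongside the existing ones in job_create(). A sketch of the relevant lines, with the notifier field names assumed from the Job struct rather than quoted from the patch:

    notifier_list_init(&job->on_finalize_cancelled);
    notifier_list_init(&job->on_finalize_completed);
    notifier_list_init(&job->on_pending);
    notifier_list_init(&job->on_ready);
    notifier_list_init(&job->on_idle);    /* previously missing */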

# 14f12119 | 07-Oct-2021 | Richard Henderson <richard.henderson@linaro.org>
Merge remote-tracking branch 'remotes/vsementsov/tags/pull-jobs-2021-10-07-v2' into staging
mirror: Handle errors after READY cancel v2: add small fix by Stefano, Hanna's series fixed
* remotes/vsementsov/tags/pull-jobs-2021-10-07-v2:
  iotests: Add mirror-ready-cancel-error test
  mirror: Do not clear .cancelled
  mirror: Stop active mirroring after force-cancel
  mirror: Check job_is_cancelled() earlier
  mirror: Use job_is_cancelled()
  job: Add job_cancel_requested()
  job: Do not soft-cancel after a job is done
  jobs: Give Job.force_cancel more meaning
  job: @force parameter for job_cancel_sync()
  job: Force-cancel jobs in a failed transaction
  mirror: Drop s->synced
  mirror: Keep s->synced on error
  job: Context changes in job_completed_txn_abort()
  block/aio_task: assert `max_busy_tasks` is greater than 0
  block/backup: avoid integer overflow of `max-workers`
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

# a640fa0e | 06-Oct-2021 | Hanna Reitz <hreitz@redhat.com>
mirror: Do not clear .cancelled
Clearing .cancelled before leaving the main loop when the job has been soft-cancelled is no longer necessary since job_is_cancelled() only returns true for jobs that have been force-cancelled.
Therefore, this only makes a differences in places that call job_cancel_requested(). In block/mirror.c, this is done only before .cancelled was cleared.
In job.c, there are two callers:
- job_completed_txn_abort() asserts that .cancelled is true, so keeping it true will not affect this place.
- job_complete() refuses to let a job complete that has .cancelled set. It is correct to refuse to let the user invoke job-complete on mirror jobs that have already been soft-cancelled.
With this change, there are no places that reset .cancelled to false and so we can be sure that .force_cancel can only be true if .cancelled is true as well. Assert this in job_is_cancelled().
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20211006151940.214590-13-hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

# 08b83bff | 06-Oct-2021 | Hanna Reitz <hreitz@redhat.com>
job: Add job_cancel_requested()
Most callers of job_is_cancelled() actually want to know whether the job is on its way to immediate termination. For example, we refuse to pause jobs that are cancelled; but this only makes sense for jobs that are really actually cancelled.
A mirror job that is cancelled during READY with force=false should absolutely be allowed to pause. This "cancellation" (which is actually a kind of completion) may take an indefinite amount of time, and so should behave like any job during normal operation. For example, with on-target-error=stop, the job should stop on write errors. (In contrast, force-cancelled jobs should not get write errors, as they should just terminate and not do further I/O.)
Therefore, redefine job_is_cancelled() to only return true for jobs that are force-cancelled (which as of HEAD^ means any job that interprets the cancellation request as a request for immediate termination), and add job_cancel_requested() as the general variant, which returns true for any jobs which have been requested to be cancelled, whether it be immediately or after an arbitrarily long completion phase.
Finally, here is a justification for how different job_is_cancelled() invocations are treated by this patch:
- block/mirror.c (mirror_run()):
  - The first invocation is a while loop that should loop until the job has been cancelled or scheduled for completion. What kind of cancel does not matter, only the fact that the job is supposed to end.
  - The second invocation wants to know whether the job has been soft-cancelled. Calling job_cancel_requested() is a bit too broad, but if the job were force-cancelled, we should leave the main loop as soon as possible anyway, so this should not matter here.
  - The last two invocations already check force_cancel, so they should continue to use job_is_cancelled().
- block/backup.c, block/commit.c, block/stream.c, anything in tests/: These jobs know only force-cancel, so there is no difference between job_is_cancelled() and job_cancel_requested(). We can continue using job_is_cancelled().
- job.c:
  - job_pause_point(), job_yield(), job_sleep_ns(): Only force-cancelled jobs should be prevented from being paused. Continue using job_is_cancelled().
  - job_update_rc(), job_finalize_single(), job_finish_sync(): These functions are all called after the job has left its main loop. The mirror job (the only job that can be soft-cancelled) will clear .cancelled before leaving the main loop if it has been soft-cancelled. Therefore, these functions will observe .cancelled to be true only if the job has been force-cancelled. We can continue to use job_is_cancelled(). (Furthermore, conceptually, a soft-cancelled mirror job should not report to have been cancelled. It should report completion (see also the block-job-cancel QAPI documentation). Therefore, it makes sense for these functions not to distinguish between a soft-cancelled mirror job and a job that has completed as normal.)
  - job_completed_txn_abort(): All jobs other than @job have been force-cancelled. job_is_cancelled() must be true for them. Regarding @job itself: job_completed_txn_abort() is mostly called when the job's return value is not 0. A soft-cancelled mirror has a return value of 0, and so will not end up here then. However, job_cancel() invokes job_completed_txn_abort() if the job has been deferred to the main loop, which is mostly the case for completed jobs (which skip the assertion), but not for sure. To be safe, use job_cancel_requested() in this assertion.
  - job_complete(): This is the function eventually invoked by the user (through qmp_block_job_complete() or qmp_job_complete(), or job_complete_sync(), which comes from qemu-img). The intention here is to prevent a user from invoking job-complete after the job has been cancelled. This should also apply to soft cancelling: After a mirror job has been soft-cancelled, the user should not be able to decide otherwise and have it complete as normal (i.e. pivoting to the target).
  - job_cancel(): Both functions are equivalent (see comment there), but we want to use job_is_cancelled(), because this shows that we call job_completed_txn_abort() only for force-cancelled jobs. (As explained for job_update_rc(), soft-cancelled jobs should be treated as if they have completed as normal.)
Buglink: https://gitlab.com/qemu-project/qemu/-/issues/462
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20211006151940.214590-9-hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
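Putting the redefinition above into code, a minimal sketch (the .cancelled and .force_cancel fields are the ones named in the commit message; this is an illustration of the rule, not necessarily the literal patch):

    /* True only for jobs on their way to immediate termination. */
    bool job_is_cancelled(Job *job)
    {
        /* force_cancel may only be set for cancelled jobs */
        assert(!job->force_cancel || job->cancelled);
        return job->force_cancel;
    }

    /* True for any job that was asked to cancel, including a READY
     * mirror cancelled with force=false that is still completing. */
    bool job_cancel_requested(Job *job)
    {
        return job->cancelled;
    }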

# 401dd096 | 06-Oct-2021 | Hanna Reitz <hreitz@redhat.com>
job: Do not soft-cancel after a job is done
The only job that supports a soft cancel mode is the mirror job, and in such a case it resets its .cancelled field before it leaves its .run() function, so it does not really count as cancelled.
However, it is possible to cancel the job after .run() returns and before job_exit() (which is run in the main loop) is executed. Then, .cancelled would still be true and the job would count as cancelled. This does not seem to be in the interest of the mirror job, so adjust job_cancel_async() to not set .cancelled in such a case, and job_cancel() to not invoke job_completed_txn_abort().
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Message-Id: <20211006151940.214590-8-hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

# 73895f38 | 06-Oct-2021 | Hanna Reitz <hreitz@redhat.com>
jobs: Give Job.force_cancel more meaning
We largely have two cancel modes for jobs:
First, there is actual cancelling. The job is terminated as soon as possible, without trying to reach a consistent result.
Second, we have mirror in the READY state. Technically, the job is not really cancelled, but it just is a different completion mode. The job can still run for an indefinite amount of time while it tries to reach a consistent result.
We want to be able to clearly distinguish which cancel mode a job is in (when it has been cancelled). We can use Job.force_cancel for this, but right now it only reflects cancel requests from the user with force=true, but clearly, jobs that do not even distinguish between force=false and force=true are effectively always force-cancelled.
So this patch has Job.force_cancel signify whether the job will terminate as soon as possible (force_cancel=true) or whether it will effectively remain running despite being "cancelled" (force_cancel=false).
To this end, we let jobs that provide JobDriver.cancel() tell the generic job code whether they will terminate as soon as possible or not, and for jobs that do not provide that method we assume they will.
Signed-off-by: Hanna Reitz <hreitz@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Message-Id: <20211006151940.214590-7-hreitz@redhat.com>
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
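A sketch of the mechanism described above: the driver's .cancel() callback reports whether the job will really terminate as soon as possible, and jobs without the callback are assumed to. The helper is simplified and hypothetical, and the exact callback prototype is an assumption inferred from the commit text.

    /* Hypothetical, simplified cancel path. */
    static void job_cancel_async_sketch(Job *job, bool force)
    {
        if (job->driver->cancel) {
            force = job->driver->cancel(job, force);
        } else {
            /* No .cancel() callback: the job terminates as soon as
             * possible, i.e. it is effectively force-cancelled. */
            force = true;
        }
        if (force) {
            job->force_cancel = true;
        }
        job->cancelled = true;
    }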