History log of /openbmc/linux/block/blk-mq-sched.c (Results 101 – 125 of 700)
Revision Date Author Comments
Revision tags: v5.15.10, v5.15.9, v5.15.8
# 86329873 09-Dec-2021 Daniel Lezcano <daniel.lezcano@linaro.org>

Merge branch 'reset/of-get-optional-exclusive' of git://git.pengutronix.de/pza/linux into timers/drivers/next

"Add optional variant of of_reset_control_get_exclusive(). If the
requested reset is not specified in the device tree, this function
returns NULL instead of an error."

This dependency is needed for the Generic Timer Module (a.k.a OSTM)
support for RZ/G2L.

Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>


# 5d8dfaa7 09-Dec-2021 Dmitry Torokhov <dmitry.torokhov@gmail.com>

Merge tag 'v5.15' into next

Sync up with the mainline to get the latest APIs and DT bindings.


Revision tags: v5.15.7
# 4cafe86c 03-Dec-2021 Ming Lei <ming.lei@redhat.com>

blk-mq: run dispatch lock once in case of issuing from list

It isn't necessary to call blk_mq_run_dispatch_ops() once for issuing
single request directly, and enough to do it one time when issuing from
whole list.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211203131534.3668411-5-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
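
A rough userspace analogue of the change described above, for illustration only: blk_mq_run_dispatch_ops() is a kernel-internal guard, so the sketch uses hypothetical guard_enter()/guard_exit()/issue_one() helpers to show the "enter once around the whole list instead of once per request" idea.

    #include <stdio.h>

    /* Hypothetical stand-ins for the kernel's dispatch guard. */
    static void guard_enter(void) { puts("enter dispatch guard"); }
    static void guard_exit(void)  { puts("exit dispatch guard"); }
    static void issue_one(int rq) { printf("issue request %d\n", rq); }

    /* Before: the guard was taken once per request issued from the list. */
    static void issue_list_before(const int *rqs, int n)
    {
            for (int i = 0; i < n; i++) {
                    guard_enter();
                    issue_one(rqs[i]);
                    guard_exit();
            }
    }

    /* After: take the guard one time for the whole list. */
    static void issue_list_after(const int *rqs, int n)
    {
            guard_enter();
            for (int i = 0; i < n; i++)
                    issue_one(rqs[i]);
            guard_exit();
    }

    int main(void)
    {
            int rqs[3] = { 1, 2, 3 };
            issue_list_before(rqs, 3);
            issue_list_after(rqs, 3);
            return 0;
    }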


Revision tags: v5.15.6
# 87dd1d63 26-Nov-2021 Christoph Hellwig <hch@lst.de>

block: move blk_mq_sched_assign_ioc to blk-ioc.c

Move blk_mq_sched_assign_ioc so that many interfaces from the file can
be marked static. Rename the function to ioc_find_get_icq as well and
return the icq to simplify the interface.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# c2a32464 26-Nov-2021 Christoph Hellwig <hch@lst.de>

Revert "block: Provide blk_mq_sched_get_icq()"

This reverts commit 4896c4e64ba5d5d5acdbcf68c5910dd4f6d8fa62.

The helper is not needed any more.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 790cf9c8 25-Nov-2021 Jan Kara <jack@suse.cz>

block: Provide blk_mq_sched_get_icq()

Currently we lookup ICQ only after the request is allocated. However BFQ
will want to decide how many scheduler tags it allows a given bfq queue
(effectively a process) to consume based on cgroup weight. So provide a
function blk_mq_sched_get_icq() so that BFQ can lookup ICQ earlier.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211125133645.27483-1-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
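
A minimal sketch of the interface shape this provides, assuming simplified stand-in types: the real code works with struct io_context and struct io_cq, and the lookup body and weight formula below are placeholders, not the kernel's implementation.

    #include <stdio.h>

    /* Hypothetical, simplified types for illustration. */
    struct icq   { int weight; };
    struct queue { struct icq *icq_of_current_task; };

    /* Analogue of blk_mq_sched_get_icq(): let the scheduler look up the ICQ
     * before a request (and its scheduler tag) has been allocated. */
    static struct icq *sched_get_icq(struct queue *q)
    {
            return q->icq_of_current_task;   /* real code: find or create it */
    }

    /* A BFQ-like policy can then size a queue's tag budget from the ICQ
     * (e.g. a cgroup weight) up front rather than after allocation. */
    static int tags_allowed(struct queue *q, int total_tags)
    {
            struct icq *icq = sched_get_icq(q);
            return icq ? (total_tags * icq->weight) / 100 : total_tags;
    }

    int main(void)
    {
            struct icq heavy = { .weight = 75 };
            struct queue q = { .icq_of_current_task = &heavy };
            printf("tags allowed: %d of 128\n", tags_allowed(&q, 128));
            return 0;
    }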


Revision tags: v5.15.5, v5.15.4, v5.15.3
# 5a9d041b 13-Nov-2021 Jens Axboe <axboe@kernel.dk>

block: move io_context creation into where it's needed

The only user of the io_context for IO is BFQ, yet we put the checking
and logic of it into the normal IO path.

Put the creation into blk_mq_sched_assign_ioc(), and have BFQ use that
helper.

Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 448cc2fb 22-Nov-2021 Jani Nikula <jani.nikula@intel.com>

Merge drm/drm-next into drm-intel-next

Sync up with drm-next to get v5.16-rc2.

Signed-off-by: Jani Nikula <jani.nikula@intel.com>


# 8626afb1 22-Nov-2021 Tvrtko Ursulin <tvrtko.ursulin@intel.com>

Merge drm/drm-next into drm-intel-gt-next

Thomas needs the dma_resv_for_each_fence API for i915/ttm async migration
work.

Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>


# 50fc2494 18-Nov-2021 Jakub Kicinski <kuba@kernel.org>

Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net

Signed-off-by: Jakub Kicinski <kuba@kernel.org>


# a713ca23 18-Nov-2021 Thomas Zimmermann <tzimmermann@suse.de>

Merge drm/drm-next into drm-misc-next

Backmerging from drm/drm-next for v5.16-rc1.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>


# 467dd91e 16-Nov-2021 Maxime Ripard <maxime@cerno.tech>

Merge drm/drm-fixes into drm-misc-fixes

We need -rc1 to address a breakage in drm/scheduler affecting panfrost.

Signed-off-by: Maxime Ripard <maxime@cerno.tech>


# f44c7dbd 13-Nov-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'block-5.16-2021-11-13' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:
"Set of fixes that should go into this merge window:

- ioctl vs read data race fixes (Shin'ichiro)

- blkcg use-after-free fix (Laibin)

- Last piece of the puzzle for add_disk() error handling, enable
__must_check for (Luis)

- Request allocation fixes (Ming)

- Misc fixes (me)"

* tag 'block-5.16-2021-11-13' of git://git.kernel.dk/linux-block:
blk-mq: fix filesystem I/O request allocation
blkcg: Remove extra blkcg_bio_issue_init
block: Hold invalidate_lock in BLKRESETZONE ioctl
blk-mq: rename blk_attempt_bio_merge
blk-mq: don't grab ->q_usage_counter in blk_mq_sched_bio_merge
block: fix kerneldoc for disk_register_independent_access__ranges()
block: add __must_check for *add_disk*() callers
block: use enum type for blk_mq_alloc_data->rq_flags
block: Hold invalidate_lock in BLKZEROOUT ioctl
block: Hold invalidate_lock in BLKDISCARD ioctl


Revision tags: v5.15.2
# 10f7335e 11-Nov-2021 Ming Lei <ming.lei@redhat.com>

blk-mq: don't grab ->q_usage_counter in blk_mq_sched_bio_merge

blk_mq_sched_bio_merge is only called from blk-mq.c:blk_attempt_bio_merge(),
which is called when queue usage counter is grabbed already:

1) blk_mq_get_new_requests()

2) blk_mq_get_request()
- cached request in current plug owns one queue usage counter

So don't grab ->q_usage_counter in blk_mq_sched_bio_merge(), and more
importantly this nest way causes hang in blk_mq_freeze_queue_wait().

Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211111085134.345235-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
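
The refcount discipline behind this fix, sketched with a toy queue type: the struct and helpers below are invented for illustration and stand in for the kernel's q_usage_counter handling, not the actual blk-mq code.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a request queue with a usage counter. */
    struct queue { int usage; bool frozen; };

    static bool queue_enter(struct queue *q)
    {
            if (q->frozen)
                    return false;
            q->usage++;
            return true;
    }

    static void queue_exit(struct queue *q) { q->usage--; }

    /* The merge helper relies on the caller's reference instead of taking
     * its own: in the real code blk_attempt_bio_merge() already holds the
     * usage counter, and nesting another enter here is what interacted
     * badly with blk_mq_freeze_queue_wait(). */
    static bool sched_bio_merge(struct queue *q)
    {
            (void)q;        /* toy body: pretend nothing merged */
            return false;
    }

    int main(void)
    {
            struct queue q = { 0, false };
            if (queue_enter(&q)) {          /* caller grabs the counter once */
                    sched_bio_merge(&q);    /* merge path runs under it */
                    queue_exit(&q);
            }
            printf("usage back to %d\n", q.usage);
            return 0;
    }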


# 3e28850c 09-Nov-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'for-5.16/block-2021-11-09' of git://git.kernel.dk/linux-block

Pull block fixes from Jens Axboe:

- Set of fixes for the batched tag allocation (Ming, me)

- add_disk() error handling fix (Luis)

- Nested queue quiesce fixes (Ming)

- Shared tags init error handling fix (Ye)

- Misc cleanups (Jean, Ming, me)

* tag 'for-5.16/block-2021-11-09' of git://git.kernel.dk/linux-block:
nvme: wait until quiesce is done
scsi: make sure that request queue queiesce and unquiesce balanced
scsi: avoid to quiesce sdev->request_queue two times
blk-mq: add one API for waiting until quiesce is done
blk-mq: don't free tags if the tag_set is used by other device in queue initialztion
block: fix device_add_disk() kobject_create_and_add() error handling
block: ensure cached plug request matches the current queue
block: move queue enter logic into blk_mq_submit_bio()
block: make bio_queue_enter() fast-path available inline
block: split request allocation components into helpers
block: have plug stored requests hold references to the queue
blk-mq: update hctx->nr_active in blk_mq_end_request_batch()
blk-mq: add RQF_ELV debug entry
blk-mq: only try to run plug merge if request has same queue with incoming bio
block: move RQF_ELV setting into allocators
dm: don't stop request queue after the dm device is suspended
block: replace always false argument with 'false'
block: assign correct tag before doing prefetch of request
blk-mq: fix redundant check of !e expression


# 7f9f8792 06-Nov-2021 Arnaldo Carvalho de Melo <acme@redhat.com>

Merge remote-tracking branch 'torvalds/master' into perf/core

To pick up some tools/perf/ patches that went via tip/perf/core, such
as:

tools/perf: Add mem_hops field in perf_mem_data_src structure

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>


Revision tags: v5.15.1
# 900e0807 03-Nov-2021 Jens Axboe <axboe@kernel.dk>

block: move queue enter logic into blk_mq_submit_bio()

Retain the old logic for the fops based submit, but for our internal
blk_mq_submit_bio(), move the queue entering logic into the core
function itself.

We need to be a bit careful if going into the scheduler, as a scheduler
or queue mappings can arbitrarily change before we have entered the queue.
Have the bio scheduler mapping do that separately, it's a very cheap
operation compared to actually doing merging locking and lookups.

Reviewed-by: Christoph Hellwig <hch@lst.de>
[axboe: update to check merge post submit_bio_checks() doing remap...]
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 33c8846c 01-Nov-2021 Linus Torvalds <torvalds@linux-foundation.org>

Merge tag 'for-5.16/block-2021-10-29' of git://git.kernel.dk/linux-block

Pull block updates from Jens Axboe:

- mq-deadline accounting improvements (Bart)

- blk-wbt timer fix (Andrea)

- Untangle the block layer includes (Christoph)

- Rework the poll support to be bio based, which will enable adding
support for polling for bio based drivers (Christoph)

- Block layer core support for multi-actuator drives (Damien)

- blk-crypto improvements (Eric)

- Batched tag allocation support (me)

- Request completion batching support (me)

- Plugging improvements (me)

- Shared tag set improvements (John)

- Concurrent queue quiesce support (Ming)

- Cache bdev in ->private_data for block devices (Pavel)

- bdev dio improvements (Pavel)

- Block device invalidation and block size improvements (Xie)

- Various cleanups, fixes, and improvements (Christoph, Jackie,
Masahira, Tejun, Yu, Pavel, Zheng, me)

* tag 'for-5.16/block-2021-10-29' of git://git.kernel.dk/linux-block: (174 commits)
blk-mq-debugfs: Show active requests per queue for shared tags
block: improve readability of blk_mq_end_request_batch()
virtio-blk: Use blk_validate_block_size() to validate block size
loop: Use blk_validate_block_size() to validate block size
nbd: Use blk_validate_block_size() to validate block size
block: Add a helper to validate the block size
block: re-flow blk_mq_rq_ctx_init()
block: prefetch request to be initialized
block: pass in blk_mq_tags to blk_mq_rq_ctx_init()
block: add rq_flags to struct blk_mq_alloc_data
block: add async version of bio_set_polled
block: kill DIO_MULTI_BIO
block: kill unused polling bits in __blkdev_direct_IO()
block: avoid extra iter advance with async iocb
block: Add independent access ranges support
blk-mq: don't issue request directly in case that current is to be blocked
sbitmap: silence data race warning
blk-cgroup: synchronize blkg creation against policy deactivation
block: refactor bio_iov_bvec_set()
block: add single bio async direct IO helper
...


Revision tags: v5.15
# ef1661ba 29-Oct-2021 Jean Sacren <sakiwit@gmail.com>

blk-mq: fix redundant check of !e expression

In the if branch, e is checked. In the else branch, ->dispatch_busy is
merely a number and has no effect on !e. We should remove the check of
!e since it is always true.

Signed-off-by: Jean Sacren <sakiwit@gmail.com>
Link: https://lore.kernel.org/r/20211029202945.3052-1-sakiwit@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
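
The shape of the simplification, reduced to a toy example; the variables below are placeholders, not the actual blk-mq code or identifiers.

    #include <stdbool.h>
    #include <stdio.h>

    /* Before: the else branch re-tested !e even though reaching the else
     * already implies e is false, so the extra check was always true. */
    static void dispatch_before(bool e, bool dispatch_busy)
    {
            if (e)
                    puts("dispatch through the elevator");
            else if (!e && dispatch_busy)   /* "!e" is redundant here */
                    puts("defer: hw queue busy");
            else
                    puts("issue directly");
    }

    /* After: drop the redundant test; behaviour is unchanged. */
    static void dispatch_after(bool e, bool dispatch_busy)
    {
            if (e)
                    puts("dispatch through the elevator");
            else if (dispatch_busy)
                    puts("defer: hw queue busy");
            else
                    puts("issue directly");
    }

    int main(void)
    {
            dispatch_before(false, true);
            dispatch_after(false, true);
            return 0;
    }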


# 8bdf7b3f 22-Oct-2021 John Garry <john.garry@huawei.com>

blk-mq-sched: Don't reference queue tagset in blk_mq_sched_tags_teardown()

We should not reference the queue tagset in blk_mq_sched_tags_teardown()
(see function comment) for the blk-mq flags, so use the passed flags
instead.

This solves a use-after-free, similarly fixed earlier (and since broken
again) in commit f0c1c4d2864e ("blk-mq: fix use-after-free in
blk_mq_exit_sched").

Reported-by: Linux Kernel Functional Testing <lkft@linaro.org>
Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
Tested-by: Anders Roxell <anders.roxell@linaro.org>
Fixes: e155b0c238b2 ("blk-mq: Use shared tags for shared sbitmap support")
Signed-off-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1634890340-15432-1-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# 179ae84f 20-Oct-2021 Pavel Begunkov <asml.silence@gmail.com>

block: clean up blk_mq_submit_bio() merging

Combine blk_mq_sched_bio_merge() and blk_attempt_plug_merge() under a
common if, so we don't check it twice.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/daedc90d4029a5d1d73344771632b1faca3aaf81.1634755800.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>


Revision tags: v5.14.14, v5.14.13
# 9a14d6ce 16-Oct-2021 Jens Axboe <axboe@kernel.dk>

block: remove debugfs blk_mq_ctx dispatched/merged/completed attributes

These were added as part of early days debugging for blk-mq, and they
are not really useful anymore. Rather than spend cycles updating them,
just get rid of them.

As a bonus, this shrinks the per-cpu software queue size from 256b
to 192b. That's a whole cacheline less.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
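
The "whole cacheline" remark is plain arithmetic -- 256 - 192 = 64 bytes, one cache line on common 64-bit CPUs -- and the toy structs below (fields and sizes invented, not the real blk_mq_ctx layout) show how dropping a few unsigned long counters from a 64-byte-aligned struct frees an entire line.

    #include <stdalign.h>
    #include <stdio.h>

    /* Invented layout, sized so the numbers mirror the 256b -> 192b note. */
    struct ctx_with_stats {
            alignas(64) unsigned long queued[3];
            unsigned long dispatched[2];
            unsigned long merged;
            unsigned long completed[2];
            void *other_state[20];
    };

    struct ctx_without_stats {
            alignas(64) unsigned long queued[3];
            void *other_state[20];
    };

    int main(void)
    {
            printf("with stats:    %zu bytes\n", sizeof(struct ctx_with_stats));
            printf("without stats: %zu bytes\n", sizeof(struct ctx_without_stats));
            return 0;
    }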


Revision tags: v5.14.12, v5.14.11
# e9ea1596 09-Oct-2021 Pavel Begunkov <asml.silence@gmail.com>

blk-mq: inline hot part of __blk_mq_sched_restart

Extract a fast check out of __block_mq_sched_restart() and inline it for
performance reasons.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/894abaa0998e5999f2fe18f271e5efdfc2c32bd2.1633781740.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
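
The general technique, in a standalone sketch: a cheap inline test decides whether the out-of-line slow path needs to run at all. The flag name and helpers are invented for the example rather than copied from blk-mq.

    #include <stdio.h>

    struct hctx { unsigned long state; };

    #define NEEDS_RESTART 0x1UL   /* hypothetical flag bit */

    /* Out-of-line slow path: only reached when the flag is actually set. */
    static void __sched_restart(struct hctx *hctx)
    {
            hctx->state &= ~NEEDS_RESTART;
            puts("restarting hardware queue");
    }

    /* Inlined hot path: a single bit test in the common case. */
    static inline void sched_restart(struct hctx *hctx)
    {
            if (hctx->state & NEEDS_RESTART)
                    __sched_restart(hctx);
    }

    int main(void)
    {
            struct hctx h = { .state = NEEDS_RESTART };
            sched_restart(&h);   /* takes the slow path once */
            sched_restart(&h);   /* now a cheap no-op */
            return 0;
    }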


Revision tags: v5.14.10
# 079a2e3e 05-Oct-2021 John Garry <john.garry@huawei.com>

blk-mq: Change shared sbitmap naming to shared tags

Now that shared sbitmap support really means shared tags, rename symbols
to match that.

Signed-off-by: John Garry <john.garry@huawei.com>
Link: https://lore.kernel.org/r/1633429419-228500-15-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>


# e155b0c2 05-Oct-2021 John Garry <john.garry@huawei.com>

blk-mq: Use shared tags for shared sbitmap support

Currently we use separate sbitmap pairs and active_queues atomic_t for
shared sbitmap support.

However a full sets of static requests are used per HW queue, which is
quite wasteful, considering that the total number of requests usable at
any given time across all HW queues is limited by the shared sbitmap depth.

As such, it is considerably more memory efficient in the case of shared
sbitmap to allocate a set of static rqs per tag set or request queue, and
not per HW queue.

So replace the sbitmap pairs and active_queues atomic_t with a shared
tags per tagset and request queue, which will hold a set of shared static
rqs.

Since there is now no valid HW queue index to be passed to the blk_mq_ops
.init and .exit_request callbacks, pass an invalid index token. This
changes the semantics of the APIs, such that the callback would need to
validate the HW queue index before using it. Currently no user of shared
sbitmap actually uses the HW queue index (as would be expected).

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/1633429419-228500-13-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
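
Back-of-the-envelope arithmetic for the memory argument above; every number here (queue count, depth, per-request size) is a made-up example input, not a kernel measurement.

    #include <stdio.h>

    int main(void)
    {
            unsigned int nr_hw_queues = 32;     /* hypothetical device */
            unsigned int shared_depth = 1024;   /* tags usable across all queues */
            unsigned int rq_size      = 512;    /* assumed bytes per static rq */

            /* Per-HW-queue static requests: every queue carries a full set even
             * though at most shared_depth requests can be in flight in total. */
            unsigned long per_hctx = (unsigned long)nr_hw_queues * shared_depth * rq_size;

            /* Shared tags: one set of static requests per tag set/request queue. */
            unsigned long shared = (unsigned long)shared_depth * rq_size;

            printf("per-hctx static rqs: %lu KiB\n", per_hctx / 1024);
            printf("shared static rqs:   %lu KiB\n", shared / 1024);
            return 0;
    }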

