Lines matching "affinity"
33 thread system-wide. A single MT wq needed to keep around the same
60 * Use per-CPU unified worker pools shared by all wq to provide
83 called worker-pools.
85 The cmwq design differentiates between the user-facing workqueues that
87 which manages worker-pools and processes the queued work items.
89 There are two worker-pools, one for normal work items and the other
91 worker-pools to serve work items queued on unbound workqueues - the
102 When a work item is queued to a workqueue, the target worker-pool is
104 and appended on the shared worklist of the worker-pool. For example,
106 be queued on the worklist of either normal or highpri worker-pool that
115 Each worker-pool bound to an actual CPU implements concurrency
116 management by hooking into the scheduler. The worker-pool is notified
122 workers on the CPU, the worker-pool doesn't start execution of a new
144 wq's that have a rescue-worker reserved for execution under memory
145 pressure. Else it is possible that the worker-pool deadlocks waiting
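
As a minimal sketch of the queueing side described above (the names ``my_work_fn``, ``my_work`` and ``my_kick`` are made up for illustration), a work item only carries the function to run; which worker-pool ends up executing it is decided by the workqueue core at queueing time::

  #include <linux/workqueue.h>

  static void my_work_fn(struct work_struct *work)
  {
          /* Runs in process context on a worker of the selected pool. */
  }

  static DECLARE_WORK(my_work, my_work_fn);

  static void my_kick(void)
  {
          /*
           * Queued on the system per-CPU wq: the item is appended to the
           * worklist of the normal worker-pool of the local CPU.
           * queue_work_on() can be used instead to target a specific CPU.
           */
          schedule_work(&my_work);
  }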
154 removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
165 ---------
169 worker-pools which host workers which are not bound to any
172 worker-pools try to start execution of work items as soon as
196 worker-pool of the target cpu. Highpri worker-pools are
199 Note that normal and highpri worker-pools don't interact with
207 worker-pool from starting execution. This is useful for bound
214 non-CPU-intensive work items can delay execution of CPU
221 --------------
226 at the same time per CPU. This is always a per-CPU attribute, even for
243 unbound worker-pools and only one work item could be active at any given
248 be used to achieve system-wide ST behavior.
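
A hedged example of the three ``alloc_workqueue()`` arguments and of ordered (system-wide single-threaded) behavior; the workqueue names and the ``my_init()`` caller are made up for illustration::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *my_wq;
  static struct workqueue_struct *my_ordered_wq;

  static int my_init(void)
  {
          /*
           * @name, @flags, @max_active: unbound, usable on memory-reclaim
           * paths (gets a rescuer), at most 16 in-flight work items.
           */
          my_wq = alloc_workqueue("my_wq", WQ_UNBOUND | WQ_MEM_RECLAIM, 16);
          if (!my_wq)
                  return -ENOMEM;

          /* One work item at a time, in queueing order, system-wide. */
          my_ordered_wq = alloc_ordered_workqueue("my_ordered_wq", 0);
          if (!my_ordered_wq) {
                  destroy_workqueue(my_wq);
                  return -ENOMEM;
          }
          return 0;
  }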
350 Affinity Scopes
353 An unbound workqueue groups CPUs according to its affinity scope to improve
354 cache locality. For example, if a workqueue is using the default affinity
361 Workqueue currently supports the following affinity scopes.
369 worker on the same CPU. This makes unbound workqueues behave as per-cpu
379 cases. This is the default affinity scope.
388 The default affinity scope can be changed with the module parameter
389 ``workqueue.default_affinity_scope`` and a specific workqueue's affinity
392 If ``WQ_SYSFS`` is set, the workqueue will have the following affinity scope
397 Read to see the current affinity scope. Write to change.
403 0 by default indicating that affinity scopes are not strict. When a work
404 item starts execution, workqueue makes a best-effort attempt to ensure
405 that the worker is inside its affinity scope, which is called
412 scope. This may be useful when crossing affinity scopes has other
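
For example (a sketch with a made-up workqueue name, on kernels that provide affinity scopes), an unbound workqueue opts into the knobs above by passing ``WQ_SYSFS``::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *my_unbound_wq;

  static int my_setup(void)
  {
          /*
           * WQ_UNBOUND | WQ_SYSFS: the wq appears under
           * /sys/devices/virtual/workqueue/my_unbound_wq/ where
           * affinity_scope and affinity_strict can be read and written.
           */
          my_unbound_wq = alloc_workqueue("my_unbound_wq",
                                          WQ_UNBOUND | WQ_SYSFS, 0);
          return my_unbound_wq ? 0 : -ENOMEM;
  }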
418 Affinity Scopes and Performance
423 kernel, there exists a pronounced trade-off between locality and utilization
429 enough across the affinity scopes by the issuers. The following performance
430 testing with dm-crypt clearly illustrates this trade-off.
432 The tests are run on a CPU with 12-cores/24-threads split across four L3
434 ``/dev/dm-0`` is a dm-crypt device created on NVME SSD (Samsung 990 PRO) and
439 -------------------------------------------------------------
443 $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k --ioengine=libaio \
444 --iodepth=64 --runtime=60 --numjobs=24 --time_based --group_reporting \
445 --name=iops-test-job --verify=sha512
447 There are 24 issuers, each issuing 64 IOs concurrently. ``--verify=sha512``
450 are the read bandwidths and CPU utilizations depending on different affinity
454 .. list-table::
456 :header-rows: 1
458 * - Affinity
459 - Bandwidth (MiBps)
460 - CPU util (%)
462 * - system
463 - 1159.40 ±1.34
464 - 99.31 ±0.02
466 * - cache
467 - 1166.40 ±0.89
468 - 99.34 ±0.01
470 * - cache (strict)
471 - 1166.00 ±0.71
472 - 99.35 ±0.01
476 machine but the cache-affine ones outperform by 0.6% thanks to improved
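
As a quick check of the 0.6% figure against the bandwidth column above:

.. math::

   \frac{1166.40 - 1159.40}{1159.40} \approx 0.006 \approx 0.6\%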
481 -----------------------------------------------------
485 $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
486 --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=8 \
487 --time_based --group_reporting --name=iops-test-job --verify=sha512
489 The only difference from the previous scenario is ``--numjobs=8``. There are
493 .. list-table::
495 :header-rows: 1
497 * - Affinity
498 - Bandwidth (MiBps)
499 - CPU util (%)
501 * - system
502 - 1155.40 ±0.89
503 - 97.41 ±0.05
505 * - cache
506 - 1154.40 ±1.14
507 - 96.15 ±0.09
509 * - cache (strict)
510 - 1112.00 ±4.64
511 - 93.26 ±0.35
524 -----------------------------------------------------------
528 $ fio --filename=/dev/dm-0 --direct=1 --rw=randrw --bs=32k \
529 --ioengine=libaio --iodepth=64 --runtime=60 --numjobs=4 \
530 --time_based --group_reporting --name=iops-test-job --verify=sha512
532 Again, the only difference is ``--numjobs=4``. With the number of issuers
536 .. list-table::
538 :header-rows: 1
540 * - Affinity
541 - Bandwidth (MiBps)
542 - CPU util (%)
544 * - system
545 - 993.60 ±1.82
546 - 75.49 ±0.06
548 * - cache
549 - 973.40 ±1.52
550 - 74.90 ±0.07
552 * - cache (strict)
553 - 828.20 ±4.49
554 - 66.84 ±0.29
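
Doing the same arithmetic on the bandwidth column above shows how much strict "cache" affinity gives up relative to "system" once the four issuers can no longer keep every scope busy:

.. math::

   \frac{993.60 - 828.20}{993.60} \approx 0.166 \approx 16.6\%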
561 ------------------------------
563 In the above experiments, the efficiency advantage of the "cache" affinity
568 While the loss of work-conservation in certain scenarios hurts, it is a lot
571 affinity scope for unbound pools.
578 * An unbound workqueue with strict "cpu" affinity scope behaves the same as
579 ``WQ_CPU_INTENSIVE`` per-cpu workqueue. There is no real advantage to the
582 * Affinity scopes were introduced in Linux v6.5. To emulate the previous
583 behavior, use strict "numa" affinity scope.
585 * The loss of work-conservation in non-strict affinity scopes is likely
588 work-conservation in most cases. As such, it is possible that future
595 Use tools/workqueue/wq_dump.py to examine unbound CPU affinity
599 Affinity Scopes
630 pod_node [0]=-1
636 pool[01] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 0
638 pool[03] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 1
640 pool[05] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 2
642 pool[07] ref= 1 nice=-20 idle/workers= 2/ 2 cpu= 3
646 pool[11] ref= 1 nice=-20 idle/workers= 1/ 1 cpus=0000000f
647 pool[12] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=00000003
648 pool[13] ref= 2 nice=-20 idle/workers= 1/ 1 cpus=0000000c
650 Workqueue CPU -> pool
676 events 18545 0 6.1 0 5 - -
677 events_highpri 8 0 0.0 0 0 - -
678 events_long 3 0 0.0 0 0 - -
679 events_unbound 38306 0 0.1 - 7 - -
680 events_freezable 0 0 0.0 0 0 - -
681 events_power_efficient 29598 0 0.2 0 0 - -
682 events_freezable_power_ 10 0 0.0 0 0 - -
683 sock_diag_events 0 0 0.0 0 0 - -
686 events 18548 0 6.1 0 5 - -
687 events_highpri 8 0 0.0 0 0 - -
688 events_long 3 0 0.0 0 0 - -
689 events_unbound 38322 0 0.1 - 7 - -
690 events_freezable 0 0 0.0 0 0 - -
691 events_power_efficient 29603 0 0.2 0 0 - -
692 events_freezable_power_ 10 0 0.0 0 0 - -
693 sock_diag_events 0 0 0.0 0 0 - -
740 Non-reentrance Conditions
743 Workqueue guarantees that a work item cannot be re-entrant if the following
751 executed by at most one worker system-wide at any given time.
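
A minimal sketch of relying on this guarantee (the ``stats_flush_*`` names are illustrative): repeatedly queueing one work item to the same workqueue, with its function never changed, can never yield two concurrent executions of it::

  #include <linux/workqueue.h>

  static void stats_flush_fn(struct work_struct *work)
  {
          /*
           * This item is only ever queued to system_wq and its function is
           * never changed, so at most one worker system-wide can be running
           * it.  A queue_work() issued while it is executing only marks it
           * pending again; it re-runs afterwards, never concurrently.
           */
  }

  static DECLARE_WORK(stats_flush_work, stats_flush_fn);

  void stats_poke(void)
  {
          queue_work(system_wq, &stats_flush_work);
  }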
761 .. kernel-doc:: include/linux/workqueue.h
763 .. kernel-doc:: kernel/workqueue.c