====================================
Concurrency Managed Workqueue (cmwq)
====================================

:Date: September, 2010
:Author: Tejun Heo <tj@kernel.org>
:Author: Florian Mickler <florian@mickler.org>


Introduction
============

There are many cases where an asynchronous process execution context
is needed and the workqueue (wq) API is the most commonly used
mechanism for such cases.

When such an asynchronous execution context is needed, a work item
describing which function to execute is put on a queue. An
independent thread serves as the asynchronous execution context. The
queue is called workqueue and the thread is called worker.

While there are work items on the workqueue, the worker executes the
functions associated with the work items one after the other. When
there is no work item left on the workqueue, the worker becomes idle.
When a new work item gets queued, the worker begins executing again.


Why cmwq?
=========

In the original wq implementation, a multi threaded (MT) wq had one
worker thread per CPU and a single threaded (ST) wq had one worker
thread system-wide. A single MT wq needed to keep around the same
number of workers as the number of CPUs. The kernel grew a lot of MT
wq users over the years and, with the number of CPU cores continuously
rising, some systems saturated the default 32k PID space just booting
up.

Although MT wq wasted a lot of resources, the level of concurrency
provided was unsatisfactory. The limitation was common to both ST and
MT wq, albeit less severe on MT. Each wq maintained its own separate
worker pool. An MT wq could provide only one execution context per CPU
while an ST wq one for the whole system. Work items had to compete for
those very limited execution contexts, leading to various problems
including proneness to deadlocks around the single execution context.

The tension between the provided level of concurrency and resource
usage also forced its users to make unnecessary tradeoffs like libata
choosing to use ST wq for polling PIOs and accepting the unnecessary
limitation that no two polling PIOs can progress at the same time. As
MT wq didn't provide much better concurrency, users that required a
higher level of concurrency, like async or fscache, had to implement
their own thread pool.

Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with
focus on the following goals.

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide a
  flexible level of concurrency on demand without wasting a lot of
  resources.

* Automatically regulate the worker pool and level of concurrency so
  that the API users don't need to worry about such details.


The Design
==========

In order to ease the asynchronous execution of functions a new
abstraction, the work item, is introduced.

A work item is a simple struct that holds a pointer to the function
that is to be executed asynchronously. Whenever a driver or subsystem
wants a function to be executed asynchronously it has to set up a work
item pointing to that function and queue that work item on a
workqueue.

Special purpose threads, called worker threads, execute the functions
off of the queue, one after the other. If no work is queued, the
worker threads become idle. These worker threads are managed in so
called worker-pools.
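To make the work item abstraction concrete, the following is a minimal
sketch of how a driver might define and queue a work item. The names
``example_work_fn``, ``example_work`` and ``example_trigger`` are
hypothetical; ``DECLARE_WORK()`` and ``schedule_work()`` are the stock
workqueue primitives. ::

  #include <linux/printk.h>
  #include <linux/workqueue.h>

  /* The function to be executed asynchronously by a worker. */
  static void example_work_fn(struct work_struct *work)
  {
          pr_info("example work item executed\n");
  }

  /* A statically initialized work item pointing at example_work_fn(). */
  static DECLARE_WORK(example_work, example_work_fn);

  static void example_trigger(void)
  {
          /*
           * Queue the work item; schedule_work() puts it on one of
           * the system-wide workqueues (system_wq).
           */
          schedule_work(&example_work);
  }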
The cmwq design differentiates between the user-facing workqueues that
subsystems and drivers queue work items on and the backend mechanism
which manages worker-pools and processes the queued work items.

There are two worker-pools, one for normal work items and the other
for high priority ones, for each possible CPU, and some extra
worker-pools to serve work items queued on unbound workqueues - the
number of these backing pools is dynamic.

Subsystems and drivers can create and queue work items through special
workqueue API functions as they see fit. They can influence some
aspects of the way the work items are executed by setting flags on the
workqueue they are putting the work item on. These flags include
things like CPU locality, concurrency limits, priority and more. To
get a detailed overview refer to the API description of
``alloc_workqueue()`` below.

When a work item is queued to a workqueue, the target worker-pool is
determined according to the queue parameters and workqueue attributes
and appended on the shared worklist of the worker-pool. For example,
unless specifically overridden, a work item of a bound workqueue will
be queued on the worklist of either the normal or the highpri
worker-pool that is associated with the CPU the issuer is running on.

For any worker pool implementation, managing the concurrency level
(how many execution contexts are active) is an important issue. cmwq
tries to keep the concurrency at a minimal but sufficient level.
Minimal to save resources and sufficient in that the system is used at
its full capacity.

Each worker-pool bound to an actual CPU implements concurrency
management by hooking into the scheduler. The worker-pool is notified
whenever an active worker wakes up or sleeps and keeps track of the
number of currently runnable workers. Generally, work items are not
expected to hog a CPU and consume many cycles. That means maintaining
just enough concurrency to prevent work processing from stalling
should be optimal. As long as there are one or more runnable workers
on the CPU, the worker-pool doesn't start execution of a new work
item, but, when the last running worker goes to sleep, it immediately
schedules a new worker so that the CPU doesn't sit idle while there
are pending work items. This allows using a minimal number of workers
without losing execution bandwidth.

Keeping idle workers around doesn't cost anything other than the
memory space used for kthreads, so cmwq holds onto idle ones for a
while before killing them.

For unbound workqueues, the number of backing pools is dynamic. An
unbound workqueue can be assigned custom attributes using
``apply_workqueue_attrs()`` and the workqueue will automatically
create backing worker pools matching the attributes. The
responsibility of regulating the concurrency level is on the users.
There is also a flag to mark a bound wq to ignore the concurrency
management. Please refer to the API section for details.

The forward progress guarantee relies on workers being creatable when
more execution contexts are necessary, which in turn is guaranteed
through the use of rescue workers. All work items which might be used
on code paths that handle memory reclaim are required to be queued on
wq's that have a rescue-worker reserved for execution under memory
pressure. Otherwise, it is possible that the worker-pool deadlocks
waiting for execution contexts to free up.
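As a hedged illustration of the rescuer requirement, a subsystem whose
work items sit in the memory reclaim path might allocate its wq as in
the following sketch. The names ``reclaim_wq``, ``writeback_work_fn``
and ``example_init`` are hypothetical; ``alloc_workqueue()`` itself is
described in the API section below. ::

  #include <linux/errno.h>
  #include <linux/init.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *reclaim_wq;

  /* Work that may have to run while the system is reclaiming memory. */
  static void writeback_work_fn(struct work_struct *work)
  {
  }

  static DECLARE_WORK(writeback_work, writeback_work_fn);

  static int __init example_init(void)
  {
          /*
           * WQ_MEM_RECLAIM reserves a rescuer thread, guaranteeing
           * this wq at least one execution context under memory
           * pressure.  @max_active of 0 selects the default.
           */
          reclaim_wq = alloc_workqueue("example_reclaim",
                                       WQ_MEM_RECLAIM, 0);
          if (!reclaim_wq)
                  return -ENOMEM;

          queue_work(reclaim_wq, &writeback_work);
          return 0;
  }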
Application Programming Interface (API)
=======================================

``alloc_workqueue()`` allocates a wq. The original
``create_*workqueue()`` functions are deprecated and scheduled for
removal. ``alloc_workqueue()`` takes three arguments - ``@name``,
``@flags`` and ``@max_active``. ``@name`` is the name of the wq and
is also used as the name of the rescuer thread if there is one.

A wq no longer manages execution resources but serves as a domain for
forward progress guarantee, flush and work item attributes. ``@flags``
and ``@max_active`` control how work items are assigned execution
resources, scheduled and executed.


``flags``
---------

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by the special
  worker-pools which host workers which are not bound to any
  specific CPU. This makes the wq behave as a simple execution
  context provider without concurrency management. The unbound
  worker-pools try to start execution of work items as soon as
  possible. Unbound wq sacrifices locality but is useful for
  the following cases.

  * Wide fluctuation in the concurrency level requirement is
    expected and using a bound wq may end up creating a large
    number of mostly unused workers across different CPUs as the
    issuer hops through different CPUs.

  * Long running CPU intensive workloads which can be better
    managed by the system scheduler.

``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system
  suspend operations. Work items on the wq are drained and no
  new work item starts execution until thawed.

``WQ_MEM_RECLAIM``
  All wq which might be used in the memory reclaim paths **MUST**
  have this flag set. The wq is guaranteed to have at least one
  execution context regardless of memory pressure.

``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri
  worker-pool of the target CPU. Highpri worker-pools are
  served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with
  each other. Each maintains its separate pool of workers and
  implements concurrency management among its workers.

``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the
  concurrency level. In other words, runnable CPU intensive
  work items will not prevent other work items in the same
  worker-pool from starting execution. This is useful for bound
  work items which are expected to hog CPU cycles so that their
  execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the
  concurrency level, the start of their execution is still
  regulated by the concurrency management and runnable
  non-CPU-intensive work items can delay execution of CPU
  intensive work items.

  This flag is meaningless for unbound wq.


``max_active``
--------------

``@max_active`` determines the maximum number of execution contexts
per CPU which can be assigned to the work items of a wq. For example,
with ``@max_active`` of 16, at most 16 work items of the wq can be
executing at the same time per CPU.

Currently, for a bound wq, the maximum limit for ``@max_active`` is
512 and the default value used when 0 is specified is 256. For an
unbound wq, the limit is the higher of 512 and 4 *
``num_possible_cpus()``. These values are chosen sufficiently high
such that they are not the limiting factor while providing protection
in runaway cases.

The number of active work items of a wq is usually regulated by the
users of the wq, more specifically, by how many work items the users
may queue at the same time. Unless there is a specific need for
throttling the number of active work items, specifying '0' is
recommended.

Some users depend on the strict execution ordering of ST wq. The
combination of ``@max_active`` of 1 and ``WQ_UNBOUND`` was used to
achieve this behavior. Work items on such wq were always queued to the
unbound worker-pools and only one work item could be active at any
given time, thus achieving the same ordering property as ST wq.

In the current implementation the above configuration only guarantees
ST behavior within a given NUMA node. Instead
``alloc_ordered_workqueue()`` should be used to achieve system-wide ST
behavior.
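The following sketch pulls the pieces above together by allocating an
unbound wq and an ordered wq. The variable and function names are
hypothetical, while ``alloc_workqueue()``,
``alloc_ordered_workqueue()`` and ``destroy_workqueue()`` are the real
interfaces. ::

  #include <linux/errno.h>
  #include <linux/workqueue.h>

  static struct workqueue_struct *poll_wq;
  static struct workqueue_struct *ordered_wq;

  static int example_setup(void)
  {
          /*
           * An unbound wq: work items may run on any CPU and their
           * execution is left to the scheduler.  @max_active of 0
           * selects the default limit.
           */
          poll_wq = alloc_workqueue("example_poll", WQ_UNBOUND, 0);
          if (!poll_wq)
                  return -ENOMEM;

          /*
           * An ordered wq: at most one work item executes at any
           * given time, system-wide, in queueing order.
           */
          ordered_wq = alloc_ordered_workqueue("example_ordered", 0);
          if (!ordered_wq) {
                  destroy_workqueue(poll_wq);
                  return -ENOMEM;
          }

          return 0;
  }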
Example Execution Scenarios
===========================

The following example execution scenarios try to illustrate how cmwq
behaves under different configurations.

  Work items w0, w1, w2 are queued to a bound wq q0 on the same CPU.
  w0 burns CPU for 5ms then sleeps for 10ms then burns CPU for 5ms
  again before finishing. w1 and w2 burn CPU for 5ms then sleep for
  10ms.

Ignoring all other tasks, works and processing overhead, and assuming
simple FIFO scheduling, the following is one highly simplified version
of possible sequences of events with the original wq. ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 starts and burns CPU
  25             w1 sleeps
  35             w1 wakes up and finishes
  35             w2 starts and burns CPU
  40             w2 sleeps
  50             w2 wakes up and finishes

And with cmwq with ``@max_active`` >= 3, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 starts and burns CPU
  10             w1 sleeps
  10             w2 starts and burns CPU
  15             w2 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  25             w2 wakes up and finishes

If ``@max_active`` == 2, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 starts and burns CPU
  10             w1 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  20             w2 starts and burns CPU
  25             w2 sleeps
  35             w2 wakes up and finishes

Now, let's assume w1 and w2 are queued to a different wq q1 which has
``WQ_CPU_INTENSIVE`` set, ::

  TIME IN MSECS  EVENT
  0              w0 starts and burns CPU
  5              w0 sleeps
  5              w1 and w2 start and burn CPU
  10             w1 sleeps
  15             w2 sleeps
  15             w0 wakes up and burns CPU
  20             w0 finishes
  20             w1 wakes up and finishes
  25             w2 wakes up and finishes


Guidelines
==========

* Do not forget to use ``WQ_MEM_RECLAIM`` if a wq may process work
  items which are used during memory reclaim. Each wq with
  ``WQ_MEM_RECLAIM`` set has an execution context reserved for it. If
  there is dependency among multiple work items used during memory
  reclaim, they should be queued to separate wq each with
  ``WQ_MEM_RECLAIM``.

* Unless strict ordering is required, there is no need to use ST wq.

* Unless there is a specific need, using 0 for @max_active is
  recommended. In most use cases, the concurrency level usually stays
  well under the default limit.

* A wq serves as a domain for forward progress guarantee
  (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items
  which are not involved in memory reclaim and don't need to be
  flushed as a part of a group of work items, and don't require any
  special attribute, can use one of the system wq. There is no
  difference in execution characteristics between using a dedicated wq
  and a system wq. A teardown sketch using the flush and destroy
  operations follows this list.

* Unless work items are expected to consume a huge amount of CPU
  cycles, using a bound wq is usually beneficial due to the increased
  level of locality in wq operations and work item execution.
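To make the flush-domain guideline concrete, here is a sketch of a
teardown path, under the assumption that a dedicated wq and a work
item were set up as in the earlier examples; ``example_teardown`` and
its parameters are hypothetical names. ::

  #include <linux/workqueue.h>

  static void example_teardown(struct workqueue_struct *example_wq,
                               struct work_struct *example_work)
  {
          /* Wait for one specific work item to finish executing. */
          flush_work(example_work);

          /* Or wait for every work item queued on the wq so far. */
          flush_workqueue(example_wq);

          /*
           * destroy_workqueue() drains the wq before freeing it;
           * queueing new work items after this point is a bug.
           */
          destroy_workqueue(example_wq);
  }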
Debugging
=========

Because the work functions are executed by generic worker threads
there are a few tricks needed to shed some light on misbehaving
workqueue users.

Worker threads show up in the process list as: ::

  root      5671  0.0  0.0      0     0 ?  S  12:07  0:00 [kworker/0:1]
  root      5672  0.0  0.0      0     0 ?  S  12:07  0:00 [kworker/1:2]
  root      5673  0.0  0.0      0     0 ?  S  12:12  0:00 [kworker/0:0]
  root      5674  0.0  0.0      0     0 ?  S  12:13  0:00 [kworker/1:0]

If kworkers are going crazy (using too much CPU), there are two types
of possible problems:

  1. Something being scheduled in rapid succession
  2. A single work item that consumes lots of cpu cycles

The first one can be tracked using tracing: ::

  $ echo workqueue:workqueue_queue_work > /sys/kernel/debug/tracing/set_event
  $ cat /sys/kernel/debug/tracing/trace_pipe > out.txt
  (wait a few secs)
  ^C

If something is busy looping on work queueing, it would be dominating
the output and the offender can be determined with the work item
function.

For the second type of problem it should be possible to just check
the stack trace of the offending worker thread. ::

  $ cat /proc/THE_OFFENDING_KWORKER/stack

The work item's function should be trivially visible in the stack
trace.


Non-reentrance Conditions
=========================

Workqueue guarantees that a work item cannot be re-entrant if the
following conditions hold after a work item gets queued:

  1. The work function hasn't been changed.
  2. No one queues the work item to another workqueue.
  3. The work item hasn't been reinitiated.

In other words, if the above conditions hold, the work item is
guaranteed to be executed by at most one worker system-wide at any
given time.

Note that requeuing the work item (to the same queue) from within its
own work function doesn't break these conditions, so it's safe to do.
Otherwise, caution is required when breaking the conditions inside a
work function.


Kernel Inline Documentation Reference
=====================================

.. kernel-doc:: include/linux/workqueue.h

.. kernel-doc:: kernel/workqueue.c