.. SPDX-License-Identifier: GPL-2.0

================================
Review Checklist for RCU Patches
================================


This document contains a checklist for producing and reviewing patches
that make use of RCU.  Violating any of the rules listed below will
result in the same sorts of problems that leaving out a locking primitive
would cause.  This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!

0.  Is RCU being applied to a read-mostly situation?  If the data
    structure is updated more than about 10% of the time, then you
    should strongly consider some other approach, unless detailed
    performance measurements show that RCU is nonetheless the right
    tool for the job.  Yes, RCU does reduce read-side overhead by
    increasing write-side overhead, which is exactly why normal uses
    of RCU will do much more reading than updating.

    Another exception is where performance is not an issue, and RCU
    provides a simpler implementation.  An example of this situation
    is the dynamic NMI code in the Linux 2.6 kernel, at least on
    architectures where NMIs are rare.

    Yet another exception is where the low real-time latency of RCU's
    read-side primitives is critically important.

    One final exception is where RCU readers are used to prevent
    the ABA problem (https://en.wikipedia.org/wiki/ABA_problem)
    for lockless updates.  This does result in the mildly
    counter-intuitive situation where rcu_read_lock() and
    rcu_read_unlock() are used to protect updates; however, this
    approach provides the same potential simplifications that garbage
    collectors do.

1.  Does the update code have proper mutual exclusion?

    RCU does allow *readers* to run (almost) naked, but *writers* must
    still use some sort of mutual exclusion, such as:

    a.  locking,
    b.  atomic operations, or
    c.  restricting updates to a single task.

    If you choose #b, be prepared to describe how you have handled
    memory barriers on weakly ordered machines (pretty much all of
    them -- even x86 allows later loads to be reordered to precede
    earlier stores), and be prepared to explain why this added
    complexity is worthwhile.  If you choose #c, be prepared to
    explain how this single task does not become a major bottleneck
    on big multiprocessor machines (for example, if the task is
    updating information relating to itself that other tasks can
    read, there by definition can be no bottleneck).  Note that the
    definition of "large" has changed significantly:  Eight CPUs was
    "large" in the year 2000, but a hundred CPUs was unremarkable
    in 2017.

2.  Do the RCU read-side critical sections make proper use of
    rcu_read_lock() and friends?  These primitives are needed
    to prevent grace periods from ending prematurely, which
    could result in data being unceremoniously freed out from
    under your read-side code, which can greatly increase the
    actuarial risk of your kernel.

    As a rough rule of thumb, any dereference of an RCU-protected
    pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
    rcu_read_lock_sched(), or by the appropriate update-side lock.
    Explicit disabling of preemption (preempt_disable(), for example)
    can serve as rcu_read_lock_sched(), but is less readable and
    prevents lockdep from detecting locking issues.

    Please note that you *cannot* rely on code known to be built
    only in non-preemptible kernels.  Such code can and will break,
    especially in kernels built with CONFIG_PREEMPT_COUNT=y.

    Letting RCU-protected pointers "leak" out of an RCU read-side
    critical section is every bit as bad as letting them leak out
    from under a lock -- unless, of course, you have arranged some
    other means of protection, such as a lock or a reference count,
    *before* letting them out of the RCU read-side critical section.
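    For example, here is a minimal reader sketch.  The struct foo and
    the global pointer gp are hypothetical, standing in for whatever
    RCU-protected data your patch actually touches::

	struct foo {
		int a;
		int b;
	};

	static struct foo __rcu *gp;	/* hypothetical RCU-protected pointer */

	int read_foo_a(void)
	{
		struct foo *p;
		int ret = -1;

		rcu_read_lock();	/* Keep the grace period at bay. */
		p = rcu_dereference(gp);
		if (p)
			ret = p->a;	/* Use p only within this section. */
		rcu_read_unlock();	/* After this, p must not be used. */
		return ret;
	}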
3.  Does the update code tolerate concurrent accesses?

    The whole point of RCU is to permit readers to run without
    any locks or atomic operations.  This means that readers will
    be running while updates are in progress.  There are a number
    of ways to handle this concurrency, depending on the situation:

    a.  Use the RCU variants of the list and hlist update
        primitives to add, remove, and replace elements on
        an RCU-protected list.  Alternatively, use the other
        RCU-protected data structures that have been added to
        the Linux kernel.

        This is almost always the best approach.

    b.  Proceed as in (a) above, but also maintain per-element
        locks (that are acquired by both readers and writers)
        that guard per-element state.  Of course, fields that
        the readers refrain from accessing can be guarded by
        some other lock acquired only by updaters, if desired.

        This works quite well, also.

    c.  Make updates appear atomic to readers.  For example,
        pointer updates to properly aligned fields will
        appear atomic, as will individual atomic primitives.
        Sequences of operations performed under a lock will *not*
        appear to be atomic to RCU readers, nor will sequences
        of multiple atomic primitives.

        This can work, but is starting to get a bit tricky.

    d.  Carefully order the updates and the reads so that readers
        see valid data at all phases of the update.  This is often
        more difficult than it sounds, especially given modern
        CPUs' tendency to reorder memory references.  One must
        usually liberally sprinkle memory barriers (smp_wmb(),
        smp_rmb(), smp_mb()) through the code, making it difficult
        to understand and to test.

        It is usually better to group the changing data into
        a separate structure, so that the change may be made
        to appear atomic by updating a pointer to reference
        a new structure containing updated values, as in the
        sketch below.
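    For example, a minimal copy-then-publish update, reusing the
    hypothetical struct foo and gp pointer from item 2's sketch and
    assuming a hypothetical foo_lock for update-side mutual
    exclusion::

	static DEFINE_SPINLOCK(foo_lock);	/* hypothetical update-side lock */

	int update_foo(int new_a, int new_b)
	{
		struct foo *newp, *oldp;

		newp = kzalloc(sizeof(*newp), GFP_KERNEL);
		if (!newp)
			return -ENOMEM;
		spin_lock(&foo_lock);
		oldp = rcu_dereference_protected(gp,
				lockdep_is_held(&foo_lock));
		newp->a = new_a;		/* Fill in the private copy. */
		newp->b = new_b;
		rcu_assign_pointer(gp, newp);	/* Publish the new version. */
		spin_unlock(&foo_lock);
		if (oldp) {
			synchronize_rcu();	/* Wait for pre-existing readers. */
			kfree(oldp);
		}
		return 0;
	}

    Readers then see either the old structure or the new one, never
    a half-updated mix of the two fields.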
4.  Weakly ordered CPUs pose special challenges.  Almost all CPUs
    are weakly ordered -- even x86 CPUs allow later loads to be
    reordered to precede earlier stores.  RCU code must take all of
    the following measures to prevent memory-corruption problems:

    a.  Readers must maintain proper ordering of their memory
        accesses.  The rcu_dereference() primitive ensures that
        the CPU picks up the pointer before it picks up the data
        that the pointer points to.  This really is necessary
        on Alpha CPUs.

        The rcu_dereference() primitive is also an excellent
        documentation aid, letting the person reading the
        code know exactly which pointers are protected by RCU.
        Please note that compilers can also reorder code, and
        they are becoming increasingly aggressive about doing
        just that.  The rcu_dereference() primitive therefore also
        prevents destructive compiler optimizations.  However,
        with a bit of devious creativity, it is possible to
        mishandle the return value from rcu_dereference().
        Please see rcu_dereference.rst for more information.

        The rcu_dereference() primitive is used by the
        various "_rcu()" list-traversal primitives, such
        as list_for_each_entry_rcu().  Note that it is
        perfectly legal (if redundant) for update-side code to
        use rcu_dereference() and the "_rcu()" list-traversal
        primitives.  This is particularly useful in code that
        is common to readers and updaters.  However, lockdep
        will complain if you invoke rcu_dereference() outside
        of an RCU read-side critical section.  See lockdep.rst
        to learn what to do about this.

        Of course, neither rcu_dereference() nor the "_rcu()"
        list-traversal primitives can substitute for a good
        concurrency design coordinating among multiple updaters.

    b.  If the list macros are being used, the list_add_tail_rcu()
        and list_add_rcu() primitives must be used in order
        to prevent weakly ordered machines from misordering
        structure initialization and pointer planting.
        Similarly, if the hlist macros are being used, the
        hlist_add_head_rcu() primitive is required.  (See the
        sketch at the end of this item.)

    c.  If the list macros are being used, the list_del_rcu()
        primitive must be used to keep list_del()'s pointer
        poisoning from inflicting toxic effects on concurrent
        readers.  Similarly, if the hlist macros are being used,
        the hlist_del_rcu() primitive is required.

        The list_replace_rcu() and hlist_replace_rcu() primitives
        may be used to replace an old structure with a new one
        in their respective types of RCU-protected lists.

    d.  Rules similar to (4b) and (4c) apply to the "hlist_nulls"
        type of RCU-protected linked lists.

    e.  Updates must ensure that initialization of a given
        structure happens before pointers to that structure are
        publicized.  Use the rcu_assign_pointer() primitive
        when publicizing a pointer to a structure that can
        be traversed by an RCU read-side critical section.
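    To illustrate (4b) and (4c), here is a sketch of an RCU-protected
    list.  The element type, list head, and lock are all
    hypothetical::

	struct foo_entry {
		struct list_head list;
		int key;
		struct rcu_head rcu;
	};

	static LIST_HEAD(foo_list);		/* hypothetical list */
	static DEFINE_SPINLOCK(foo_list_lock);	/* update-side lock */

	void foo_add(struct foo_entry *e)
	{
		spin_lock(&foo_list_lock);
		list_add_rcu(&e->list, &foo_list); /* Init before publication. */
		spin_unlock(&foo_list_lock);
	}

	void foo_del(struct foo_entry *e)
	{
		spin_lock(&foo_list_lock);
		list_del_rcu(&e->list);	/* Reader-safe pointer poisoning. */
		spin_unlock(&foo_list_lock);
		kfree_rcu(e, rcu);	/* Free after a grace period. */
	}

	/* Caller must hold rcu_read_lock(). */
	struct foo_entry *foo_find(int key)
	{
		struct foo_entry *e;

		list_for_each_entry_rcu(e, &foo_list, list)
			if (e->key == key)
				return e;
		return NULL;
	}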
5.  If call_rcu() or call_srcu() is used, the callback function will
    be called from softirq context.  In particular, it cannot block.
    If you need the callback to block, run that code in a workqueue
    handler scheduled from the callback.  The queue_rcu_work()
    function does this for you in the case of call_rcu(), as
    sketched below.
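    For example, here is a sketch of deferred blocking cleanup via
    queue_rcu_work().  The struct foo_defer and its work function are
    hypothetical::

	struct foo_defer {
		struct rcu_work rwork;
		/* ... resources whose release might block ... */
	};

	static void foo_cleanup_workfn(struct work_struct *work)
	{
		struct foo_defer *fd = container_of(to_rcu_work(work),
						    struct foo_defer, rwork);

		/* Process context, after a grace period: may block. */
		kfree(fd);
	}

	static void foo_defer_cleanup(struct foo_defer *fd)
	{
		INIT_RCU_WORK(&fd->rwork, foo_cleanup_workfn);
		queue_rcu_work(system_wq, &fd->rwork);
	}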
6.  Since synchronize_rcu() can block, it cannot be called
    from any sort of irq context.  The same rule applies
    for synchronize_srcu(), synchronize_rcu_expedited(), and
    synchronize_srcu_expedited().

    The expedited forms of these primitives have the same semantics
    as the non-expedited forms, but expediting is both expensive and
    (with the exception of synchronize_srcu_expedited()) unfriendly
    to real-time workloads.  Use of the expedited primitives should
    be restricted to rare configuration-change operations that would
    not normally be undertaken while a real-time workload is running.
    However, real-time workloads can use the rcupdate.rcu_normal
    kernel boot parameter to completely disable expedited grace
    periods, though this might have performance implications.

    In particular, if you find yourself invoking one of the expedited
    primitives repeatedly in a loop, please do everyone a favor:
    Restructure your code so that it batches the updates, allowing
    a single non-expedited primitive to cover the entire batch.
    This will very likely be faster than the loop containing the
    expedited primitive, and will be much, much easier on the rest
    of the system, especially on any real-time workloads running
    there.

7.  As of v4.20, a given kernel implements only one RCU flavor, which
    is RCU-sched for PREEMPTION=n and RCU-preempt for PREEMPTION=y.
    If the updater uses call_rcu() or synchronize_rcu(), then
    the corresponding readers may use:  (1) rcu_read_lock() and
    rcu_read_unlock(), (2) any pair of primitives that disables
    and re-enables softirq, for example, rcu_read_lock_bh() and
    rcu_read_unlock_bh(), or (3) any pair of primitives that disables
    and re-enables preemption, for example, rcu_read_lock_sched() and
    rcu_read_unlock_sched().  If the updater uses synchronize_srcu()
    or call_srcu(), then the corresponding readers must use
    srcu_read_lock() and srcu_read_unlock(), and with the same
    srcu_struct.  The rules for the expedited RCU grace-period-wait
    primitives are the same as for their non-expedited counterparts.

    If the updater uses call_rcu_tasks() or synchronize_rcu_tasks(),
    then the readers must refrain from executing voluntary
    context switches, that is, from blocking.  If the updater uses
    call_rcu_tasks_trace() or synchronize_rcu_tasks_trace(), then
    the corresponding readers must use rcu_read_lock_trace() and
    rcu_read_unlock_trace().  If an updater uses call_rcu_tasks_rude()
    or synchronize_rcu_tasks_rude(), then the corresponding readers
    must use anything that disables interrupts.

    Mixing things up will result in confusion and broken kernels, and
    has even resulted in an exploitable security issue.  Therefore,
    when using non-obvious pairs of primitives, commenting is
    of course a must.  One example of non-obvious pairing is
    the XDP feature in networking, which calls BPF programs from
    network-driver NAPI (softirq) context.  BPF relies heavily on RCU
    protection for its data structures, but because the BPF program
    invocation happens entirely within a single local_bh_disable()
    section in a NAPI poll cycle, this usage is safe.  The reason
    that this usage is safe is that readers can use anything that
    disables BH when updaters use call_rcu() or synchronize_rcu().

8.  Although synchronize_rcu() is slower than is call_rcu(), it
    usually results in simpler code.  So, unless update performance is
    critically important, the updaters cannot block, or the latency of
    synchronize_rcu() is visible from userspace, synchronize_rcu()
    should be used in preference to call_rcu().  Furthermore,
    kfree_rcu() usually results in even simpler code than does
    synchronize_rcu() without synchronize_rcu()'s multi-millisecond
    latency.  So please take advantage of kfree_rcu()'s "fire and
    forget" memory-freeing capabilities where it applies.  (The three
    styles are contrasted in the sketch at the end of this item.)

    An especially important property of the synchronize_rcu()
    primitive is that it automatically self-limits: if grace periods
    are delayed for whatever reason, then the synchronize_rcu()
    primitive will correspondingly delay updates.  In contrast,
    code using call_rcu() should explicitly limit update rate in
    cases where grace periods are delayed, as failing to do so can
    result in excessive realtime latencies or even OOM conditions.

    Ways of gaining this self-limiting property when using call_rcu()
    include:

    a.  Keeping a count of the number of data-structure elements
        used by the RCU-protected data structure, including
        those waiting for a grace period to elapse.  Enforce a
        limit on this number, stalling updates as needed to allow
        previously deferred frees to complete.  Alternatively,
        limit only the number awaiting deferred free rather than
        the total number of elements.

        One way to stall the updates is to acquire the update-side
        mutex.  (Don't try this with a spinlock -- other CPUs
        spinning on the lock could prevent the grace period
        from ever ending.)  Another way to stall the updates
        is for the updates to use a wrapper function around
        the memory allocator, so that this wrapper function
        simulates OOM when there is too much memory awaiting an
        RCU grace period.  There are of course many other
        variations on this theme.

    b.  Limiting update rate.  For example, if updates occur only
        once per hour, then no explicit rate limiting is
        required, unless your system is already badly broken.
        Older versions of the dcache subsystem took this approach,
        guarding updates with a global lock, limiting their rate.

    c.  Trusted update -- if updates can only be done manually by
        superuser or some other trusted user, then it might not
        be necessary to automatically limit them.  The theory
        here is that superuser already has lots of ways to crash
        the machine.

    d.  Periodically invoke synchronize_rcu(), permitting a limited
        number of updates per grace period.  Better yet, periodically
        invoke rcu_barrier() to wait for all outstanding callbacks.

    The same cautions apply to call_srcu() and kfree_rcu().

    Note that although these primitives do take action to avoid
    memory exhaustion when any given CPU has too many callbacks,
    a determined user could still exhaust memory.  This is
    especially the case if a system with a large number of CPUs has
    been configured to offload all of its RCU callbacks onto a single
    CPU, or if the system has relatively little free memory.
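    To make the comparison concrete, here is a sketch of the three
    deferred-free styles, reusing the hypothetical struct foo_entry
    (and its embedded rcu_head) from item 4's list example, and
    assuming in each case that the element has already been unlinked
    from the list::

	/* (1) kfree_rcu(): fire and forget, simplest of all. */
	static void foo_retire_1(struct foo_entry *e)
	{
		kfree_rcu(e, rcu);	/* "rcu" names the rcu_head field. */
	}

	/* (2) synchronize_rcu(): simple, but blocks for a grace period. */
	static void foo_retire_2(struct foo_entry *e)
	{
		synchronize_rcu();	/* Wait for pre-existing readers. */
		kfree(e);
	}

	/*
	 * (3) call_rcu(): never blocks, but the caller must limit the
	 * update rate, as discussed in the list above.
	 */
	static void foo_free_cb(struct rcu_head *head)
	{
		kfree(container_of(head, struct foo_entry, rcu));
	}

	static void foo_retire_3(struct foo_entry *e)
	{
		call_rcu(&e->rcu, foo_free_cb);	/* Callback runs in softirq. */
	}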
9.  All RCU list-traversal primitives, which include
    rcu_dereference(), list_for_each_entry_rcu(), and
    list_for_each_safe_rcu(), must be either within an RCU read-side
    critical section or must be protected by appropriate update-side
    locks.  RCU read-side critical sections are delimited by
    rcu_read_lock() and rcu_read_unlock(), or by similar primitives
    such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
    case the matching variant of rcu_dereference() must be used in
    order to keep lockdep happy -- in this case,
    rcu_dereference_bh().

    The reason that it is permissible to use RCU list-traversal
    primitives when the update-side lock is held is that doing so
    can be quite helpful in reducing code bloat when common code is
    shared between readers and updaters.  Additional primitives
    are provided for this case, as discussed in lockdep.rst.

    One exception to this rule is when data is only ever added to
    the linked data structure, and is never removed during any
    time that readers might be accessing that structure.  In such
    cases, READ_ONCE() may be used in place of rcu_dereference()
    and the read-side markers (rcu_read_lock() and rcu_read_unlock(),
    for example) may be omitted.

10. Conversely, if you are in an RCU read-side critical section,
    and you don't hold the appropriate update-side lock, you *must*
    use the "_rcu()" variants of the list macros.  Failing to do so
    will break Alpha, cause aggressive compilers to generate bad
    code, and confuse people trying to read your code.

11. Any lock acquired by an RCU callback must be acquired elsewhere
    with softirq disabled, e.g., via spin_lock_irqsave(),
    spin_lock_bh(), etc.  Failing to disable softirq on a given
    acquisition of that lock will result in deadlock as soon as
    the RCU softirq handler happens to run your RCU callback while
    interrupting that acquisition's critical section.

12. RCU callbacks can be and are executed in parallel.  In many
    cases, the callback code is simply a wrapper around kfree(), so
    that this is not an issue (or, more accurately, to the extent
    that it is an issue, the memory-allocator locking handles it).
    However, if the callbacks do manipulate a shared data structure,
    they must use whatever locking or other synchronization is
    required to safely access and/or modify that data structure.

    Do not assume that RCU callbacks will be executed on the same
    CPU that executed the corresponding call_rcu() or call_srcu().
    For example, if a given CPU goes offline while having an RCU
    callback pending, then that RCU callback will execute on some
    surviving CPU.  (If this was not the case, a self-spawning RCU
    callback would prevent the victim CPU from ever going offline.)
    Furthermore, CPUs designated by rcu_nocbs= might well *always*
    have their RCU callbacks executed on some other CPUs.  In fact,
    for some real-time workloads, this is the whole point of using
    the rcu_nocbs= kernel boot parameter.

13. Unlike other forms of RCU, it *is* permissible to block in an
    SRCU read-side critical section (demarked by srcu_read_lock()
    and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
    Please note that if you don't need to sleep in read-side
    critical sections, you should be using RCU rather than SRCU,
    because RCU is almost always faster and easier to use than is
    SRCU.

    Also unlike other forms of RCU, explicit initialization and
    cleanup is required either at build time via DEFINE_SRCU()
    or DEFINE_STATIC_SRCU() or at runtime via init_srcu_struct()
    and cleanup_srcu_struct().  These last two are passed a
    "struct srcu_struct" that defines the scope of a given
    SRCU domain.  Once initialized, the srcu_struct is passed
    to srcu_read_lock(), srcu_read_unlock(), synchronize_srcu(),
    synchronize_srcu_expedited(), and call_srcu().  A given
    synchronize_srcu() waits only for SRCU read-side critical
    sections governed by srcu_read_lock() and srcu_read_unlock()
    calls that have been passed the same srcu_struct.  This property
    is what makes sleeping read-side critical sections tolerable --
    a given subsystem delays only its own updates, not those of other
    subsystems using SRCU.  Therefore, SRCU is less prone to OOM the
    system than RCU would be if RCU's read-side critical sections
    were permitted to sleep.

    The ability to sleep in read-side critical sections does not
    come for free.  First, corresponding srcu_read_lock() and
    srcu_read_unlock() calls must be passed the same srcu_struct.
    Second, grace-period-detection overhead is amortized only
    over those updates sharing a given srcu_struct, rather than
    being globally amortized as they are for other forms of RCU.
    Therefore, SRCU should be used in preference to rw_semaphore
    only in extremely read-intensive situations, or in situations
    requiring SRCU's read-side deadlock immunity or low read-side
    realtime latency.  You should also consider percpu_rw_semaphore
    when you need lightweight readers.

    SRCU's expedited primitive (synchronize_srcu_expedited())
    never sends IPIs to other CPUs, so it is easier on
    real-time workloads than is synchronize_rcu_expedited().

    Note that rcu_assign_pointer() relates to SRCU just as it does
    to other forms of RCU, but instead of rcu_dereference() you
    should use srcu_dereference() in order to avoid lockdep splats.
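    For example, here is a minimal SRCU sketch.  It reuses the
    hypothetical struct foo and gp pointer from item 2's example,
    pretending for this sketch that gp is protected by the foo_srcu
    domain rather than by RCU::

	DEFINE_STATIC_SRCU(foo_srcu);	/* hypothetical SRCU domain */

	int foo_read_b(void)
	{
		struct foo *p;
		int ret = -1;
		int idx;

		idx = srcu_read_lock(&foo_srcu);
		p = srcu_dereference(gp, &foo_srcu); /* Not rcu_dereference()! */
		if (p)
			ret = p->b;	/* Unlike RCU, blocking is OK here. */
		srcu_read_unlock(&foo_srcu, idx);
		return ret;
	}

	void foo_retire(struct foo *oldp)
	{
		/* Assumes oldp has already been unpublished from gp. */
		synchronize_srcu(&foo_srcu); /* Waits only for foo_srcu readers. */
		kfree(oldp);
	}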
14. The whole point of call_rcu(), synchronize_rcu(), and friends
    is to wait until all pre-existing readers have finished before
    carrying out some otherwise-destructive operation.  It is
    therefore critically important to *first* remove any path
    that readers can follow that could be affected by the
    destructive operation, and *only then* invoke call_rcu(),
    synchronize_rcu(), or friends.

    Because these primitives only wait for pre-existing readers, it
    is the caller's responsibility to guarantee that any subsequent
    readers will execute safely.

15. The various RCU read-side primitives do *not* necessarily contain
    memory barriers.  You should therefore plan for the CPU
    and the compiler to freely reorder code into and out of RCU
    read-side critical sections.  It is the responsibility of the
    RCU update-side primitives to deal with this.

    For SRCU readers, you can use smp_mb__after_srcu_read_unlock()
    immediately after an srcu_read_unlock() to get a full barrier.

16. Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
    __rcu sparse checks to validate your RCU code.  These can help
    find problems as follows:

    CONFIG_PROVE_LOCKING:
        check that accesses to RCU-protected data
        structures are carried out under the proper RCU
        read-side critical section, while holding the right
        combination of locks, or whatever other conditions
        are appropriate.

    CONFIG_DEBUG_OBJECTS_RCU_HEAD:
        check that you don't pass the
        same object to call_rcu() (or friends) before an RCU
        grace period has elapsed since the last time that you
        passed that same object to call_rcu() (or friends).

    __rcu sparse checks:
        tag the pointer to the RCU-protected data
        structure with __rcu, and sparse will warn you if you
        access that pointer without the services of one of the
        variants of rcu_dereference().

    These debugging aids can help you find problems that are
    otherwise extremely difficult to spot.  The sketch below shows
    the __rcu tagging in action.
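    For example, here is a sketch of the __rcu annotation, again
    using the hypothetical gp pointer (run sparse via "make C=1" to
    see the warnings)::

	static struct foo __rcu *gp;	/* Tagged for the sparse checker. */

	void foo_sparse_demo(void)
	{
		struct foo *p;

		rcu_read_lock();
		p = rcu_dereference(gp); /* OK: sparse and lockdep are happy. */
		/*
		 * By contrast, a plain "p = gp;" would draw a sparse
		 * warning, because it bypasses rcu_dereference().
		 */
		/* ... use p ... */
		rcu_read_unlock();
	}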
17. If you register a callback using call_rcu() or call_srcu(), and
    pass in a function defined within a loadable module, then it is
    necessary to wait for all pending callbacks to be invoked after
    the last invocation and before unloading that module.  Note that
    it is absolutely *not* sufficient to wait for a grace period!
    The current (say) synchronize_rcu() implementation is *not*
    guaranteed to wait for callbacks registered on other CPUs, or
    even on the current CPU if that CPU recently went offline and
    came back online.

    You instead need to use one of the barrier functions:

    - call_rcu() -> rcu_barrier()
    - call_srcu() -> srcu_barrier()

    However, these barrier functions are absolutely *not* guaranteed
    to wait for a grace period.  In fact, if there are no call_rcu()
    callbacks waiting anywhere in the system, rcu_barrier() is within
    its rights to return immediately.

    So if you need to wait for both an RCU grace period and for
    all pre-existing call_rcu() callbacks, you will need to execute
    both rcu_barrier() and synchronize_rcu(), if necessary, using
    something like workqueues to execute them concurrently.

    See rcubarrier.rst for more information.
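    Finally, here is a sketch of a module-exit function obeying this
    rule.  The unregistration function is hypothetical, standing in
    for whatever prevents new callbacks from being posted::

	static void __exit foo_module_exit(void)
	{
		foo_unregister_hooks();	/* Hypothetical: stop new call_rcu()s. */
		rcu_barrier();		/* Wait for pending callbacks to finish. */
		/* Only now is it safe for the callback code to vanish. */
	}
	module_exit(foo_module_exit);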