.. _whatisrcu_doc:

What is RCU? -- "Read, Copy, Update"
======================================

Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

| 1.  What is RCU, Fundamentally?  https://lwn.net/Articles/262464/
| 2.  What is RCU? Part 2: Usage   https://lwn.net/Articles/263130/
| 3.  RCU part 3: the RCU API      https://lwn.net/Articles/264090/
| 4.  The RCU API, 2010 Edition    https://lwn.net/Articles/418853/
|     2010 Big API Table           https://lwn.net/Articles/419086/
| 5.  The RCU API, 2014 Edition    https://lwn.net/Articles/609904/
|     2014 Big API Table           https://lwn.net/Articles/609973/
| 6.  The RCU API, 2019 Edition    https://lwn.net/Articles/777036/
|     2019 Big API Table           https://lwn.net/Articles/777165/

For those preferring video:

| 1.  Unraveling RCU Mysteries: Fundamentals  https://www.linuxfoundation.org/webinars/unraveling-rcu-usage-mysteries
| 2.  Unraveling RCU Mysteries: Additional Use Cases  https://www.linuxfoundation.org/webinars/unraveling-rcu-usage-mysteries-additional-use-cases


What is RCU?

RCU is a synchronization mechanism that was added to the Linux kernel
during the 2.5 development effort and that is optimized for read-mostly
situations.  Although RCU is actually quite simple, making effective use
of it requires you to think differently about your code.  Another part
of the problem is the mistaken assumption that there is "one true way" to
describe and to use RCU.  Instead, the experience has been that different
people must take different paths to arrive at an understanding of RCU,
depending on their experiences and use cases.  This document provides
several different paths, as follows:

:ref:`1. RCU OVERVIEW <1_whatisRCU>`

:ref:`2. WHAT IS RCU'S CORE API? <2_whatisRCU>`

:ref:`3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API? <3_whatisRCU>`

:ref:`4. WHAT IF MY UPDATING THREAD CANNOT BLOCK? <4_whatisRCU>`

:ref:`5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU? <5_whatisRCU>`

:ref:`6. ANALOGY WITH READER-WRITER LOCKING <6_whatisRCU>`

:ref:`7. ANALOGY WITH REFERENCE COUNTING <7_whatisRCU>`

:ref:`8. FULL LIST OF RCU APIs <8_whatisRCU>`

:ref:`9. ANSWERS TO QUICK QUIZZES <9_whatisRCU>`

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Sections 6 and 7.  Section 8 serves as an index to the docbook
API documentation, and Section 9 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)

.. _1_whatisRCU:

1.  RCU OVERVIEW
----------------

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.
The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference.  The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase.  Because reclaiming data items can disrupt
any readers concurrently referencing those data items, the reclamation
phase must not start until readers no longer hold references to those
data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.  Remove pointers to a data structure, so that subsequent
    readers cannot gain a reference to it.

b.  Wait for all previous readers to complete their RCU read-side
    critical sections.

c.  At this point, there cannot be any readers who hold references
    to the data structure, so it now may safely be reclaimed
    (e.g., kfree()d).

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

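To make steps (a), (b), and (c) concrete before diving into the API,
here is a minimal sketch of an updater replacing an RCU-protected global
pointer.  The names struct item, gptr, and update_item() are invented
for this sketch (Section 3 presents a complete example), and update-side
mutual exclusion (a mutex rather than a spinlock, given that
synchronize_rcu() blocks) is assumed to be supplied by the caller::

    struct item {
            int data;
    };

    struct item __rcu *gptr;        /* Hypothetical RCU-protected pointer. */

    void update_item(struct item *newp)  /* Caller holds the update-side mutex. */
    {
            struct item *oldp;

            oldp = rcu_dereference_protected(gptr, 1);  /* Updater's access. */
            rcu_assign_pointer(gptr, newp); /* (a) Remove pointers to old item. */
            synchronize_rcu();              /* (b) Wait for pre-existing readers. */
            kfree(oldp);                    /* (c) Now safe to reclaim. */
    }
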
So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.

.. _2_whatisRCU:

2.  WHAT IS RCU'S CORE API?
---------------------------

The core RCU API is quite small:

a.  rcu_read_lock()
b.  rcu_read_unlock()
c.  synchronize_rcu() / call_rcu()
d.  rcu_assign_pointer()
e.  rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the other 18 will be
enumerated later.  See the kernel docbook documentation for more info,
or look directly at the function header comments.

rcu_read_lock()
^^^^^^^^^^^^^^^
    void rcu_read_lock(void);

    This temporal primitive is used by a reader to inform the
    reclaimer that the reader is entering an RCU read-side critical
    section.  It is illegal to block while in an RCU read-side
    critical section, though kernels built with CONFIG_PREEMPT_RCU
    can preempt RCU read-side critical sections.  Any RCU-protected
    data structure accessed during an RCU read-side critical section
    is guaranteed to remain unreclaimed for the full duration of that
    critical section.  Reference counts may be used in conjunction
    with RCU to maintain longer-term references to data structures.

rcu_read_unlock()
^^^^^^^^^^^^^^^^^
    void rcu_read_unlock(void);

    This temporal primitive is used by a reader to inform the
    reclaimer that the reader is exiting an RCU read-side critical
    section.  Note that RCU read-side critical sections may be nested
    and/or overlapping.

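    For example, the following nesting is perfectly legal; do_this()
    and do_that() are hypothetical functions standing in for arbitrary
    non-blocking reader-side work::

        rcu_read_lock();        /* Outermost critical section starts. */
        do_this();
        rcu_read_lock();        /* Nested critical section. */
        do_that();
        rcu_read_unlock();      /* Protection remains in effect... */
        do_this();              /* ...so this access is still safe. */
        rcu_read_unlock();      /* Outermost critical section ends. */
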
synchronize_rcu()
^^^^^^^^^^^^^^^^^
    void synchronize_rcu(void);

    This temporal primitive marks the end of updater code and the
    beginning of reclaimer code.  It does this by blocking until
    all pre-existing RCU read-side critical sections on all CPUs
    have completed.  Note that synchronize_rcu() will **not**
    necessarily wait for any subsequent RCU read-side critical
    sections to complete.  For example, consider the following
    sequence of events::

        CPU 0                  CPU 1                      CPU 2
        -----------------  -------------------------  ---------------
        1.  rcu_read_lock()
        2.                 enters synchronize_rcu()
        3.                                            rcu_read_lock()
        4.  rcu_read_unlock()
        5.                 exits synchronize_rcu()
        6.                                            rcu_read_unlock()

    To reiterate, synchronize_rcu() waits only for ongoing RCU
    read-side critical sections to complete, not necessarily for
    any that begin after synchronize_rcu() is invoked.

    Of course, synchronize_rcu() does not necessarily return
    **immediately** after the last pre-existing RCU read-side critical
    section completes.  For one thing, there might well be scheduling
    delays.  For another thing, many RCU implementations process
    requests in batches in order to improve efficiency, which can
    further delay synchronize_rcu().

    Since synchronize_rcu() is the API that must figure out when
    readers are done, its implementation is key to RCU.  For RCU
    to be useful in all but the most read-intensive situations,
    synchronize_rcu()'s overhead must also be quite small.

    The call_rcu() API is an asynchronous callback form of
    synchronize_rcu(), and is described in more detail in a later
    section.  Instead of blocking, it registers a function and
    argument which are invoked after all ongoing RCU read-side
    critical sections have completed.  This callback variant is
    particularly useful in situations where it is illegal to block
    or where update-side performance is critically important.

    However, the call_rcu() API should not be used lightly, as use
    of the synchronize_rcu() API generally results in simpler code.
    In addition, the synchronize_rcu() API has the nice property
    of automatically limiting update rate should grace periods
    be delayed.  This property results in system resilience in the
    face of denial-of-service attacks.  Code using call_rcu() should
    limit update rate in order to gain this same sort of resilience.
    See checklist.rst for some approaches to limiting the update rate.

rcu_assign_pointer()
^^^^^^^^^^^^^^^^^^^^
    void rcu_assign_pointer(p, typeof(p) v);

    Yes, rcu_assign_pointer() **is** implemented as a macro, though
    it would be cool to be able to declare a function in this manner.
    (Compiler experts will no doubt disagree.)

    The updater uses this spatial macro to assign a new value to an
    RCU-protected pointer, in order to safely communicate the change
    in value from the updater to the reader.  This is a spatial (as
    opposed to temporal) macro.  It does not evaluate to an rvalue,
    but it does execute any memory-barrier instructions required
    for a given CPU architecture.  Its ordering properties are that
    of a store-release operation.

    Perhaps just as important, it serves to document (1) which
    pointers are protected by RCU and (2) the point at which a
    given structure becomes accessible to other CPUs.  That said,
    rcu_assign_pointer() is most frequently used indirectly, via
    the _rcu list-manipulation primitives such as list_add_rcu().

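    For example, the classic publication pattern completely
    initializes a structure before making it reachable; a minimal
    sketch, in which struct foo, gp, and the field values are
    stand-ins invented here::

        struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

        if (p) {
                p->a = 1;       /* Initialize the structure fully... */
                p->b = 2;
                rcu_assign_pointer(gp, p); /* ...and only then publish it. */
        }

    The store-release semantics of rcu_assign_pointer() guarantee
    that a reader whose rcu_dereference() returns the new pointer
    also sees the initialized values of ->a and ->b.
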
rcu_dereference()
^^^^^^^^^^^^^^^^^
    typeof(p) rcu_dereference(p);

    Like rcu_assign_pointer(), rcu_dereference() must be implemented
    as a macro.

    The reader uses the spatial rcu_dereference() macro to fetch
    an RCU-protected pointer, which returns a value that may
    then be safely dereferenced.  Note that rcu_dereference()
    does not actually dereference the pointer; instead, it
    protects the pointer for later dereferencing.  It also
    executes any needed memory-barrier instructions for a given
    CPU architecture.  Currently, only Alpha needs memory barriers
    within rcu_dereference() -- on other CPUs, it compiles to a
    volatile load.

    Common coding practice uses rcu_dereference() to copy an
    RCU-protected pointer to a local variable, then dereferences
    this local variable, for example as follows::

        p = rcu_dereference(head.next);
        return p->data;

    However, in this case, one could just as easily combine these
    into one statement::

        return rcu_dereference(head.next)->data;

    If you are going to be fetching multiple fields from the
    RCU-protected structure, using the local variable is of
    course preferred.  Repeated rcu_dereference() calls look
    ugly, do not guarantee that the same pointer will be returned
    if an update happened while in the critical section, and incur
    unnecessary overhead on Alpha CPUs.

    Note that the value returned by rcu_dereference() is valid
    only within the enclosing RCU read-side critical section [1]_.
    For example, the following is **not** legal::

        rcu_read_lock();
        p = rcu_dereference(head.next);
        rcu_read_unlock();
        x = p->address; /* BUG!!! */
        rcu_read_lock();
        y = p->data;    /* BUG!!! */
        rcu_read_unlock();

    Holding a reference from one RCU read-side critical section
    to another is just as illegal as holding a reference from
    one lock-based critical section to another!  Similarly,
    using a reference outside of the critical section in which
    it was acquired is just as illegal as doing so with normal
    locking.

    As with rcu_assign_pointer(), an important function of
    rcu_dereference() is to document which pointers are protected by
    RCU, in particular, flagging a pointer that is subject to changing
    at any time, including immediately after the rcu_dereference().
    And, again like rcu_assign_pointer(), rcu_dereference() is
    typically used indirectly, via the _rcu list-manipulation
    primitives, such as list_for_each_entry_rcu() [2]_.

.. [1] The variant rcu_dereference_protected() can be used outside
    of an RCU read-side critical section as long as the usage is
    protected by locks acquired by the update-side code.  This variant
    avoids the lockdep warning that would happen when using (for
    example) rcu_dereference() without rcu_read_lock() protection.
    Using rcu_dereference_protected() also has the advantage
    of permitting compiler optimizations that rcu_dereference()
    must prohibit.  The rcu_dereference_protected() variant takes
    a lockdep expression to indicate which locks must be acquired
    by the caller.  If the indicated protection is not provided,
    a lockdep splat is emitted.  See Design/Requirements/Requirements.rst
    and the API's code comments for more details and example usage.

.. [2] If the list_for_each_entry_rcu() instance might be used by
    update-side code as well as by RCU readers, then an additional
    lockdep expression can be added to its list of arguments.
    For example, given an additional "lock_is_held(&mylock)" argument,
    the RCU lockdep code would complain only if this instance was
    invoked outside of an RCU read-side critical section and without
    the protection of mylock.

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.
::


     rcu_assign_pointer()
                             +--------+
     +---------------------->| reader |---------+
     |                       +--------+         |
     |                           |              |
     |                           |              | Protect:
     |                           |              | rcu_read_lock()
     |                           |              | rcu_read_unlock()
     |        rcu_dereference()  |              |
     +---------+                 |              |
     | updater |<----------------+              |
     +---------+                                V
     |                                    +-----------+
     +----------------------------------->| reclaimer |
                                          +-----------+
       Defer:
       synchronize_rcu() & call_rcu()


The RCU infrastructure observes the temporal sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.
The rcu_assign_pointer() and rcu_dereference() invocations communicate
spatial changes via stores to and loads from the RCU-protected pointer in
question.

There are at least three flavors of RCU usage in the Linux kernel.  The
diagram above shows the most common one.  On the updater side, the
rcu_assign_pointer(), synchronize_rcu() and call_rcu() primitives used
are the same for all three flavors.  However for protection (on the
reader side), the primitives used vary depending on the flavor:

a.  rcu_read_lock() / rcu_read_unlock()
    rcu_dereference()

b.  rcu_read_lock_bh() / rcu_read_unlock_bh()
    local_bh_disable() / local_bh_enable()
    rcu_dereference_bh()

c.  rcu_read_lock_sched() / rcu_read_unlock_sched()
    preempt_disable() / preempt_enable()
    local_irq_save() / local_irq_restore()
    hardirq enter / hardirq exit
    NMI enter / NMI exit
    rcu_dereference_sched()

These three flavors are used as follows:

a.  RCU applied to normal data structures.

b.  RCU applied to networking data structures that may be subjected
    to remote denial-of-service attacks.

c.  RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.  The SRCU, RCU-Tasks,
RCU-Tasks-Rude, and RCU-Tasks-Trace flavors have similar relationships
among their assorted primitives.

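As a quick illustration of how the reader-side primitives pair up within
a flavor, here is a minimal sketch of a flavor (b) reader; struct
net_info, gbl_net_ptr, and handle_net_info() are hypothetical names
invented for this sketch::

    struct net_info *p;

    rcu_read_lock_bh();                   /* Also disables softirq. */
    p = rcu_dereference_bh(gbl_net_ptr);  /* Flavor-matched dereference. */
    if (p)
            handle_net_info(p);           /* Must not block. */
    rcu_read_unlock_bh();
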
.. _3_whatisRCU:

3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
-----------------------------------------------

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.rst, arrayRCU.rst, and NMI-RCU.rst.
::

    struct foo {
            int a;
            char b;
            long c;
    };
    DEFINE_SPINLOCK(foo_mutex);

    struct foo __rcu *gbl_foo;

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses synchronize_rcu() to ensure that any readers that might
     * have references to the old structure complete before freeing
     * the old structure.
     */
    void foo_update_a(int new_a)
    {
            struct foo *new_fp;
            struct foo *old_fp;

            new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
            spin_lock(&foo_mutex);
            old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
            *new_fp = *old_fp;
            new_fp->a = new_a;
            rcu_assign_pointer(gbl_foo, new_fp);
            spin_unlock(&foo_mutex);
            synchronize_rcu();
            kfree(old_fp);
    }

    /*
     * Return the value of field "a" of the current gbl_foo
     * structure.  Use rcu_read_lock() and rcu_read_unlock()
     * to ensure that the structure does not get deleted out
     * from under us, and use rcu_dereference() to ensure that
     * we see the initialized version of the structure (important
     * for DEC Alpha and for people reading the code).
     */
    int foo_get_a(void)
    {
            int retval;

            rcu_read_lock();
            retval = rcu_dereference(gbl_foo)->a;
            rcu_read_unlock();
            return retval;
    }

So, to sum up:

-   Use rcu_read_lock() and rcu_read_unlock() to guard RCU
    read-side critical sections.

-   Within an RCU read-side critical section, use rcu_dereference()
    to dereference RCU-protected pointers.

-   Use some solid design (such as locks or semaphores) to
    keep concurrent updates from interfering with each other.

-   Use rcu_assign_pointer() to update an RCU-protected pointer.
    This primitive protects concurrent readers from the updater,
    **not** concurrent updates from each other!  You therefore still
    need to use locking (or something similar) to keep concurrent
    rcu_assign_pointer() primitives from interfering with each other.

-   Use synchronize_rcu() **after** removing a data element from an
    RCU-protected data structure, but **before** reclaiming/freeing
    the data element, in order to wait for the completion of all
    RCU read-side critical sections that might be referencing that
    data item.

See checklist.rst for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.rst,
arrayRCU.rst, and NMI-RCU.rst.

.. _4_whatisRCU:

4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?
--------------------------------------------

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows::

    void call_rcu(struct rcu_head *head, rcu_callback_t func);

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows::

    struct foo {
            int a;
            char b;
            long c;
            struct rcu_head rcu;
    };

The foo_update_a() function might then be written as follows::

    /*
     * Create a new struct foo that is the same as the one currently
     * pointed to by gbl_foo, except that field "a" is replaced
     * with "new_a".  Points gbl_foo to the new structure, and
     * frees up the old structure after a grace period.
     *
     * Uses rcu_assign_pointer() to ensure that concurrent readers
     * see the initialized version of the new structure.
     *
     * Uses call_rcu() to ensure that any readers that might have
     * references to the old structure complete before freeing the
     * old structure.
     */
    void foo_update_a(int new_a)
    {
            struct foo *new_fp;
            struct foo *old_fp;

            new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
            spin_lock(&foo_mutex);
            old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
            *new_fp = *old_fp;
            new_fp->a = new_a;
            rcu_assign_pointer(gbl_foo, new_fp);
            spin_unlock(&foo_mutex);
            call_rcu(&old_fp->rcu, foo_reclaim);
    }

The foo_reclaim() function might appear as follows::

    void foo_reclaim(struct rcu_head *rp)
    {
            struct foo *fp = container_of(rp, struct foo, rcu);

            foo_cleanup(fp->a);

            kfree(fp);
    }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

-   Use call_rcu() **after** removing a data element from an
    RCU-protected data structure in order to register a callback
    function that will be invoked after the completion of all RCU
    read-side critical sections that might be referencing that
    data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback::

    kfree_rcu(old_fp, rcu);

If the occasional sleep is permitted, the single-argument form may
be used, omitting the rcu_head structure from struct foo::

    kfree_rcu_mightsleep(old_fp);

This variant almost never blocks, but might do so by invoking
synchronize_rcu() in response to memory-allocation failure.

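For comparison with the call_rcu() version above, here is a sketch of
foo_update_a() using the two-argument kfree_rcu(); error handling
remains elided, as in the other examples, and foo_reclaim() is no
longer needed because nothing beyond kfree() is required::

    void foo_update_a(int new_a)
    {
            struct foo *new_fp;
            struct foo *old_fp;

            new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
            spin_lock(&foo_mutex);
            old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
            *new_fp = *old_fp;
            new_fp->a = new_a;
            rcu_assign_pointer(gbl_foo, new_fp);
            spin_unlock(&foo_mutex);
            kfree_rcu(old_fp, rcu);         /* kfree() after a grace period. */
    }
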
Again, see checklist.rst for additional rules governing the use of RCU.

.. _5_whatisRCU:

5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
------------------------------------------------

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcu/update.c for a
production-quality implementation, and see:

    https://docs.google.com/document/d/1X0lThx8OK0ZgLMqVoXiR4ZrGURHrXK6NyLRbeXe3Xac/edit

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.  It also assumes recursive
reader-writer locks:  If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple::

    static DEFINE_RWLOCK(rcu_gp_mutex);

    void rcu_read_lock(void)
    {
            read_lock(&rcu_gp_mutex);
    }

    void rcu_read_unlock(void)
    {
            read_unlock(&rcu_gp_mutex);
    }

    void synchronize_rcu(void)
    {
            write_lock(&rcu_gp_mutex);
            smp_mb__after_spinlock();
            write_unlock(&rcu_gp_mutex);
    }

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much.  But here are simplified versions anyway.  And whatever you do,
don't forget about them when submitting patches making use of RCU!]::

    #define rcu_assign_pointer(p, v) \
    ({ \
            smp_store_release(&(p), (v)); \
    })

    #define rcu_dereference(p) \
    ({ \
            typeof(p) _________p1 = READ_ONCE(p); \
            (_________p1); \
    })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then releases it.  This means
that once synchronize_rcu() exits, all RCU read-side critical sections
that were in progress before synchronize_rcu() was called are guaranteed
to have completed -- there is no way that synchronize_rcu() would have
been able to write-acquire the lock otherwise.  The smp_mb__after_spinlock()
promotes synchronize_rcu() to a full memory barrier in compliance with
the "Memory-Barrier Guarantees" listed in:

    Design/Requirements/Requirements.rst

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

.. _quiz_1:

Quick Quiz #1:
    Why is this argument naive?  How could a deadlock
    occur when using this algorithm in a real-world Linux
    kernel?  How could this deadlock be avoided?

:ref:`Answers to Quick Quiz <9_whatisRCU>`

5B.  "TOY" EXAMPLE #2: CLASSIC RCU
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPTION
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.
::

    void rcu_read_lock(void) { }

    void rcu_read_unlock(void) { }

    void synchronize_rcu(void)
    {
            int cpu;

            for_each_possible_cpu(cpu)
                    run_on(cpu);
    }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant **toy**!

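For instance, if the toy implementation is taken to run as an ordinary
multithreaded user-level program, run_on() might be sketched as follows;
an in-kernel implementation would instead use a different mechanism,
such as per-CPU kthreads or workqueues::

    #define _GNU_SOURCE
    #include <sched.h>

    static void run_on(int cpu)
    {
            cpu_set_t mask;

            CPU_ZERO(&mask);
            CPU_SET(cpu, &mask);
            /*
             * Migrate this thread to "cpu": by the time the caller
             * resumes, it is running on "cpu", which has therefore
             * performed the desired context switch.
             */
            sched_setaffinity(0, sizeof(mask), &mask);
    }
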
So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once **all** CPUs have executed a context switch, then **all** preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

.. _quiz_2:

Quick Quiz #2:
    Give an example where Classic RCU's read-side
    overhead is **negative**.

:ref:`Answers to Quick Quiz <9_whatisRCU>`

.. _quiz_3:

Quick Quiz #3:
    If it is illegal to block in an RCU read-side
    critical section, what the heck do you do in
    CONFIG_PREEMPT_RT, where normal spinlocks can block???

:ref:`Answers to Quick Quiz <9_whatisRCU>`

.. _6_whatisRCU:

6.  ANALOGY WITH READER-WRITER LOCKING
--------------------------------------

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.
::

     @@ -5,5 +5,5 @@ struct el {
      	int data;
      	/* Other data fields */
      };
     -rwlock_t listmutex;
     +spinlock_t listmutex;
      struct el head;

     @@ -13,15 +14,15 @@
      	struct list_head *lp;
      	struct el *p;

     -	read_lock(&listmutex);
     -	list_for_each_entry(p, head, lp) {
     +	rcu_read_lock();
     +	list_for_each_entry_rcu(p, head, lp) {
      		if (p->key == key) {
      			*result = p->data;
     -			read_unlock(&listmutex);
     +			rcu_read_unlock();
      			return 1;
      		}
      	}
     -	read_unlock(&listmutex);
     +	rcu_read_unlock();
      	return 0;
      }

     @@ -29,15 +30,16 @@
      {
      	struct el *p;

     -	write_lock(&listmutex);
     +	spin_lock(&listmutex);
      	list_for_each_entry(p, head, lp) {
      		if (p->key == key) {
     -			list_del(&p->list);
     -			write_unlock(&listmutex);
     +			list_del_rcu(&p->list);
     +			spin_unlock(&listmutex);
     +			synchronize_rcu();
      			kfree(p);
      			return 1;
      		}
      	}
     -	write_unlock(&listmutex);
     +	spin_unlock(&listmutex);
      	return 0;
      }

Or, for those who prefer a side-by-side listing::

     1 struct el {                           1 struct el {
     2   struct list_head list;              2   struct list_head list;
     3   long key;                           3   long key;
     4   spinlock_t mutex;                   4   spinlock_t mutex;
     5   int data;                           5   int data;
     6   /* Other data fields */             6   /* Other data fields */
     7 };                                    7 };
     8 rwlock_t listmutex;                   8 spinlock_t listmutex;
     9 struct el head;                       9 struct el head;

::

     1 int search(long key, int *result)     1 int search(long key, int *result)
     2 {                                     2 {
     3   struct list_head *lp;               3   struct list_head *lp;
     4   struct el *p;                       4   struct el *p;
     5                                       5
     6   read_lock(&listmutex);              6   rcu_read_lock();
     7   list_for_each_entry(p, head, lp) {  7   list_for_each_entry_rcu(p, head, lp) {
     8     if (p->key == key) {              8     if (p->key == key) {
     9       *result = p->data;              9       *result = p->data;
    10       read_unlock(&listmutex);       10       rcu_read_unlock();
    11       return 1;                      11       return 1;
    12     }                                12     }
    13   }                                  13   }
    14   read_unlock(&listmutex);           14   rcu_read_unlock();
    15   return 0;                          15   return 0;
    16 }                                    16 }

::

     1 int delete(long key)                  1 int delete(long key)
     2 {                                     2 {
     3   struct el *p;                       3   struct el *p;
     4                                       4
     5   write_lock(&listmutex);             5   spin_lock(&listmutex);
     6   list_for_each_entry(p, head, lp) {  6   list_for_each_entry(p, head, lp) {
     7     if (p->key == key) {              7     if (p->key == key) {
     8       list_del(&p->list);             8       list_del_rcu(&p->list);
     9       write_unlock(&listmutex);       9       spin_unlock(&listmutex);
                                            10       synchronize_rcu();
    10       kfree(p);                      11       kfree(p);
    11       return 1;                      12       return 1;
    12     }                                13     }
    13   }                                  14   }
    14   write_unlock(&listmutex);          15   spin_unlock(&listmutex);
    15   return 0;                          16   return 0;
    16 }                                    17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().

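For example, here is a sketch of the RCU delete() above with kfree_rcu()
substituted for synchronize_rcu(), under the assumption that struct el
gains a struct rcu_head field named "rh"; this version never blocks::

    int delete(long key)
    {
            struct el *p;

            spin_lock(&listmutex);
            list_for_each_entry(p, head, lp) {
                    if (p->key == key) {
                            list_del_rcu(&p->list);
                            spin_unlock(&listmutex);
                            kfree_rcu(p, rh);  /* Deferred kfree(). */
                            return 1;
                    }
            }
            spin_unlock(&listmutex);
            return 0;
    }
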
.. _7_whatisRCU:

7.  ANALOGY WITH REFERENCE COUNTING
-----------------------------------

The reader-writer analogy (illustrated by the previous section) is not
always the best way to think about using RCU.  Another helpful analogy
considers RCU an effective reference count on everything which is
protected by RCU.

A reference count typically does not prevent the referenced object's
values from changing, but does prevent changes to type -- particularly the
gross change of type that happens when that object's memory is freed and
re-allocated for some other purpose.  Once a type-safe reference to the
object is obtained, some other mechanism is needed to ensure consistent
access to the data in the object.  This could involve taking a spinlock,
but with RCU the typical approach is to perform reads with SMP-aware
operations such as smp_load_acquire(), to perform updates with atomic
read-modify-write operations, and to provide the necessary ordering.
RCU provides a number of support functions that embed the required
operations and ordering, such as the list_for_each_entry_rcu() macro
used in the previous section.

A more focused view of the reference counting behavior is that,
between rcu_read_lock() and rcu_read_unlock(), any reference taken with
rcu_dereference() on a pointer marked as ``__rcu`` can be treated as
though a reference-count on that object has been temporarily increased.
This prevents the object from changing type.  Exactly what this means
will depend on normal expectations of objects of that type, but it
typically includes that spinlocks can still be safely locked, normal
reference counters can be safely manipulated, and ``__rcu`` pointers
can be safely dereferenced.

Some operations that one might expect to see on an object for
which an RCU reference is held include:

-   Copying out data that is guaranteed to be stable by the object's type.

-   Using kref_get_unless_zero() or similar to get a longer-term
    reference.  This may fail of course.

-   Acquiring a spinlock in the object, and checking if the object still
    is the expected object and if so, manipulating it freely.

The understanding that RCU provides a reference that only prevents a
change of type is particularly visible with objects allocated from a
slab cache marked ``SLAB_TYPESAFE_BY_RCU``.  RCU operations may yield a
reference to an object from such a cache that has been concurrently freed
and the memory reallocated to a completely different object, though of
the same type.  In this case RCU doesn't even protect the identity of the
object from changing, only its type.  So the object found may not be the
one expected, but it will be one where it is safe to take a reference
(and then potentially to acquire a spinlock), allowing subsequent code
to check whether the identity matches expectations.  It is tempting
to simply acquire the spinlock without first taking the reference, but
unfortunately any spinlock in a ``SLAB_TYPESAFE_BY_RCU`` object must be
initialized after each and every call to kmem_cache_alloc(), which renders
reference-free spinlock acquisition completely unsafe.  Therefore, when
using ``SLAB_TYPESAFE_BY_RCU``, make proper use of a reference counter.
(Those willing to use a kmem_cache constructor may also use locking,
including cache-friendly sequence locking.)

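The resulting pattern, combining an RCU-protected lookup with a
reference count and an identity check, might be sketched as follows;
struct obj, obj_lookup(), and obj_put() are hypothetical names invented
for this sketch::

    struct obj {
            long key;
            refcount_t ref;
            spinlock_t lock;
            /* Other data fields. */
    };

    struct obj *obj_get(long key)
    {
            struct obj *p;

            rcu_read_lock();
            p = obj_lookup(key);      /* Hypothetical RCU-based lookup. */
            if (p && !refcount_inc_not_zero(&p->ref))
                    p = NULL;         /* Raced with free: treat as not found. */
            rcu_read_unlock();
            if (p) {
                    spin_lock(&p->lock);  /* Safe now that we hold a reference. */
                    if (p->key != key) {  /* Memory may have been recycled... */
                            spin_unlock(&p->lock);
                            obj_put(p);   /* ...so drop it and report failure. */
                            p = NULL;
                    } else {
                            spin_unlock(&p->lock);
                    }
            }
            return p;  /* Caller eventually invokes obj_put(). */
    }
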
With traditional reference counting -- such as that implemented by the
kref library in Linux -- there is typically code that runs when the last
reference to an object is dropped.  With kref, this is the function
passed to kref_put().  When RCU is being used, such finalization code
must not be run until all ``__rcu`` pointers referencing the object have
been updated, and then a grace period has passed.  Every remaining
globally visible pointer to the object must be considered to be a
potential counted reference, and the finalization code is typically run
using call_rcu() only after all those pointers have been changed.

To see how to choose between these two analogies -- of RCU as a
reader-writer lock and RCU as a reference counting system -- it is useful
to reflect on the scale of the thing being protected.  The reader-writer
lock analogy looks at larger multi-part objects such as a linked list
and shows how RCU can facilitate concurrency while elements are added
to, and removed from, the list.  The reference-count analogy looks at
the individual objects and looks at how they can be accessed safely
within whatever whole they are a part of.

.. _8_whatisRCU:

8.  FULL LIST OF RCU APIs
-------------------------

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal::

    list_entry_rcu
    list_entry_lockless
    list_first_entry_rcu
    list_next_rcu
    list_for_each_entry_rcu
    list_for_each_entry_continue_rcu
    list_for_each_entry_from_rcu
    list_first_or_null_rcu
    list_next_or_null_rcu
    hlist_first_rcu
    hlist_next_rcu
    hlist_pprev_rcu
    hlist_for_each_entry_rcu
    hlist_for_each_entry_rcu_bh
    hlist_for_each_entry_from_rcu
    hlist_for_each_entry_continue_rcu
    hlist_for_each_entry_continue_rcu_bh
    hlist_nulls_first_rcu
    hlist_nulls_for_each_entry_rcu
    hlist_bl_first_rcu
    hlist_bl_for_each_entry_rcu

RCU pointer/list update::

    rcu_assign_pointer
    list_add_rcu
    list_add_tail_rcu
    list_del_rcu
    list_replace_rcu
    hlist_add_behind_rcu
    hlist_add_before_rcu
    hlist_add_head_rcu
    hlist_add_tail_rcu
    hlist_del_rcu
    hlist_del_init_rcu
    hlist_replace_rcu
    list_splice_init_rcu
    list_splice_tail_init_rcu
    hlist_nulls_del_init_rcu
    hlist_nulls_del_rcu
    hlist_nulls_add_head_rcu
    hlist_bl_add_head_rcu
    hlist_bl_del_init_rcu
    hlist_bl_del_rcu
    hlist_bl_set_first_rcu

RCU::

    Critical sections         Grace period                Barrier

    rcu_read_lock             synchronize_net             rcu_barrier
    rcu_read_unlock           synchronize_rcu
    rcu_dereference           synchronize_rcu_expedited
    rcu_read_lock_held        call_rcu
    rcu_dereference_check     kfree_rcu
    rcu_dereference_protected

bh::

    Critical sections         Grace period                Barrier

    rcu_read_lock_bh          call_rcu                    rcu_barrier
    rcu_read_unlock_bh        synchronize_rcu
    [local_bh_disable]        synchronize_rcu_expedited
    [and friends]
    rcu_dereference_bh
    rcu_dereference_bh_check
    rcu_dereference_bh_protected
    rcu_read_lock_bh_held

sched::

    Critical sections         Grace period                Barrier

    rcu_read_lock_sched       call_rcu                    rcu_barrier
    rcu_read_unlock_sched     synchronize_rcu
    [preempt_disable]         synchronize_rcu_expedited
    [and friends]
    rcu_read_lock_sched_notrace
    rcu_read_unlock_sched_notrace
    rcu_dereference_sched
    rcu_dereference_sched_check
    rcu_dereference_sched_protected
    rcu_read_lock_sched_held


RCU-Tasks::

    Critical sections         Grace period                Barrier

    N/A                       call_rcu_tasks              rcu_barrier_tasks
                              synchronize_rcu_tasks


RCU-Tasks-Rude::

    Critical sections         Grace period                Barrier

    N/A                       call_rcu_tasks_rude         rcu_barrier_tasks_rude
                              synchronize_rcu_tasks_rude


RCU-Tasks-Trace::

    Critical sections         Grace period                Barrier

    rcu_read_lock_trace       call_rcu_tasks_trace        rcu_barrier_tasks_trace
    rcu_read_unlock_trace     synchronize_rcu_tasks_trace


SRCU::

    Critical sections         Grace period                Barrier

    srcu_read_lock            call_srcu                   srcu_barrier
    srcu_read_unlock          synchronize_srcu
    srcu_dereference          synchronize_srcu_expedited
    srcu_dereference_check
    srcu_read_lock_held

SRCU: Initialization/cleanup::

    DEFINE_SRCU
    DEFINE_STATIC_SRCU
    init_srcu_struct
    cleanup_srcu_struct

All: lockdep-checked RCU utility APIs::

    RCU_LOCKDEP_WARN
    rcu_sleep_check

All: Unchecked RCU-protected pointer access::

    rcu_dereference_raw

All: Unchecked RCU-protected pointer access with dereferencing prohibited::

    rcu_access_pointer

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.  Will readers need to block?  If so, you need SRCU.

b.  Will readers need to block and are you doing tracing, for
    example, ftrace or BPF?  If so, you need RCU-tasks,
    RCU-tasks-rude, and/or RCU-tasks-trace.

c.  What about the -rt patchset?  If readers would need to block in
    a non-rt kernel, you need SRCU.  If readers would block when
    acquiring spinlocks in a -rt kernel, but not in a non-rt kernel,
    SRCU is not necessary.  (The -rt patchset turns spinlocks into
    sleeplocks, hence this distinction.)

d.  Do you need to treat NMI handlers, hardirq handlers,
    and code segments with preemption disabled (whether
    via preempt_disable(), local_irq_save(), local_bh_disable(),
    or some other mechanism) as if they were explicit RCU readers?
    If so, RCU-sched readers are the only choice that will work
    for you, but since about v4.20 you can use the vanilla RCU
    update primitives.

e.  Do you need RCU grace periods to complete even in the face of
    softirq monopolization of one or more of the CPUs?  For example,
    is your code subject to network-based denial-of-service attacks?
    If so, you should disable softirq across your readers, for
    example, by using rcu_read_lock_bh().  Since about v4.20 you
    can use the vanilla RCU update primitives.

f.  Is your workload too update-intensive for normal use of
    RCU, but inappropriate for other synchronization mechanisms?
    If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
    named SLAB_DESTROY_BY_RCU).  But please be careful!

g.  Do you need read-side critical sections that are respected even
    on CPUs that are deep in the idle loop, during entry to or exit
    from user-mode execution, or on an offlined CPU?  If so, SRCU
    and RCU Tasks Trace are the only choices that will work for you,
    with SRCU being strongly preferred in almost all cases.

h.  Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.

.. _9_whatisRCU:

9.  ANSWERS TO QUICK QUIZZES
----------------------------

Quick Quiz #1:
    Why is this argument naive?  How could a deadlock
    occur when using this algorithm in a real-world Linux
    kernel?  [Referring to the lock-based "toy" RCU
    algorithm.]

Answer:
    Consider the following sequence of events:

    1.  CPU 0 acquires some unrelated lock, call it
        "problematic_lock", disabling irq via
        spin_lock_irqsave().

    2.  CPU 1 enters synchronize_rcu(), write-acquiring
        rcu_gp_mutex.

    3.  CPU 0 enters rcu_read_lock(), but must wait
        because CPU 1 holds rcu_gp_mutex.

    4.  CPU 1 is interrupted, and the irq handler
        attempts to acquire problematic_lock.

    The system is now deadlocked.

    One way to avoid this deadlock is to use an approach like
    that of CONFIG_PREEMPT_RT, where all normal spinlocks
    become blocking locks, and all irq handlers execute in
    the context of special tasks.  In this case, in step 4
    above, the irq handler would block, allowing CPU 1 to
    release rcu_gp_mutex, avoiding the deadlock.

    Even in the absence of deadlock, this RCU implementation
    allows latency to "bleed" from readers to other
    readers through synchronize_rcu().  To see this,
    consider task A in an RCU read-side critical section
    (thus read-holding rcu_gp_mutex), task B blocked
    attempting to write-acquire rcu_gp_mutex, and
    task C blocked in rcu_read_lock() attempting to
    read-acquire rcu_gp_mutex.  Task A's RCU read-side
    latency is holding up task C, albeit indirectly via
    task B.

    Realtime RCU implementations therefore use a counter-based
    approach where tasks in RCU read-side critical sections
    cannot be blocked by tasks executing synchronize_rcu().

:ref:`Back to Quick Quiz #1 <quiz_1>`

Quick Quiz #2:
    Give an example where Classic RCU's read-side
    overhead is **negative**.

Answer:
    Imagine a single-CPU system with a non-CONFIG_PREEMPTION
    kernel where a routing table is used by process-context
    code, but can be updated by irq-context code (for example,
    by an "ICMP REDIRECT" packet).  The usual way of handling
    this would be to have the process-context code disable
    interrupts while searching the routing table.  Use of
    RCU allows such interrupt-disabling to be dispensed with.
    Thus, without RCU, you pay the cost of disabling interrupts,
    and with RCU you don't.

    One can argue that the overhead of RCU in this
    case is negative with respect to the single-CPU
    interrupt-disabling approach.  Others might argue that
    the overhead of RCU is merely zero, and that replacing
    the positive overhead of the interrupt-disabling scheme
    with the zero-overhead RCU scheme does not constitute
    negative overhead.

    In real life, of course, things are more complex.
    But even the theoretical possibility of negative overhead for
    a synchronization primitive is a bit unexpected.  ;-)

:ref:`Back to Quick Quiz #2 <quiz_2>`

Quick Quiz #3:
    If it is illegal to block in an RCU read-side
    critical section, what the heck do you do in
    CONFIG_PREEMPT_RT, where normal spinlocks can block???

Answer:
    Just as CONFIG_PREEMPT_RT permits preemption of spinlock
    critical sections, it permits preemption of RCU
    read-side critical sections.  It also permits
    spinlocks to block while in RCU read-side critical
    sections.

    Why the apparent inconsistency?  Because it is
    possible to use priority boosting to keep the RCU
    grace periods short if need be (for example, if running
    short of memory).  In contrast, if blocking waiting
    for (say) network reception, there is no way to know
    what should be boosted.  Especially given that the
    process we need to boost might well be a human being
    who just went out for a pizza or something.  And although
    a computer-operated cattle prod might arouse serious
    interest, it might also provoke serious objections.
    Besides, how does the computer know what pizza parlor
    the human being went to???

:ref:`Back to Quick Quiz #3 <quiz_3>`

ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.