======================================================
A Tour Through TREE_RCU's Grace-Period Memory Ordering
======================================================

August 8, 2017

This article was contributed by Paul E. McKenney

Introduction
============

This document gives a rough visual overview of how Tree RCU's
grace-period memory ordering guarantee is provided.

What Is Tree RCU's Grace Period Memory Ordering Guarantee?
===========================================================

RCU grace periods provide extremely strong memory-ordering guarantees
for non-idle non-offline code.
Any code that happens after the end of a given RCU grace period is
guaranteed to see the effects of all accesses prior to the beginning of
that grace period that are within RCU read-side critical sections.
Similarly, any code that happens before the beginning of a given RCU
grace period is guaranteed not to see the effects of any accesses
following the end of that grace period that are within RCU read-side
critical sections.

Note well that RCU-sched read-side critical sections include any region
of code for which preemption is disabled.
Given that each individual machine instruction can be thought of as
an extremely small region of preemption-disabled code, one can think of
``synchronize_rcu()`` as ``smp_mb()`` on steroids.

RCU updaters use this guarantee by splitting their updates into
two phases, one of which is executed before the grace period and
the other of which is executed after the grace period.
In the most common use case, phase one removes an element from
a linked RCU-protected data structure, and phase two frees that element.
For this to work, any readers that have witnessed state prior to the
phase-one update (in the common case, removal) must not witness state
following the phase-two update (in the common case, freeing).
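As a concrete illustration of these two phases, consider the following
sketch of a hypothetical updater, in which phase one unlinks an element
from an RCU-protected list and phase two frees it.  The ``struct foo``,
``foo_lock``, and ``remove_foo()`` names are illustrative rather than
taken from mainline::

   struct foo {
           struct list_head list;
           struct rcu_head rcu;    /* For the call_rcu() variant shown later. */
           int data;
   };

   LIST_HEAD(foo_list);
   DEFINE_SPINLOCK(foo_lock);

   void remove_foo(struct foo *p)
   {
           spin_lock(&foo_lock);
           list_del_rcu(&p->list);         /* Phase one: unlink the element. */
           spin_unlock(&foo_lock);
           synchronize_rcu();              /* Wait for pre-existing readers. */
           kfree(p);                       /* Phase two: free the element. */
   }

Any reader that might still hold a reference to the element must have
entered its RCU read-side critical section before the grace period
began, so the guarantee above ensures that the ``kfree()`` happens after
that reader's critical section has completed.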
The RCU implementation provides this guarantee using a network
of lock-based critical sections, memory barriers, and per-CPU
processing, as is described in the following sections.

Tree RCU Grace Period Memory Ordering Building Blocks
=====================================================

The workhorse for RCU's grace-period memory ordering is the
critical section for the ``rcu_node`` structure's
``->lock``. These critical sections use helper functions for lock
acquisition, including ``raw_spin_lock_rcu_node()``,
``raw_spin_lock_irq_rcu_node()``, and ``raw_spin_lock_irqsave_rcu_node()``.
Their lock-release counterparts are ``raw_spin_unlock_rcu_node()``,
``raw_spin_unlock_irq_rcu_node()``, and
``raw_spin_unlock_irqrestore_rcu_node()``, respectively.
For completeness, a ``raw_spin_trylock_rcu_node()`` is also provided.
The key point is that the lock-acquisition functions, including
``raw_spin_trylock_rcu_node()``, all invoke ``smp_mb__after_unlock_lock()``
immediately after successful acquisition of the lock.

Therefore, for any given ``rcu_node`` structure, any access
happening before one of the above lock-release functions will be seen
by all CPUs as happening before any access happening after a later
one of the above lock-acquisition functions.
Furthermore, any access happening before one of the
above lock-release functions on any given CPU will be seen by all
CPUs as happening before any access happening after a later one
of the above lock-acquisition functions executing on that same CPU,
even if the lock-release and lock-acquisition functions are operating
on different ``rcu_node`` structures.
Tree RCU uses these two ordering guarantees to form an ordering
network among all CPUs that were in any way involved in the grace
period, including any CPUs that came online or went offline during
the grace period in question.

The following litmus test exhibits the ordering effects of these
lock-acquisition and lock-release functions::

   int x, y, z;

   void task0(void)
   {
           raw_spin_lock_rcu_node(rnp);
           WRITE_ONCE(x, 1);
           r1 = READ_ONCE(y);
           raw_spin_unlock_rcu_node(rnp);
   }

   void task1(void)
   {
           raw_spin_lock_rcu_node(rnp);
           WRITE_ONCE(y, 1);
           r2 = READ_ONCE(z);
           raw_spin_unlock_rcu_node(rnp);
   }

   void task2(void)
   {
           WRITE_ONCE(z, 1);
           smp_mb();
           r3 = READ_ONCE(x);
   }

   WARN_ON(r1 == 0 && r2 == 0 && r3 == 0);

The ``WARN_ON()`` is evaluated at "the end of time",
after all changes have propagated throughout the system.
Without the ``smp_mb__after_unlock_lock()`` provided by the
acquisition functions, this ``WARN_ON()`` could trigger, for example
on PowerPC.
The ``smp_mb__after_unlock_lock()`` invocations prevent this
``WARN_ON()`` from triggering.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But the chain of rcu_node-structure lock acquisitions guarantees      |
| that new readers will see all of the updater's pre-grace-period       |
| accesses and also guarantees that the updater's post-grace-period     |
| accesses will see all of the old reader's accesses. So why do we      |
| need all of those calls to smp_mb__after_unlock_lock()?               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Because we must provide ordering for RCU's polling grace-period       |
| primitives, for example, get_state_synchronize_rcu() and              |
| poll_state_synchronize_rcu(). Consider this code::                    |
|                                                                       |
|    CPU 0                                     CPU 1                    |
|    ----                                      ----                     |
|    WRITE_ONCE(X, 1)                          WRITE_ONCE(Y, 1)         |
|    g = get_state_synchronize_rcu()           smp_mb()                 |
|    while (!poll_state_synchronize_rcu(g))    r1 = READ_ONCE(X)        |
|           continue;                                                   |
|    r0 = READ_ONCE(Y)                                                  |
|                                                                       |
| RCU guarantees that the outcome r0 == 0 && r1 == 0 will not           |
| happen, even if CPU 1 is in an RCU extended quiescent state           |
| (idle or offline) and thus won't interact directly with the RCU       |
| core processing at all.                                               |
+-----------------------------------------------------------------------+

This approach must be extended to include idle CPUs, which need
RCU's grace-period memory ordering guarantee to extend to any
RCU read-side critical sections preceding and following the current
idle sojourn.
This case is handled by calls to the strongly ordered
``atomic_add_return()`` read-modify-write atomic operation that
is invoked within ``rcu_dynticks_eqs_enter()`` at idle-entry
time and within ``rcu_dynticks_eqs_exit()`` at idle-exit time.
The grace-period kthread invokes ``rcu_dynticks_snap()`` and
``rcu_dynticks_in_eqs_since()`` (both of which invoke
an ``atomic_add_return()`` of zero) to detect idle CPUs.
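The following greatly simplified sketch illustrates the idea, using a
hypothetical per-CPU counter standing in for the actual ``rcu_dynticks``
state: each idle entry and exit advances the counter with a fully
ordered ``atomic_add_return()``, and the grace-period kthread samples it
with an equally ordered ``atomic_add_return()`` of zero::

   /* Hypothetical per-CPU counter; the mainline code keeps more state. */
   DEFINE_PER_CPU(atomic_t, eqs_ctr);

   void eqs_transition_sketch(void)        /* Idle entry or idle exit. */
   {
           /* Fully ordered, so this CPU's prior accesses are seen by all
            * CPUs as preceding the counter update, and its later accesses
            * as following it. */
           atomic_add_return(1, this_cpu_ptr(&eqs_ctr));
   }

   int eqs_snap_sketch(int cpu)            /* Grace-period kthread. */
   {
           /* Fully ordered sample of the remote CPU's counter. */
           return atomic_add_return(0, per_cpu_ptr(&eqs_ctr, cpu));
   }

   bool eqs_changed_since_sketch(int cpu, int snap)
   {
           /* A changed counter means that the CPU has passed through an
            * idle transition (a quiescent state) since the snapshot. */
           return snap != eqs_snap_sketch(cpu);
   }

Because both the updates and the samples are fully ordered, any RCU
read-side critical section preceding the idle sojourn is seen by all
CPUs as preceding whatever the grace-period kthread does after taking
its sample, and similarly for critical sections following the idle
sojourn.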
+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But what about CPUs that remain offline for the entire grace period?  |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Such CPUs will be offline at the beginning of the grace period, so    |
| the grace period won't expect quiescent states from them. Races       |
| between grace-period start and CPU-hotplug operations are mediated    |
| by the CPU's leaf ``rcu_node`` structure's ``->lock`` as described    |
| above.                                                                |
+-----------------------------------------------------------------------+

The approach must be extended to handle one final case, that of waking a
task blocked in ``synchronize_rcu()``. This task might be affinitied to
a CPU that is not yet aware that the grace period has ended, and thus
might not yet be subject to the grace period's memory ordering.
Therefore, there is an ``smp_mb()`` after the return from
``wait_for_completion()`` in the ``synchronize_rcu()`` code path.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| What? Where??? I don't see any ``smp_mb()`` after the return from     |
| ``wait_for_completion()``!!!                                          |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| That would be because I spotted the need for that ``smp_mb()`` during |
| the creation of this documentation, and it is therefore unlikely to   |
| hit mainline before v4.14. Kudos to Lance Roy, Will Deacon, Peter     |
| Zijlstra, and Jonathan Cameron for asking questions that sensitized   |
| me to the rather elaborate sequence of events that demonstrates the   |
| need for this memory barrier.                                         |
+-----------------------------------------------------------------------+
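Schematically, and keeping in mind that this barrier was still working
its way toward mainline when this document was written, the tail of the
``synchronize_rcu()`` path can be thought of as follows; the
``wait_for_gp_sketch()`` name and structure are illustrative only, not
the mainline code::

   /* Illustrative only: wait for the grace period, then order this
    * CPU's later accesses after it, even if this CPU has not yet
    * otherwise interacted with RCU's grace-period machinery. */
   void wait_for_gp_sketch(struct completion *gp_done)
   {
           wait_for_completion(gp_done);   /* Woken when the GP ends. */
           smp_mb();                       /* Order post-GP accesses. */
   }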
Tree RCU's grace-period memory-ordering guarantees rely most heavily on
the ``rcu_node`` structure's ``->lock`` field, so much so that it is
necessary to abbreviate this pattern in the diagrams in the next
section. For example, consider the ``rcu_prepare_for_idle()`` function
shown below, which is one of several functions that enforce ordering of
newly arrived RCU callbacks against future grace periods:

::

    1 static void rcu_prepare_for_idle(void)
    2 {
    3   bool needwake;
    4   struct rcu_data *rdp;
    5   struct rcu_dynticks *rdtp = this_cpu_ptr(&rcu_dynticks);
    6   struct rcu_node *rnp;
    7   struct rcu_state *rsp;
    8   int tne;
    9
   10   if (IS_ENABLED(CONFIG_RCU_NOCB_CPU_ALL) ||
   11       rcu_is_nocb_cpu(smp_processor_id()))
   12     return;
   13   tne = READ_ONCE(tick_nohz_active);
   14   if (tne != rdtp->tick_nohz_enabled_snap) {
   15     if (rcu_cpu_has_callbacks(NULL))
   16       invoke_rcu_core();
   17     rdtp->tick_nohz_enabled_snap = tne;
   18     return;
   19   }
   20   if (!tne)
   21     return;
   22   if (rdtp->all_lazy &&
   23       rdtp->nonlazy_posted != rdtp->nonlazy_posted_snap) {
   24     rdtp->all_lazy = false;
   25     rdtp->nonlazy_posted_snap = rdtp->nonlazy_posted;
   26     invoke_rcu_core();
   27     return;
   28   }
   29   if (rdtp->last_accelerate == jiffies)
   30     return;
   31   rdtp->last_accelerate = jiffies;
   32   for_each_rcu_flavor(rsp) {
   33     rdp = this_cpu_ptr(rsp->rda);
   34     if (rcu_segcblist_pend_cbs(&rdp->cblist))
   35       continue;
   36     rnp = rdp->mynode;
   37     raw_spin_lock_rcu_node(rnp);
   38     needwake = rcu_accelerate_cbs(rsp, rnp, rdp);
   39     raw_spin_unlock_rcu_node(rnp);
   40     if (needwake)
   41       rcu_gp_kthread_wake(rsp);
   42   }
   43 }

But the only part of ``rcu_prepare_for_idle()`` that really matters for
this discussion is lines 37–39. We will therefore abbreviate this
function as follows:

.. kernel-figure:: rcu_node-lock.svg

The box represents the ``rcu_node`` structure's ``->lock`` critical
section, with the double line on top representing the additional
``smp_mb__after_unlock_lock()``.

Tree RCU Grace Period Memory Ordering Components
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Tree RCU's grace-period memory-ordering guarantee is provided by a
number of RCU components:

#. `Callback Registry`_
#. `Grace-Period Initialization`_
#. `Self-Reported Quiescent States`_
#. `Dynamic Tick Interface`_
#. `CPU-Hotplug Interface`_
#. `Forcing Quiescent States`_
#. `Grace-Period Cleanup`_
#. `Callback Invocation`_

Each of the following sections looks at the corresponding component in
detail.

Callback Registry
^^^^^^^^^^^^^^^^^

If RCU's grace-period guarantee is to mean anything at all, any access
that happens before a given invocation of ``call_rcu()`` must also
happen before the corresponding grace period. The implementation of this
portion of RCU's grace period guarantee is shown in the following
figure:

.. kernel-figure:: TreeRCU-callback-registry.svg

Because ``call_rcu()`` normally acts only on CPU-local state, it
provides no ordering guarantees, either for itself or for phase one of
the update (which again will usually be removal of an element from an
RCU-protected data structure). It simply enqueues the ``rcu_head``
structure on a per-CPU list, which cannot become associated with a grace
period until a later call to ``rcu_accelerate_cbs()``, as shown in the
diagram above.
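Conceptually, the registration step amounts to nothing more than the
following CPU-local enqueue.  This is an illustrative sketch in which
``enqueue_on_this_cpu()`` is a hypothetical stand-in for the real
segmented-callback-list manipulation, and the real ``call_rcu()``
contains considerably more bookkeeping::

   /* Illustrative sketch only: callback registration is CPU-local. */
   void call_rcu_sketch(struct rcu_head *head, rcu_callback_t func)
   {
           unsigned long flags;

           head->func = func;
           head->next = NULL;
           local_irq_save(flags);          /* Keep the enqueue CPU-local. */
           enqueue_on_this_cpu(head);      /* Hypothetical helper: append to
                                            * this CPU's callback list; no
                                            * grace period is assigned yet. */
           local_irq_restore(flags);
   }

Because nothing here orders anything, the association with a specific
grace period, and therefore the ordering, must be supplied later, under
the CPU's leaf ``rcu_node`` structure's ``->lock``.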
One set of code paths shown on the left invokes ``rcu_accelerate_cbs()``
via ``note_gp_changes()``, either directly from ``call_rcu()`` (if the
current CPU is inundated with queued ``rcu_head`` structures) or more
likely from an ``RCU_SOFTIRQ`` handler. Another code path in the middle
is taken only in kernels built with ``CONFIG_RCU_FAST_NO_HZ=y``, which
invokes ``rcu_accelerate_cbs()`` via ``rcu_prepare_for_idle()``. The
final code path on the right is taken only in kernels built with
``CONFIG_HOTPLUG_CPU=y``, which invokes ``rcu_accelerate_cbs()`` via
``rcu_advance_cbs()``, ``rcu_migrate_callbacks()``,
``rcutree_migrate_callbacks()``, and ``takedown_cpu()``, which in turn
is invoked on a surviving CPU after the outgoing CPU has been completely
offlined.

There are a few other code paths within grace-period processing that
opportunistically invoke ``rcu_accelerate_cbs()``. However, either way,
all of the CPU's recently queued ``rcu_head`` structures are associated
with a future grace-period number under the protection of the CPU's leaf
``rcu_node`` structure's ``->lock``. In all cases, there is full
ordering against any prior critical section for that same ``rcu_node``
structure's ``->lock``, and also full ordering against any of the
current task's or CPU's prior critical sections for any ``rcu_node``
structure's ``->lock``.

The next section will show how this ordering ensures that any accesses
prior to the ``call_rcu()`` (particularly including phase one of the
update) happen before the start of the corresponding grace period.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But what about ``synchronize_rcu()``?                                 |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| The ``synchronize_rcu()`` passes ``call_rcu()`` to ``wait_rcu_gp()``, |
| which invokes it. So either way, it eventually comes down to          |
| ``call_rcu()``.                                                       |
+-----------------------------------------------------------------------+

Grace-Period Initialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Grace-period initialization is carried out by the grace-period kernel
thread, which makes several passes over the ``rcu_node`` tree within the
``rcu_gp_init()`` function. This means that showing the full flow of
ordering through the grace-period computation will require duplicating
this tree. If you find this confusing, please note that the state of the
``rcu_node`` changes over time, just like Heraclitus's river. However,
to keep the ``rcu_node`` river tractable, the grace-period kernel
thread's traversals are presented in multiple parts, starting in this
section with the various phases of grace-period initialization.

The first ordering-related grace-period initialization action is to
advance the ``rcu_state`` structure's ``->gp_seq`` grace-period-number
counter, as shown below:

.. kernel-figure:: TreeRCU-gp-init-1.svg

The actual increment is carried out using ``smp_store_release()``, which
helps reject false-positive RCU CPU stall detection. Note that only the
root ``rcu_node`` structure is touched.
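A minimal sketch of this step, ignoring the additional bookkeeping and
sanity checks carried out by the mainline helpers, might look like
this::

   /* Illustrative only: advance the grace-period sequence number with
    * release semantics, so that anyone who reads the new value also
    * sees the grace-period kthread's prior accesses. */
   void gp_seq_advance_sketch(unsigned long *gp_seq)
   {
           smp_store_release(gp_seq, READ_ONCE(*gp_seq) + 1);
   }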
The first pass through the ``rcu_node`` tree updates bitmasks based on
CPUs having come online or gone offline since the start of the previous
grace period. In the common case where the number of online CPUs for
this ``rcu_node`` structure has not transitioned to or from zero, this
pass will scan only the leaf ``rcu_node`` structures. However, if the
number of online CPUs for a given leaf ``rcu_node`` structure has
transitioned from zero, ``rcu_init_new_rnp()`` will be invoked for the
first incoming CPU. Similarly, if the number of online CPUs for a given
leaf ``rcu_node`` structure has transitioned to zero,
``rcu_cleanup_dead_rnp()`` will be invoked for the last outgoing CPU.
The diagram below shows the path of ordering if the leftmost
``rcu_node`` structure onlines its first CPU and if the next
``rcu_node`` structure has no online CPUs (or, alternatively, if the
leftmost ``rcu_node`` structure offlines its last CPU and if the next
``rcu_node`` structure has no online CPUs).

.. kernel-figure:: TreeRCU-gp-init-2.svg

The final ``rcu_gp_init()`` pass through the ``rcu_node`` tree traverses
breadth-first, setting each ``rcu_node`` structure's ``->gp_seq`` field
to the newly advanced value from the ``rcu_state`` structure, as shown
in the following diagram.

.. kernel-figure:: TreeRCU-gp-init-3.svg

This change will also cause each CPU's next call to
``__note_gp_changes()`` to notice that a new grace period has started,
as described in the next section. But because the grace-period kthread
started the grace period at the root (with the advancing of the
``rcu_state`` structure's ``->gp_seq`` field) before setting each leaf
``rcu_node`` structure's ``->gp_seq`` field, each CPU's observation of
the start of the grace period will happen after the actual start of the
grace period.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But what about the CPU that started the grace period? Why wouldn't it |
| see the start of the grace period right when it started that grace    |
| period?                                                               |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| In some deep philosophical and overly anthropomorphized sense, yes,   |
| the CPU starting the grace period is immediately aware of having done |
| so. However, if we instead assume that RCU is not self-aware, then    |
| even the CPU starting the grace period does not really become aware   |
| of the start of this grace period until its first call to             |
| ``__note_gp_changes()``. On the other hand, this CPU potentially gets |
| early notification because it invokes ``__note_gp_changes()`` during  |
| its last ``rcu_gp_init()`` pass through its leaf ``rcu_node``         |
| structure.                                                            |
+-----------------------------------------------------------------------+

Self-Reported Quiescent States
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When all entities that might block the grace period have reported
quiescent states (or as described in a later section, had quiescent
states reported on their behalf), the grace period can end. Online
non-idle CPUs report their own quiescent states, as shown in the
following diagram:

.. kernel-figure:: TreeRCU-qs.svg
This is for the last CPU to report a quiescent state, which signals the
end of the grace period. Earlier quiescent states would push up the
``rcu_node`` tree only until they encountered an ``rcu_node`` structure
that is waiting for additional quiescent states. However, ordering is
nevertheless preserved because some later quiescent state will acquire
that ``rcu_node`` structure's ``->lock``.

Any number of events can lead up to a CPU invoking ``note_gp_changes()``
(or alternatively, directly invoking ``__note_gp_changes()``), at which
point that CPU will notice the start of a new grace period while holding
its leaf ``rcu_node`` lock. Therefore, all execution shown in this
diagram happens after the start of the grace period. In addition, this
CPU will consider any RCU read-side critical section that started before
the invocation of ``__note_gp_changes()`` to have started before the
grace period, and thus a critical section that the grace period must
wait on.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But an RCU read-side critical section might have started after the    |
| beginning of the grace period (the advancing of ``->gp_seq`` from     |
| earlier), so why should the grace period wait on such a critical      |
| section?                                                              |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| It is indeed not necessary for the grace period to wait on such a     |
| critical section. However, it is permissible to wait on it. And it is |
| furthermore important to wait on it, as this lazy approach is far     |
| more scalable than a “big bang” all-at-once grace-period start could  |
| possibly be.                                                          |
+-----------------------------------------------------------------------+

If the CPU does a context switch, a quiescent state will be noted by
``rcu_note_context_switch()`` on the left. On the other hand, if the CPU
takes a scheduler-clock interrupt while executing in usermode, a
quiescent state will be noted by ``rcu_sched_clock_irq()`` on the right.
Either way, the passage through a quiescent state will be noted in a
per-CPU variable.

The next time an ``RCU_SOFTIRQ`` handler executes on this CPU (for
example, after the next scheduler-clock interrupt), ``rcu_core()`` will
invoke ``rcu_check_quiescent_state()``, which will notice the recorded
quiescent state, and invoke ``rcu_report_qs_rdp()``. If
``rcu_report_qs_rdp()`` verifies that the quiescent state really does
apply to the current grace period, it invokes ``rcu_report_qs_rnp()``,
which traverses up the ``rcu_node`` tree as shown at the bottom of the
diagram, clearing bits from each ``rcu_node`` structure's ``->qsmask``
field, and propagating up the tree when the result is zero.

Note that traversal passes upwards out of a given ``rcu_node`` structure
only if the current CPU is reporting the last quiescent state for the
subtree headed by that ``rcu_node`` structure. A key point is that if a
CPU's traversal stops at a given ``rcu_node`` structure, then there will
be a later traversal by another CPU (or perhaps the same one) that
proceeds upwards from that point, and the ``rcu_node`` ``->lock``
guarantees that the first CPU's quiescent state happens before the
remainder of the second CPU's traversal. Applying this line of thought
repeatedly shows that all CPUs' quiescent states happen before the last
CPU traverses through the root ``rcu_node`` structure, the “last CPU”
being the one that clears the last bit in the root ``rcu_node``
structure's ``->qsmask`` field.
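The core of that upward traversal can be sketched as the following loop,
which is a simplification of the idea behind ``rcu_report_qs_rnp()``
rather than the mainline code; the grace-period-sequence checks,
wakeups, and error handling are all omitted::

   /* Illustrative sketch of quiescent-state propagation up the tree. */
   void report_qs_sketch(struct rcu_node *rnp, unsigned long mask)
   {
           for (;;) {
                   raw_spin_lock_irq_rcu_node(rnp);
                   rnp->qsmask &= ~mask;   /* Record the quiescent state(s). */
                   if (rnp->qsmask != 0 || rnp->parent == NULL) {
                           /* Either this subtree still needs more quiescent
                            * states, or this is the root, in which case the
                            * mainline code would now wake the grace-period
                            * kthread so that it can end the grace period. */
                           raw_spin_unlock_irq_rcu_node(rnp);
                           return;
                   }
                   mask = rnp->grpmask;    /* This node's bit in its parent. */
                   raw_spin_unlock_irq_rcu_node(rnp);
                   rnp = rnp->parent;
           }
   }

Because each step of the traversal holds some ``rcu_node`` structure's
``->lock``, the lock helpers described earlier provide the ordering that
lets a later traversal by another CPU pick up where this one stopped.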
Dynamic Tick Interface
^^^^^^^^^^^^^^^^^^^^^^

Due to energy-efficiency considerations, RCU is forbidden from
disturbing idle CPUs. CPUs are therefore required to notify RCU when
entering or leaving idle state, which they do via fully ordered
value-returning atomic operations on a per-CPU variable. The ordering
effects are as shown below:

.. kernel-figure:: TreeRCU-dyntick.svg

The RCU grace-period kernel thread samples the per-CPU idleness variable
while holding the corresponding CPU's leaf ``rcu_node`` structure's
``->lock``. This means that any RCU read-side critical sections that
precede the idle period (the oval near the top of the diagram above)
will happen before the end of the current grace period. Similarly, the
beginning of the current grace period will happen before any RCU
read-side critical sections that follow the idle period (the oval near
the bottom of the diagram above).

Plumbing this into the full grace-period execution is described
`below <Forcing Quiescent States_>`__.

CPU-Hotplug Interface
^^^^^^^^^^^^^^^^^^^^^

RCU is also forbidden from disturbing offline CPUs, which might well be
powered off and removed from the system completely. CPUs are therefore
required to notify RCU of their comings and goings as part of the
corresponding CPU hotplug operations. The ordering effects are shown
below:

.. kernel-figure:: TreeRCU-hotplug.svg

Because CPU hotplug operations are much less frequent than idle
transitions, they are heavier weight, and thus acquire the CPU's leaf
``rcu_node`` structure's ``->lock`` and update this structure's
``->qsmaskinitnext``. The RCU grace-period kernel thread samples this
mask to detect CPUs having gone offline since the beginning of this
grace period.

Plumbing this into the full grace-period execution is described
`below <Forcing Quiescent States_>`__.
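A sketch of this hotplug notification, simplified relative to the
mainline online and offline paths, appears below.  The point is simply
that the mask update takes place inside the leaf ``rcu_node``
structure's ``->lock`` critical section, so the grace-period kthread's
later acquisition of that same lock is guaranteed to observe it::

   /* Illustrative only: record a CPU's arrival or departure in its
    * leaf rcu_node structure's ->qsmaskinitnext field. */
   void note_cpu_hotplug_sketch(struct rcu_node *rnp, unsigned long cpumask,
                                bool online)
   {
           unsigned long flags;

           raw_spin_lock_irqsave_rcu_node(rnp, flags);
           if (online)
                   rnp->qsmaskinitnext |= cpumask;  /* CPU coming online. */
           else
                   rnp->qsmaskinitnext &= ~cpumask; /* CPU going offline. */
           raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
   }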
Forcing Quiescent States
^^^^^^^^^^^^^^^^^^^^^^^^

As noted above, idle and offline CPUs cannot report their own quiescent
states, and therefore the grace-period kernel thread must do the
reporting on their behalf. This process is called “forcing quiescent
states”, it is repeated every few jiffies, and its ordering effects are
shown below:

.. kernel-figure:: TreeRCU-gp-fqs.svg

Each pass of quiescent state forcing is guaranteed to traverse the leaf
``rcu_node`` structures, and if there are no new quiescent states due to
recently idled and/or offlined CPUs, then only the leaves are traversed.
However, if there is a newly offlined CPU as illustrated on the left or
a newly idled CPU as illustrated on the right, the corresponding
quiescent state will be driven up towards the root. As with
self-reported quiescent states, the upwards driving stops once it
reaches an ``rcu_node`` structure that has quiescent states outstanding
from other CPUs.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| The leftmost drive to root stopped before it reached the root         |
| ``rcu_node`` structure, which means that there are still CPUs         |
| subordinate to that structure on which the current grace period is    |
| waiting. Given that, how is it possible that the rightmost drive to   |
| root ended the grace period?                                          |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| Good analysis! It is in fact impossible in the absence of bugs in     |
| RCU. But this diagram is complex enough as it is, so simplicity       |
| overrode accuracy. You can think of it as poetic license, or you can  |
| think of it as misdirection that is resolved in the                   |
| `stitched-together diagram <Putting It All Together_>`__.             |
+-----------------------------------------------------------------------+

Grace-Period Cleanup
^^^^^^^^^^^^^^^^^^^^

Grace-period cleanup first scans the ``rcu_node`` tree breadth-first
advancing all the ``->gp_seq`` fields, then it advances the
``rcu_state`` structure's ``->gp_seq`` field. The ordering effects are
shown below:

.. kernel-figure:: TreeRCU-gp-cleanup.svg

As indicated by the oval at the bottom of the diagram, once grace-period
cleanup is complete, the next grace period can begin.

+-----------------------------------------------------------------------+
| **Quick Quiz**:                                                       |
+-----------------------------------------------------------------------+
| But when precisely does the grace period end?                         |
+-----------------------------------------------------------------------+
| **Answer**:                                                           |
+-----------------------------------------------------------------------+
| There is no useful single point at which the grace period can be said |
| to end. The earliest reasonable candidate is as soon as the last CPU  |
| has reported its quiescent state, but it may be some milliseconds     |
| before RCU becomes aware of this. The latest reasonable candidate is  |
| once the ``rcu_state`` structure's ``->gp_seq`` field has been        |
| updated, but it is quite possible that some CPUs have already         |
| completed phase two of their updates by that time. In short, if you   |
| are going to work with RCU, you need to learn to embrace uncertainty. |
+-----------------------------------------------------------------------+

Callback Invocation
^^^^^^^^^^^^^^^^^^^

Once a given CPU's leaf ``rcu_node`` structure's ``->gp_seq`` field has
been updated, that CPU can begin invoking its RCU callbacks that were
waiting for this grace period to end. These callbacks are identified by
``rcu_advance_cbs()``, which is usually invoked by
``__note_gp_changes()``. As shown in the diagram below, this invocation
can be triggered by the scheduling-clock interrupt
(``rcu_sched_clock_irq()`` on the left) or by idle entry
(``rcu_cleanup_after_idle()`` on the right, but only for kernels built
with ``CONFIG_RCU_FAST_NO_HZ=y``). Either way, ``RCU_SOFTIRQ`` is
raised, which results in ``rcu_do_batch()`` invoking the callbacks,
which in turn allows those callbacks to carry out (either directly or
indirectly via wakeup) the needed phase-two processing for each update.

.. kernel-figure:: TreeRCU-callback-invocation.svg
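As a hypothetical example of such phase-two processing, the earlier
``remove_foo()`` sketch can be recast to use ``call_rcu()``, with the
callback using ``container_of()`` to recover the enclosing element and
then freeing it::

   /* Hypothetical phase-two callback, invoked only after the grace period. */
   void free_foo_rcu(struct rcu_head *head)
   {
           struct foo *p = container_of(head, struct foo, rcu);

           kfree(p);                        /* Phase two: free the element. */
   }

   /* Asynchronous variant of the earlier remove_foo() sketch. */
   void remove_foo_async(struct foo *p)
   {
           spin_lock(&foo_lock);
           list_del_rcu(&p->list);          /* Phase one: unlink the element. */
           spin_unlock(&foo_lock);
           call_rcu(&p->rcu, free_foo_rcu); /* Phase two runs after a GP. */
   }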
Please note that callback invocation can also be prompted by any number
of corner-case code paths, for example, when a CPU notes that it has
excessive numbers of callbacks queued. In all cases, the CPU acquires
its leaf ``rcu_node`` structure's ``->lock`` before invoking callbacks,
which preserves the required ordering against the newly completed grace
period.

However, if the callback function communicates to other CPUs, for
example, doing a wakeup, then it is that function's responsibility to
maintain ordering. For example, if the callback function wakes up a task
that runs on some other CPU, proper ordering must be in place in both
the callback function and the task being awakened. To see why this is
important, consider the top half of the `Grace-Period Cleanup`_ diagram.
The callback might be running on a CPU corresponding to the leftmost
leaf ``rcu_node`` structure, and awaken a task that is to run on a CPU
corresponding to the rightmost leaf ``rcu_node`` structure, and the
grace-period kernel thread might not yet have reached the rightmost
leaf. In this case, the grace period's memory ordering might not yet
have reached that CPU, so again the callback function and the awakened
task must supply proper ordering.

Putting It All Together
~~~~~~~~~~~~~~~~~~~~~~~

A stitched-together diagram is here:

.. kernel-figure:: TreeRCU-gp.svg

Legal Statement
~~~~~~~~~~~~~~~

This work represents the view of the author and does not necessarily
represent the view of IBM.

Linux is a registered trademark of Linus Torvalds.

Other company, product, and service names may be trademarks or service
marks of others.