			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete. This document is
meant as a guide to using the various memory barriers provided by Linux, but
in case of any doubt (and there are many) please ask.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.
     - Acquires vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations. In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained. Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2. At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important. For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D). To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
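
One conventional way to prevent such misordering is an explicit barrier
between the two accesses (a minimal sketch only; the kernel's I/O accessor
routines, which are covered later in this document, normally provide the
necessary ordering themselves):

	*A = 5;
	mb();		/* make sure the address is set before the
			 * data port is read */
	x = *D;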


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself. This means that for:

	Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order. On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha. The READ_ONCE()
     is required to prevent compiler mischief. Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends().
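
     For example (a minimal sketch; 'gp', 'struct foo' and its 'data' field
     are hypothetical names), the usual reader-side form of this guarantee
     looks like:

	struct foo *q;

	rcu_read_lock();
	q = rcu_dereference(gp);	/* dependency-ordered pointer load */
	if (q)
		d = READ_ONCE(q->data);	/* dependent load; issued after the
					 * load of gp, even on DEC Alpha */
	rcu_read_unlock();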

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU. This means that for:

	a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE(). Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given. This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A, Y = LOAD *B, STORE *D = Z
	X = LOAD *A, STORE *D = Z, Y = LOAD *B
	Y = LOAD *B, X = LOAD *A, STORE *D = Z
	Y = LOAD *B, STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A, Y = LOAD *B
	STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded. This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences. Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock. If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field. (A sketch of this
     hazard follows the notes below.)

 (*) These guarantees apply only to properly aligned and sized scalar
     variables. "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long". "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively. Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6). The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

	NOTE 1: Two threads of execution can update and access
	separate memory locations without interfering with
	each other.

	NOTE 2: A bit-field and an adjacent non-bit-field member
	are in separate memory locations. The same applies
	to two bit-fields, if one is declared inside a nested
	structure declaration and the other is not, or if the two
	are separated by a zero-length bit-field declaration,
	or if they are separated by a non-bit-field member
	declaration. It is not safe to concurrently update two
	bit-fields in the same structure if all members declared
	between them are also bit-fields, no matter what the
	sizes of those intervening bit-fields happen to be.


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions. They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching. Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses. All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier. In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive. A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency. If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required. See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier. It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_acquire() operations. The latter builds the necessary
     ACQUIRE semantics from relying on a control dependency and smp_rmb().

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier. It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system. RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier"). In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier. However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible. In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.
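
As an illustration, here is the classic message-passing use of a
RELEASE/ACQUIRE pair (a minimal sketch; 'data' and 'flag' are hypothetical
shared variables):

	/* CPU 1 */			/* CPU 2 */
	WRITE_ONCE(data, 42);
	smp_store_release(&flag, 1);	if (smp_load_acquire(&flag))
						r1 = READ_ONCE(data); /* r1 == 42 */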

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions. For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.
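
For example (a sketch; 'v' is a hypothetical atomic_t), the _acquire and
_release suffixes select these variants:

	atomic_t v = ATOMIC_INIT(0);
	int old;

	old = atomic_fetch_add_acquire(1, &v);	/* ACQUIRE applies to the
						 * load portion only */
	old = atomic_fetch_sub_release(1, &v);	/* RELEASE applies to the
						 * store portion only */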

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device. If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees. Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system. The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses. CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed. To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But! CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <data dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines. The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line. Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


A data-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes
until they are certain (1) that the write will actually happen, (2)
of the location of the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.txt file: The compiler can and does
break dependencies in a great many highly creative ways.

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      WRITE_ONCE(*Q, 5);

Therefore, no data-dependency barrier is required to order the read into
Q with the store into *Q. In other words, this outcome is prohibited,
even without a data-dependency barrier:

	(Q == &B) && (B == 4)

Please note that this pattern should be rare. After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes. This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by a data dependency is local to
the CPU containing it. See the section on "Multicopy atomicity" for
more information.


The data dependency barrier is very important to the RCU system,
for example. See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h. This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

See also the subsection on "Cache Coherency" for a more thorough example.
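
In kernel code this pattern is normally written with the RCU primitives,
which incorporate the barriers shown above (a sketch reusing the variables
from the earlier example; rcu_assign_pointer() supplies the write barrier
and rcu_dereference() the data dependency barrier):

	CPU 1				CPU 2
	===============			===============
	B = 4;
	rcu_assign_pointer(P, &B);
					Q = rcu_dereference(P);
					D = *Q;	/* ordered: Q == &B
						 * implies D == 4 */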


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them. The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly. Consider the
following bit of code:

	q = READ_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = READ_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a. In such a
case what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}

However, stores are not speculated. This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	}

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional! Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'. Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, 1);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, 1);
		do_something();
	} else {
		smp_store_release(&b, 1);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional. For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 2);
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'. It is
tempting to add a barrier(), but this does not help. The conditional
is gone, and the barrier won't bring it back. Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

Please note once again that the stores to 'b' differ. If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation. Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows, defeating
the control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code. More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question. In particular, they do not
necessarily apply to code following the if-statement:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	} else {
		WRITE_ONCE(b, 2);
	}
	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition.
Unfortunately for this line of
reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

	ld r1,a
	cmp r1,$0
	cmov,ne r4,$1
	cmov,eq r4,$2
	st r4,b
	st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'. The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it. See the section on "Multicopy atomicity"
for more information.


In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything. If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores to
     the same variable, then those stores must be ordered, either by
     preceding both of them with smp_mb() or by using smp_store_release()
     to carry out the stores. Please note that it is -not- sufficient
     to use barrier() at the beginning of each leg of the "if" statement
     because, as shown by the example above, optimizing compilers can
     destroy the control dependency while respecting the letter of the
     barrier() law.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load. If the compiler is able
     to optimize the conditional away, it will have also optimized
     away the ordering. Careful use of READ_ONCE() and WRITE_ONCE()
     can help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence. Careful use of READ_ONCE() or
     atomic{,64}_read() can help to preserve your control dependency.
     Please see the COMPILER BARRIER section for more information.

 (*) Control dependencies apply only to the then-clause and else-clause
     of the if-statement containing the control dependency, including
     any functions that these two clauses call. Control dependencies
     do -not- apply to code following the if-statement containing the
     control dependency.

 (*) Control dependencies pair normally with other types of barriers.

 (*) Control dependencies do -not- provide multicopy atomicity. If you
     need all the CPUs to see a given store at the same time, use smp_mb().

 (*) Compilers do not understand control dependencies. It is therefore
     your job to ensure that they do not break your code.


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired. A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.
An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers. A write barrier pairs
with a data dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier. Similarly a
read barrier, control dependency, or a data dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
			      <read barrier>
			      y = READ_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
			      <data dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
				 <implicit control dependency>
				 WRITE_ONCE(y, 1);
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
	<write barrier>            \        <read barrier>
	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.
Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :

In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	    Makes sure all effects --->  \   ddddddddddddddddd  |       |
	    prior to the store of C       \     +-------+       |       |
	    are perceptible to             ---->| B->2  |------>|       |
	    subsequent loads                    +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads. Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :

If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by
CPU 2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2. No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet. This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             	+-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+

Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.
If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             	+-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+

but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             	+-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


MULTICOPY ATOMICITY
-------------------

Multicopy atomicity is a deeply intuitive notion about ordering that is
not always provided by real computer systems, namely that a given store
becomes visible at the same time to all CPUs, or, alternatively, that all
CPUs agree on the order in which all stores become visible. However,
support of full multicopy atomicity would rule out valuable hardware
optimizations, so a weaker form called ``other multicopy atomicity''
instead guarantees only that a given store becomes visible at the same
time to all -other- CPUs. The remainder of this document discusses this
weaker form, but for brevity will call it simply ``multicopy atomicity''.

The following example demonstrates multicopy atomicity:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<general barrier>	<read barrier>
				STORE Y=r1		LOAD X

Suppose that CPU 2's load from X returns 1, which it then stores to Y,
and CPU 3's load from Y returns 1. This indicates that CPU 1's store
to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
CPU 3's load from Y. In addition, the memory barriers guarantee that
CPU 2 executes its load before its store, and CPU 3 loads from Y before
it loads from X. The question is then "Can CPU 3's load from X return 0?"

Because CPU 3's load from X in some sense comes after CPU 2's load, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation follows from multicopy atomicity: if a load executing
on CPU B follows a load from the same variable executing on CPU A (and
CPU A did not originally store the value which it read), then on
multicopy-atomic systems, CPU B's load must return either the same value
that CPU A's load did or some later value. However, the Linux kernel
does not require systems to be multicopy atomic.

The use of a general memory barrier in the example above compensates
for any lack of multicopy atomicity. In the example, if CPU 2's load
from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
from X must indeed also return 1.
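
Expressed with the kernel's primitives, the example above would look
something like this (a sketch; X and Y are hypothetical shared variables):

	/* CPU 1 */		/* CPU 2 */		/* CPU 3 */
	WRITE_ONCE(X, 1);	r1 = READ_ONCE(X);	r2 = READ_ONCE(Y);
				smp_mb();		smp_rmb();
				WRITE_ONCE(Y, r1);	r3 = READ_ONCE(X);

	/* r1 == 1 && r2 == 1 implies r3 == 1, thanks to CPU 2's smp_mb(). */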

However, dependencies, read barriers, and write barriers are not always
able to compensate for non-multicopy atomicity. For example, suppose
that CPU 2's general barrier is removed from the above example, leaving
only the data dependency shown below:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<data dependency>	<read barrier>
				STORE Y=r1		LOAD X (reads 0)

This substitution allows non-multicopy atomicity to run rampant: in
this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and its load from X to return 0.

The key point is that although CPU 2's data dependency orders its load
and store, it does not guarantee to order CPU 1's store. Thus, if this
example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
store buffer or a level of cache, CPU 2 might have early access to CPU 1's
writes. General barriers are therefore required to ensure that all CPUs
agree on the combined order of multiple accesses.

General barriers can compensate not only for non-multicopy atomicity,
but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations. In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
which means that only those CPUs on the chain are guaranteed to agree
on the combined order of the accesses. For example, switching to C code
in deference to the ghost of Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a chain of
smp_store_release()/smp_load_acquire() pairs, the following outcome
is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

	r1 == 1 && r5 == 0

However, the ordering provided by a release-acquire chain is local
to the CPUs participating in that chain and does not apply to cpu3(),
at least aside from stores. Therefore, the following outcome is possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order. This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases. This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering. It does
-not- ensure that any particular value will be read. Therefore, the
following outcome is possible:

	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires full ordering of all operations,
use general barriers throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier(). However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.
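
The second property is what makes busy-wait loops like the following work
at all (a minimal sketch, assuming a shared flag 'go' that some other CPU
eventually sets):

	while (!go)		/* barrier() forces 'go' to be re-read */
		barrier();	/* on every pass through the loop */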

The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code. Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable. This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = READ_ONCE(x);
	a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.
     Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers. The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack. The overhead of this saving and later restoring
     is why compilers reload variables. Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be. For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch. The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'. If variable 'a' is shared, then the
     compiler's proof will be erroneous. Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE(). For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence. (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables. For example, suppose you have
     the following:

	a = 0;

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	... Code that does not store to variable a ...
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	WRITE_ONCE(a, 0);
	... Code that does not store to variable a ...
	WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}

	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective: with READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which a
     single large access is replaced by multiple smaller accesses.  For
     example, given an architecture having 16-bit store instructions with
     7-bit immediate fields, the compiler might be tempted to use two
     16-bit store-immediate instructions to implement the following
     32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

	WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE            MANDATORY               SMP CONDITIONAL
	=============== ======================= ===========================
	GENERAL         mb()                    smp_mb()
	WRITE           wmb()                   smp_wmb()
	READ            rmb()                   smp_rmb()
	DATA DEPENDENCY read_barrier_depends()  smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler
ordering.

Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. `a[b]` would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems, however the READ_ONCE()
macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor
compiled systems because it is assumed that a CPU will appear to be
self-consistent, and will order overlapping accesses correctly with
respect to itself.  However, see the subsection on "Virtual Machine
Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering
of references to shared memory on SMP systems, though the use of locking
instead is sufficient.

Mandatory barriers should not be used to control SMP effects, since
mandatory barriers impose unnecessary overhead on both SMP and UP
systems.  They may, however, be used to control MMIO effects on accesses
through relaxed memory I/O windows.  These barriers are required even on
non-SMP systems as they affect the order in which memory operations
appear to a device by prohibiting both the compiler and the CPU from
reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full
     memory barrier after it.  It isn't guaranteed to insert anything
     more than a compiler barrier in a UP compilation.  (A sketch of its
     use follows this list.)

 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic (such as add, subtract, increment and
     decrement) functions that don't return a value, especially when used
     for reference counting.  These functions do not imply memory
     barriers.

     These are also used for atomic bitop functions that do not return a
     value (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as
     being dead and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be
     set *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.

 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a
     device and uses a descriptor status value to indicate if the
     descriptor belongs to the device or the CPU, and a doorbell to
     notify it when new descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* force memory to sync before notifying device via MMIO */
		wmb();

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee the device has released
     ownership before we read the data from the descriptor, and the
     dma_wmb() allows us to guarantee the data is written to the
     descriptor before the device can see it now has ownership.  The
     wmb() is needed to guarantee that the cache coherent memory writes
     have completed before attempting a write to the cache incoherent
     MMIO region.

     See Documentation/DMA-API.txt for more information on consistent
     memory.
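
Returning to smp_store_mb() above: the full barrier it places after the
store can be used to close the classic store-buffering window, in which
each of two CPUs stores to one variable and then reads the other.  A
minimal sketch, where X and Y are illustrative shared variables, both
initially zero:

	static int X, Y;

	int cpu1(void)			/* runs on one CPU */
	{
		smp_store_mb(X, 1);	/* store to X, then full barrier */
		return READ_ONCE(Y);
	}

	int cpu2(void)			/* runs on another CPU */
	{
		smp_store_mb(Y, 1);	/* store to Y, then full barrier */
		return READ_ONCE(X);
	}

With the full barriers in place, cpu1() and cpu2() cannot both return
zero; with plain stores in place of smp_store_mb(), both could, because
each CPU's store might still be sitting in its store buffer when the
other CPU's load executes.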

MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped
I/O writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to
weakly ordered I/O regions to be partially ordered.  Its effects may go
beyond the CPU->Hardware interface and actually affect the hardware at
some level.

See the subsection "Acquires vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the linux kernel imply memory barriers,
amongst which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture
may provide more substantial guarantees, but these may not be relied upon
outside of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE"
operations for each construct.  These operations all imply certain
barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after
     the ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before
     the RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before
     the RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will
     be completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either
     due to being unable to get the lock immediately, or due to receiving
     an unblocked signal whilst asleep waiting for the lock to become
     available.  Failed locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being
only one-way barriers is that the effects of instructions outside of a
critical section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to
happen after the ACQUIRE, and an access following the RELEASE to happen
before the RELEASE, and the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP
compiled systems, and so cannot be counted on in such a situation to
actually achieve anything at all - especially with respect to I/O
accesses - unless combined with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
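
In concrete spinlock terms, the one-way nature of these barriers means
that the following can happen.  This is a minimal sketch; the lock and
the variables are illustrative:

	static DEFINE_SPINLOCK(lock);
	static int A, B;

	void example(void)
	{
		WRITE_ONCE(A, 1);	/* may execute after the ACQUIRE below */
		spin_lock(&lock);
		/* ... critical section ... */
		spin_unlock(&lock);
		WRITE_ONCE(B, 1);	/* may execute before the RELEASE above */
	}

Another CPU may therefore observe both stores as happening inside the
critical section; what it will never observe is an access from inside
the critical section escaping outside it.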

INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable
interrupts (RELEASE equivalent) will act as compiler barriers only.  So
if memory or I/O barriers are required in such a situation, they must be
provided by some other means.
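
For example, disabling interrupts around accesses to data shared with
another CPU does nothing to order those accesses as seen by that other
CPU; explicit SMP barriers are still required.  A minimal sketch with
illustrative variables:

	static int shared_a, shared_b;

	void publish(void)
	{
		unsigned long flags;

		local_irq_save(flags);		/* compiler barrier only */
		WRITE_ONCE(shared_a, 1);
		smp_wmb();			/* still needed: disabling
						 * interrupts does not order
						 * these stores for other CPUs */
		WRITE_ONCE(shared_b, 1);
		local_irq_restore(flags);	/* compiler barrier only */
	}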

SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as
an interaction between two pieces of data: the task state of the task
waiting for the event and the global data used to indicate the event.
To make sure that these appear to happen in the right order, the
primitives to begin the process of going to sleep, and the primitives
to initiate a wake up imply certain barriers.

Firstly, the sleeper normally follows something like this sequence of
events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by
set_current_state() after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the
state.  The whole sequence above is available in various canned forms,
all of which interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();
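
For example, the first of those canned forms replaces the open-coded
sleeper loop above with a single call, with the memory barrier supplied
internally.  A minimal sketch, reusing event_indicated from above and an
illustrative wait queue declared with DECLARE_WAIT_QUEUE_HEAD():

	/* sleeper: equivalent to the set_current_state()/schedule() loop */
	wait_event(event_wait_queue, event_indicated);

	/* waker */
	event_indicated = 1;
	wake_up(&event_wait_queue);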

Secondly, code that performs a wake up normally follows something like
this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if
they wake something up.  The barrier occurs before the task state is
cleared, and so sits between the STORE to indicate the event and the
STORE to set TASK_RUNNING:

	CPU 1				CPU 2
	===============================	===============================
	set_current_state();		STORE event_indicated
	  smp_store_mb();		wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated

To repeat, this write memory barrier is present if and only if something
is actually awakened.  To see this, consider the following sequence of
events, where X and Y are both initially zero:

	CPU 1				CPU 2
	===============================	===============================
	X = 1;				STORE event_indicated
	smp_mb();			wake_up();
	Y = 1;				wait_event(wq, Y == 1);
	wake_up();			  load from Y sees 1, no memory barrier
					load from X might see 0

In contrast, if a wakeup does occur, CPU 2's load from X would be
guaranteed to see 1.

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();


[!] Note that the memory barriers implied by the sleeper and the waker
do _not_ order multiple stores before the wake-up with respect to loads
of those stored values after the sleeper has called set_current_state().
For instance, if the sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived
by the sleeper as coming after the change to my_data.  In such a
circumstance, the code on both sides must interpolate its own memory
barriers between the separate data accesses.  Thus the above sleeper
ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of
barrier: one that does affect memory access ordering on other CPUs,
within the context of conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q),
and three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses
to *A through *H occur in, other than the constraints imposed by the
separate locks on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


ACQUIRES VS I/O ACCESSES
------------------------

Under certain circumstances (especially involving NUMA), I/O accesses
within two spinlocked sections on two different CPUs may be seen as
interleaved by the PCI bridge, because the PCI bridge does not
necessarily participate in the cache-coherence protocol, and is therefore
incapable of issuing the required read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping
the spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI
bridge before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates
the need for the mmiowb(), because the load forces the store to complete
before the load is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/driver-api/device-io.rst for more information.

=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not
going to be a problem as a single-threaded linear piece of code will
still appear to work correctly, even if it's in an SMP kernel.  There
are, however, four circumstances in which reordering definitely _could_
be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in
the system may be working on the same data set at the same time.  This
can cause synchronisation problems, and the usual way of dealing with
them is to use locks.  Locks, however, are quite expensive, and so it may
be preferable to operate without the use of a lock if at all possible.
In such a case operations that affect both CPUs may have to be carefully
ordered to prevent a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting
process is queued on the semaphore, by virtue of it having a piece of its
stack linked to the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions
have to:

 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the
     semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does
not get the lock again; it instead just waits for its task pointer to be
cleared before proceeding.  Since the record is on the waiter's stack,
this means that if the task pointer is cleared _before_ the next pointer
in the list is read, another CPU might start processing the waiter and
might clobber the waiter's stack before the up*() function has a chance
to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the
down_xxx() function has to needlessly get the spinlock again after being
woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses
before the barrier will appear to happen before all the memory accesses
after the barrier with respect to the other CPUs on the system.  It does
_not_ guarantee that all the memory accesses before the barrier will be
complete by the time the barrier instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just
a compiler barrier, thus making sure the compiler emits the instructions
in the right order without actually intervening in the CPU.  Since
there's only one CPU, that CPU's dependency ordering logic will take care
of everything else.


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations,
atomic operations are noted specially as some of them imply full memory
barriers and some don't, but they're very heavily relied on as a group
throughout the kernel.

See Documentation/atomic_t.txt for more information.
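
As a rough illustration of the split: atomic read-modify-write operations
that return a value are fully ordered, while those that return void imply
no memory barrier at all (which is why smp_mb__before_atomic() and
smp_mb__after_atomic() exist, as described earlier).  A minimal sketch
using an illustrative reference count:

	atomic_t refs = ATOMIC_INIT(1);

	atomic_inc(&refs);		/* returns void: implies no barrier */

	if (atomic_dec_and_test(&refs))	/* returns a value: implies a full
					 * barrier before and after the op */
		release_the_object();	/* illustrative function */

Documentation/atomic_t.txt is the authoritative statement of which
operations imply which barriers.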

ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're
just a set of memory locations.  To control such a device, the driver
usually has to make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential
problem in that the carefully sequenced accesses in the driver code won't
reach the device in the requisite order if the CPU or the compiler thinks
it is more efficient to reorder, combine or merge accesses - something
that would cause the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate
accessor routines - such as inb() or writel() - which know how to make
such accesses appropriately sequential.  Whilst this, for the most part,
renders the explicit use of memory barriers unnecessary, there are a
couple of situations where they might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all
     CPUs, and so for _all_ general drivers locks should be used and
     mmiowb() must be issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window
     with relaxed memory access properties, then _mandatory_ memory
     barriers are required to enforce ordering.

See Documentation/driver-api/device-io.rst for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and
thus the two parts of the driver may interfere with each other's attempts
to control or access the device.

This may be alleviated - at least in part - by disabling local interrupts
(a form of locking), such that the critical operations are all contained
within the interrupt-disabled section in the driver.  Whilst the driver's
interrupt routine is executing, the driver's core may not run on the same
CPU, and its interrupt is not permitted to happen again until the current
interrupt has been handled, thus the interrupt handler does not need to
lock against that.

However, consider a driver that was talking to an ethernet card that
sports an address register and a data register.  If that driver's core
talks to the card under interrupt-disablement and then the driver's
interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(ADDR, 3);
	writew(DATA, y);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(ADDR, 4);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done
inside an interrupt disabled section may leak outside of it and may
interleave with accesses performed in an interrupt - and vice versa -
unless implicit or explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside
such sections will include synchronous load operations on strictly
ordered I/O registers that form implicit I/O barriers.  If this isn't
sufficient then an mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two
routines running on separate CPUs that communicate with each other.  If
such a case is likely, then interrupt-disabling locks should be used to
guarantee ordering.

==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space,
     but that's primarily a CPU-specific concept.  The i386 and x86_64
     processors do indeed have special I/O space access cycles and
     instructions, but many CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on
     such CPUs as i386 and x86_64 - readily maps to the CPU's concept of
     I/O space.  However, it may also be mapped as a virtual I/O space in
     the CPU's memory map, particularly on those CPUs that don't support
     alternate I/O spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully
     honour that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other
     types of memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the
     characteristics defined for the memory window through which they're
     accessing.  On later i386 architecture machines, for example, this
     is controlled by way of the MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and
     uncombined, provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same
     location is preferred[*], but a load from the same device or from
     configuration space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written
	 to may cause a malfunction - consider the 16550 Rx/Tx serial
	 registers for example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be
     required to force stores to be ordered.

     Please refer to the PCI specification for more information on
     interactions between PCI transactions.

 (*) readX_relaxed(), writeX_relaxed():

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering
     with respect to normal memory accesses (e.g. DMA buffers) nor do
     they guarantee ordering with respect to LOCK or UNLOCK operations.
     If the latter is required, an mmiowb() barrier can be used.  Note
     that relaxed accesses to the same peripheral are guaranteed to be
     ordered with respect to each other.  (A sketch follows this list.)

 (*) ioreadX(), iowriteX():

     These will perform appropriately for the type of access they're
     actually doing, be it inX()/outX() or readX()/writeX().
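
To illustrate the readX_relaxed()/writeX_relaxed() entry above: because
relaxed accesses to the same peripheral stay ordered with respect to each
other, a driver can batch register writes using the relaxed forms and
reserve the plain accessor for the final, kicking-off write.  A minimal
sketch; the register offsets, values and the 'base' mapping are
illustrative only:

	/* program an illustrative DMA engine; relaxed writes to the
	 * same peripheral are guaranteed to reach it in order */
	writel_relaxed(lower_32_bits(dma_addr), base + REG_ADDR_LO);
	writel_relaxed(upper_32_bits(dma_addr), base + REG_ADDR_HI);
	writel_relaxed(len, base + REG_LEN);

	/* plain writel() for the doorbell; if the device will read a
	 * buffer in coherent memory, a wmb() is still needed first,
	 * as in the dma_wmb()/wmb() example earlier */
	writel(CTRL_GO, base + REG_CTRL);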
2642 2643 2644======================================== 2645ASSUMED MINIMUM EXECUTION ORDERING MODEL 2646======================================== 2647 2648It has to be assumed that the conceptual CPU is weakly-ordered but that it will 2649maintain the appearance of program causality with respect to itself. Some CPUs 2650(such as i386 or x86_64) are more constrained than others (such as powerpc or 2651frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside 2652of arch-specific code. 2653 2654This means that it must be considered that the CPU will execute its instruction 2655stream in any order it feels like - or even in parallel - provided that if an 2656instruction in the stream depends on an earlier instruction, then that 2657earlier instruction must be sufficiently complete[*] before the later 2658instruction may proceed; in other words: provided that the appearance of 2659causality is maintained. 2660 2661 [*] Some instructions have more than one effect - such as changing the 2662 condition codes, changing registers or changing memory - and different 2663 instructions may depend on different effects. 2664 2665A CPU may also discard any instruction sequence that winds up having no 2666ultimate effect. For example, if two adjacent instructions both load an 2667immediate value into the same register, the first may be discarded. 2668 2669 2670Similarly, it has to be assumed that compiler might reorder the instruction 2671stream in any way it sees fit, again provided the appearance of causality is 2672maintained. 2673 2674 2675============================ 2676THE EFFECTS OF THE CPU CACHE 2677============================ 2678 2679The way cached memory operations are perceived across the system is affected to 2680a certain extent by the caches that lie between CPUs and memory, and by the 2681memory coherence system that maintains the consistency of state in the system. 2682 2683As far as the way a CPU interacts with another part of the system through the 2684caches goes, the memory system has to include the CPU's caches, and memory 2685barriers for the most part act at the interface between the CPU and its cache 2686(memory barriers logically act on the dotted line in the following diagram): 2687 2688 <--- CPU ---> : <----------- Memory -----------> 2689 : 2690 +--------+ +--------+ : +--------+ +-----------+ 2691 | | | | : | | | | +--------+ 2692 | CPU | | Memory | : | CPU | | | | | 2693 | Core |--->| Access |----->| Cache |<-->| | | | 2694 | | | Queue | : | | | |--->| Memory | 2695 | | | | : | | | | | | 2696 +--------+ +--------+ : +--------+ | | | | 2697 : | Cache | +--------+ 2698 : | Coherency | 2699 : | Mechanism | +--------+ 2700 +--------+ +--------+ : +--------+ | | | | 2701 | | | | : | | | | | | 2702 | CPU | | Memory | : | CPU | | |--->| Device | 2703 | Core |--->| Access |----->| Cache |<-->| | | | 2704 | | | Queue | : | | | | | | 2705 | | | | : | | | | +--------+ 2706 +--------+ +--------+ : +--------+ +-----------+ 2707 : 2708 : 2709 2710Although any particular load or store may not actually appear outside of the 2711CPU that issued it since it may have been satisfied within the CPU's own cache, 2712it will still appear as if the full memory access had taken place as far as the 2713other CPUs are concerned since the cache coherency mechanisms will migrate the 2714cacheline over to the accessing CPU and propagate the effects upon conflict. 

The CPU core may execute instructions in any order it deems fit, provided
the expected program causality appears to be maintained.  Some of the
instructions generate load and store operations which then go into the
queue of memory accesses to be performed.  The core may place these in
the queue in any order it wishes, and continue execution until it is
forced to wait for an instruction to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things,
and the order in which the effects are perceived to happen by the other
observers in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always
see their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This
depends on the properties of the memory window through which devices are
accessed and/or the use of any special device communication instructions
the CPU may have.


CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that
coherency will be ordered.  This means that whilst changes made on one
CPU will eventually become visible on all CPUs, there's no guarantee that
they will become apparent in the same order on those other CPUs.

Consider dealing with a system that has a pair of CPUs (1 & 2), each of
which has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has
C/D):

	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  :  +--->| Cache A |<------->|        |
	|        |  :  |    +---------+         |        |
	|  CPU 1 |<---+                         |        |
	|        |  :  |    +---------+         |        |
	+--------+  :  +--->| Cache B |<------->|        |
	            :       +---------+         |        |
	            :                           | Memory |
	            :       +---------+         | System |
	+--------+  :  +--->| Cache C |<------->|        |
	|        |  :  |    +---------+         |        |
	|  CPU 2 |<---+                         |        |
	|        |  :  |    +---------+         |        |
	+--------+  :  +--->| Cache D |<------->|        |
	            :       +---------+         |        |
	            :                           +--------+
	            :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may
     still be resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may
     still be resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may
     be making use of the bus to access the rest of the system - perhaps
     to displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that
     cache to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write
barrier between them to guarantee that they will appear to reach that
CPU's caches in the requisite order:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();			Make sure change to v is visible
					 before change to p
	<A:modify v=2>			v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>			p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive
that the local CPU's caches have apparently been updated in the correct
order.  But now imagine that the second CPU wants to read those values:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;

The above pair of reads may then fail to happen in the expected order, as
the cacheline holding p may get updated in one of the second CPU's caches
whilst the update to the cacheline holding v is delayed in the other of
the second CPU's caches by some other cache event:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually,
there's no guarantee that, without intervention, the order of update will
be the same as that committed on CPU 1.

To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its
coherency queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache

This sort of problem can be encountered on DEC Alpha processors as they
have a split cache that improves performance by making better use of the
data bus.  Whilst most CPUs do imply a data dependency barrier on the
read when a memory access depends on a read, not all do, so it may not be
relied on.

Other CPUs may also have split caches, but must coordinate between the
various cachelets for normal memory accesses.  The semantics of the Alpha
removes the need for coordination in the absence of memory barriers.
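
In practice, this publish/subscribe pattern is almost always expressed
with the RCU primitives, which bundle the barriers shown above.  A
minimal sketch along the lines of the example above, with p, v and q as
illustrative variables:

	/* CPU 1: publisher */
	v = 2;
	rcu_assign_pointer(p, &v);	/* implies the needed write barrier */

	/* CPU 2: subscriber */
	rcu_read_lock();
	q = rcu_dereference(p);		/* implies smp_read_barrier_depends() */
	if (q)
		x = *q;			/* if q == &v, this is guaranteed
					 * to read the value 2 */
	rcu_read_unlock();

As noted earlier, open-coding smp_read_barrier_depends() should normally
be avoided in favour of rcu_dereference() and friends.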

CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing
DMA.  In such cases, a device attempting DMA may obtain stale data from
RAM because dirty cache lines may be resident in the caches of various
CPUs, and may not have been written back to RAM yet.  To deal with this,
the appropriate part of the kernel must flush the overlapping bits of
cache on each CPU (and maybe invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by
dirty cache lines being written back to RAM from a CPU's cache after the
device has installed its own data, or cache lines present in the CPU's
cache may simply obscure the fact that RAM has been updated, until at
such time as the cacheline is discarded from the CPU's cache and
reloaded.  To deal with this, the appropriate part of the kernel must
invalidate the overlapping bits of the cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.
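
Drivers do not normally perform these flushes and invalidations by hand:
the streaming DMA mapping API issues them as the architecture requires.
A minimal sketch, where 'dev', 'buf' and 'len' are illustrative:

	/* CPU -> device transfer: any dirty cache lines covering buf
	 * are written back (and/or invalidated) as required before the
	 * device is allowed to see the buffer */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* ... hand 'handle' to the device and let it run ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

See Documentation/DMA-API.txt for the full interface.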

CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are
part of a window in the CPU's memory space that has different properties
assigned than the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass
the caching entirely and go directly to the device buses.  This means
MMIO accesses may, in effect, overtake accesses to cached memory that
were emitted earlier.  A memory barrier isn't sufficient in such a case,
but rather the cache must be flushed between the cached memory write and
the MMIO access if the two are in any way dependent.


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for
example, given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation
for each instruction before moving on to the next one, leading to a
definite sequence of operations as seen by external observers in the
system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the
above assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it
     prove to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been
     fetched at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better
     use of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking
     to memory or I/O hardware that can do batched accesses of adjacent
     locations, thus cutting down on transaction setup costs (memory and
     PCI devices may both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst
     cache-coherency mechanisms may alleviate this - once the store has
     actually hit the cache - there's no guarantee that the coherency
     management will be propagated in order to other CPUs.

So what another CPU, say, might actually observe from the above piece of
code is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see
its _own_ accesses appear to be correctly ordered, without the need for a
memory barrier.  For instance with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed
that the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost
any combination of elements combined or discarded, provided the program's
view of the world remains consistent.  Note that READ_ONCE() and
WRITE_ONCE() are -not- optional in the above example, as there are
architectures where a given CPU might reorder successive loads to the
same location.  On such architectures, READ_ONCE() and WRITE_ONCE() do
whatever is necessary to prevent this, for example, on Itanium the
volatile casts used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions (respectively) that prevent such
reordering.

The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only
that, some versions of the Alpha CPU have a split data cache, permitting
them to have two semantically-related cache lines updated at separate
times.  This is where the data dependency barrier really becomes
necessary as this synchronises both caches with the memory coherence
system, thus making it seem like pointer changes vs new data occur in the
right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects
even if the guest itself is compiled without SMP support.  This is an
artifact of interfacing with an SMP host while running an UP kernel.
Using mandatory barriers for this use-case would be possible but is often
suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are
available.  These have the same effect as smp_mb() etc when SMP is
enabled, but generate identical code for SMP and non-SMP systems.  For
example, virtual machine guests should use virt_mb() rather than smp_mb()
when synchronizing against a (possibly SMP) host.

These are equivalent to their smp_mb() etc counterparts in all other
respects; in particular, they do not control MMIO effects: to control
MMIO effects, use mandatory barriers.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the
need of a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
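
As a flavour of what that document describes, here is a minimal sketch of
a single-producer/single-consumer ring using an acquire/release pair.
All names are illustrative; the real kernel helpers live in
include/linux/circ_buf.h:

	#define RING_SIZE	16	/* must be a power of two */

	struct ring {
		unsigned int head;	/* written only by the producer */
		unsigned int tail;	/* written only by the consumer */
		int buf[RING_SIZE];
	};

	static bool produce(struct ring *r, int item)
	{
		unsigned int head = r->head;
		unsigned int tail = smp_load_acquire(&r->tail);

		if (head - tail >= RING_SIZE)
			return false;			/* full */
		r->buf[head & (RING_SIZE - 1)] = item;
		smp_store_release(&r->head, head + 1);	/* publish the item */
		return true;
	}

	static bool consume(struct ring *r, int *item)
	{
		unsigned int tail = r->tail;
		unsigned int head = smp_load_acquire(&r->head);

		if (tail == head)
			return false;			/* empty */
		*item = r->buf[tail & (RING_SIZE - 1)];
		smp_store_release(&r->tail, tail + 1);	/* free the slot */
		return true;
	}

Each smp_load_acquire() pairs with the other side's smp_store_release(),
so the consumer never reads a slot before the producer's store to it is
visible, and the producer never overwrites a slot before the consumer is
done with it.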

==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture
profile)
	Chapter B2: The AArch64 Application Level Memory Model

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

Storage in the PowerPC (Stone and Fitzgerald)

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
		     Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and
Caching for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access