			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Will Deacon <will.deacon@arm.com>
    Peter Zijlstra <peterz@infradead.org>

==========
DISCLAIMER
==========

This document is not a specification; it is intentionally (for the sake of
brevity) and unintentionally (due to being human) incomplete.  This document
is meant as a guide to using the various memory barriers provided by Linux,
but in case of any doubt (and there are many) please ask.  Some doubts may be
resolved by referring to the formal memory consistency model and related
documentation at tools/memory-model/.  Nevertheless, even this memory
model should be viewed as the collective opinion of its maintainers rather
than as an infallible oracle.

To repeat, this document is not a specification of what Linux expects from
hardware.

The purpose of this document is twofold:

 (1) to specify the minimum functionality that one can rely on for any
     particular barrier, and

 (2) to provide a guide as to how to use the barriers that are available.

Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
that, that architecture is incorrect.

Note also that it is possible that a barrier may be a no-op for an
architecture because the way that arch works renders an explicit barrier
unnecessary in that case.


========
CONTENTS
========

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers (historical).
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Multicopy atomicity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Lock acquisition functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU acquiring barrier effects.

     - Acquires vs memory accesses.
     - Acquires vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.
     - Virtual Machine Guests.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).
To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = READ_ONCE(P); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  However, on DEC Alpha, READ_ONCE() also
     emits a memory-barrier instruction, so that a DEC Alpha CPU will
     instead issue the following memory operations:

	Q = LOAD P, MEMORY_BARRIER, D = LOAD *Q, MEMORY_BARRIER

     Whether on DEC Alpha or not, the READ_ONCE() also prevents compiler
     mischief.

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the COMPILER BARRIER section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A, Y = LOAD *B, STORE *D = Z
	X = LOAD *A, STORE *D = Z, Y = LOAD *B
	Y = LOAD *B, X = LOAD *A, STORE *D = Z
	Y = LOAD *B, STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A, Y = LOAD *B
	STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.
If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field (see the sketch
     following this list).

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations.  The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration.  It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.
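
For example, here is a minimal sketch of the bitfield hazard described in
the first two anti-guarantees above; the structure, the field names and the
locks lock_a and lock_b are all hypothetical:

	struct flags {
		int f1:1;	/* Intended to be guarded by lock_a. */
		int f2:1;	/* Intended to be guarded by lock_b. */
	};
	struct flags shared;

	/* CPU 1, holding only the hypothetical lock_a: */
	shared.f1 = 1;	/* May compile to: load word; set bit 0; store word. */

	/* CPU 2, holding only the hypothetical lock_b: */
	shared.f2 = 1;	/* May compile to: load word; set bit 1; store word. */

Because both updates can be compiled as non-atomic read-modify-write
sequences on the same memory location, one CPU's store can silently undo
the other CPU's update, even though each holds "its" lock.  A single lock
covering both fields avoids the problem.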


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores _before_ a write barrier
     will occur _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_load_acquire() operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.
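
As an illustration, here is a minimal sketch of a RELEASE publishing data
that a paired ACQUIRE then consumes; the variables and the do_something()
function are hypothetical:

	int data;
	int ready;

	/* CPU 1 */
	data = 42;			/* Ordered before the RELEASE below. */
	smp_store_release(&ready, 1);	/* RELEASE operation. */

	/* CPU 2 */
	if (smp_load_acquire(&ready))	/* ACQUIRE, pairs with the RELEASE. */
		do_something(data);	/* Guaranteed to see data == 42. */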

A subset of the atomic operations described in atomic_t.txt have ACQUIRE and
RELEASE variants in addition to fully-ordered and relaxed (no barrier
semantics) definitions.  For compound atomics performing both a load and a
store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
only to the store portion of the operation.

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS (HISTORICAL)
-------------------------------------

As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was
added to READ_ONCE(), which means that about the only people who
need to pay attention to this section are those working on DEC Alpha
architecture-specific code and those working on READ_ONCE() itself.
For those who need it, and for those who are interested in the history,
here is the story of data-dependency barriers.

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

While this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <data dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.


[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


A data-dependency barrier is not required to order dependent writes
because the CPUs that the Linux kernel supports don't do writes
until they are certain (1) that the write will actually happen, (2)
of the location of the write, and (3) of the value to be written.
But please carefully read the "CONTROL DEPENDENCIES" section and the
Documentation/RCU/rcu_dereference.txt file:  The compiler can and does
break dependencies in a great many highly creative ways.

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      WRITE_ONCE(*Q, 5);

Therefore, no data-dependency barrier is required to order the read into
Q with the store into *Q.  In other words, this outcome is prohibited,
even without a data-dependency barrier:

	(Q == &B) && (B == 4)

Please note that this pattern should be rare.  After all, the whole point
of dependency ordering is to -prevent- writes to the data structure, along
with the expensive cache misses associated with those writes.  This pattern
can be used to record rare error conditions and the like, and the CPUs'
naturally occurring ordering prevents such records from being lost.


Note well that the ordering provided by a data dependency is local to
the CPU containing it.  See the section on "Multicopy atomicity" for
more information.


The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

See also the subsection on "Cache Coherency" for a more thorough example.
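
For instance, an RCU-style publish/subscribe sequence using those two
primitives might look like the following sketch; the structure, the gp
pointer and the do_something_with() function are hypothetical:

	struct foo {
		int a;
	};
	struct foo *gp;

	/* Publisher */
	struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

	p->a = 1;			/* Initialised before publication. */
	rcu_assign_pointer(gp, p);	/* Supplies the needed write barrier. */

	/* Subscriber, within an RCU read-side critical section */
	struct foo *q = rcu_dereference(gp);	/* Supplies any needed
						 * dependency ordering. */
	if (q)
		do_something_with(q->a);	/* Sees 1, never pre-init
						 * garbage. */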


CONTROL DEPENDENCIES
--------------------

Control dependencies can be a bit tricky because current compilers do
not understand them.  The purpose of this section is to help you prevent
the compiler's ignorance from breaking your code.

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly.  Consider the
following bit of code:

	q = READ_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = READ_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	}

Control dependencies pair normally with other types of barriers.
That said, please note that neither READ_ONCE() nor WRITE_ONCE()
are optional!  Without the READ_ONCE(), the compiler might combine the
load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
the compiler might combine the store to 'b' with other stores to 'b'.
Either can result in highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, 1);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, 1);
		do_something();
	} else {
		smp_store_release(&b, 1);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 2);
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1);  /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, 1);
		do_something();
	} else {
		WRITE_ONCE(b, 2);
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.
Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do
not necessarily apply to code following the if-statement:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, 1);
	} else {
		WRITE_ONCE(b, 2);
	}
	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */

It is tempting to argue that there in fact is ordering because the
compiler cannot reorder volatile accesses and also cannot reorder
the writes to 'b' with the condition.  Unfortunately for this line
of reasoning, the compiler might compile the two writes to 'b' as
conditional-move instructions, as in this fanciful pseudo-assembly
language:

	ld r1,a
	cmp r1,$0
	cmov,ne r4,$1
	cmov,eq r4,$2
	st r4,b
	st $1,c

A weakly ordered CPU would have no dependency of any sort between the load
from 'a' and the store to 'c'.  The control dependencies would extend
only to the pair of cmov instructions and the store depending on them.
In short, control dependencies apply only to the stores in the then-clause
and else-clause of the if-statement in question (including functions
invoked by those two clauses), not to code following that if-statement.


Note well that the ordering provided by a control dependency is local
to the CPU containing it.  See the section on "Multicopy atomicity"
for more information.


In summary:

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores to
      the same variable, then those stores must be ordered, either by
      preceding both of them with smp_mb() or by using smp_store_release()
      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at beginning of each leg of the "if" statement
      because, as shown by the example above, optimizing compilers can
      destroy the control dependency while respecting the letter of the
      barrier() law.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler is able
      to optimize the conditional away, it will have also optimized
      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
      can help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of READ_ONCE() or
      atomic{,64}_read() can help to preserve your control dependency.
      Please see the COMPILER BARRIER section for more information.

  (*) Control dependencies apply only to the then-clause and else-clause
      of the if-statement containing the control dependency, including
      any functions that these two clauses call.  Control dependencies
      do -not- apply to code following the if-statement containing the
      control dependency.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide multicopy atomicity.  If you
      need all the CPUs to see a given store at the same time, use smp_mb().

  (*) Compilers do not understand control dependencies.  It is therefore
      your job to ensure that they do not break your code.
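
Putting these rules together, a load-store control dependency that current
compilers should preserve looks like this sketch; 'a' and 'b' are
hypothetical shared variables:

	q = READ_ONCE(a);		/* Compiler must emit this load. */
	if (q)				/* Run-time conditional on 'q'. */
		WRITE_ONCE(b, 1);	/* Compiler must emit this store,
					 * and the CPU must order it after
					 * the load from 'a'. */

No ordering is implied for prior stores, for later loads, or for code
following the "if" statement; if you need any of those, use an explicit
barrier such as smp_mb().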


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without multicopy atomicity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with a data dependency barrier, a control dependency, an acquire barrier,
a release barrier, a read barrier, or a general barrier.  Similarly a
read barrier, control dependency, or a data dependency barrier pairs
with a write barrier, an acquire barrier, a release barrier, or a
general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
			      <read barrier>
			      y = READ_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
			      <data dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
				 <implicit control dependency>
				 WRITE_ONCE(y, 1);
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
	<write barrier>            \        <read barrier>
	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
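
Expressed with the kernel's actual barrier primitives, the first pairing
above might look like the following sketch, again with hypothetical
variables:

	/* CPU 1 */
	WRITE_ONCE(a, 1);
	smp_wmb();		/* Write barrier: orders the two stores. */
	WRITE_ONCE(b, 2);

	/* CPU 2 */
	x = READ_ONCE(b);
	smp_rmb();		/* Read barrier: pairs with the smp_wmb(). */
	y = READ_ONCE(a);

	/* If x == 2, then y is guaranteed to be 1. */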


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to              ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This
permits the actual load instruction to potentially complete immediately
because the CPU already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


MULTICOPY ATOMICITY
--------------------

Multicopy atomicity is a deeply intuitive notion about ordering that is
not always provided by real computer systems, namely that a given store
becomes visible at the same time to all CPUs, or, alternatively, that all
CPUs agree on the order in which all stores become visible.  However,
support of full multicopy atomicity would rule out valuable hardware
optimizations, so a weaker form called ``other multicopy atomicity''
instead guarantees only that a given store becomes visible at the same
time to all -other- CPUs.  The remainder of this document discusses this
weaker form, but for brevity will call it simply ``multicopy atomicity''.

The following example demonstrates multicopy atomicity:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<general barrier>	<read barrier>
				STORE Y=r1		LOAD X

Suppose that CPU 2's load from X returns 1, which it then stores to Y,
and CPU 3's load from Y returns 1.  This indicates that CPU 1's store
to X precedes CPU 2's load from X and that CPU 2's store to Y precedes
CPU 3's load from Y.  In addition, the memory barriers guarantee that
CPU 2 executes its load before its store, and CPU 3 loads from Y before
it loads from X.  The question is then "Can CPU 3's load from X return 0?"
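
For concreteness, the same test can be written as kernel-style C, as in the
following sketch; r1, r2 and r3 stand for CPU-local variables:

	/* { X = 0, Y = 0 } */

	/* CPU 1 */
	WRITE_ONCE(X, 1);

	/* CPU 2 */
	r1 = READ_ONCE(X);
	smp_mb();		/* General barrier. */
	WRITE_ONCE(Y, r1);

	/* CPU 3 */
	r2 = READ_ONCE(Y);
	smp_rmb();		/* Read barrier. */
	r3 = READ_ONCE(X);	/* If r1 == 1 and r2 == 1, can r3 == 0? */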

Because CPU 3's load from X in some sense comes after CPU 2's load, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation follows from multicopy atomicity: if a load executing
on CPU B follows a load from the same variable executing on CPU A (and
CPU A did not originally store the value which it read), then on
multicopy-atomic systems, CPU B's load must return either the same value
that CPU A's load did or some later value.  However, the Linux kernel
does not require systems to be multicopy atomic.

The use of a general memory barrier in the example above compensates
for any lack of multicopy atomicity.  In the example, if CPU 2's load
from X returns 1 and CPU 3's load from Y returns 1, then CPU 3's load
from X must indeed also return 1.

However, dependencies, read barriers, and write barriers are not always
able to compensate for non-multicopy atomicity.  For example, suppose
that CPU 2's general barrier is removed from the above example, leaving
only the data dependency shown below:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		r1=LOAD X (reads 1)	LOAD Y (reads 1)
				<data dependency>	<read barrier>
				STORE Y=r1		LOAD X (reads 0)

This substitution allows non-multicopy atomicity to run rampant: in
this example, it is perfectly legal for CPU 2's load from X to return 1,
CPU 3's load from Y to return 1, and its load from X to return 0.

The key point is that although CPU 2's data dependency orders its load
and store, it does not guarantee to order CPU 1's store.  Thus, if this
example runs on a non-multicopy-atomic system where CPUs 1 and 2 share a
store buffer or a level of cache, CPU 2 might have early access to CPU 1's
writes.  General barriers are therefore required to ensure that all CPUs
agree on the combined order of multiple accesses.

General barriers can compensate not only for non-multicopy atomicity,
but can also generate additional ordering that can ensure that -all-
CPUs will perceive the same order of -all- operations.  In contrast, a
chain of release-acquire pairs does not provide this additional ordering,
which means that only those CPUs on the chain are guaranteed to agree
on the combined order of the accesses.
For example, switching to C code
in deference to the ghost of Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a chain of
smp_store_release()/smp_load_acquire() pairs, the following outcome
is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

	r1 == 1 && r5 == 0

However, the ordering provided by a release-acquire chain is local
to the CPUs participating in that chain and does not apply to cpu3(),
at least aside from stores.  Therefore, the following outcome is possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order.  This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering.  It does
-not- ensure that any particular value will be read.  Therefore, the
following outcome is possible:

	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires full ordering of all operations,
use general barriers throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop (see
     the sketch following this list).
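
The second property can be seen in a busy-wait loop; without the barrier(),
the compiler would be within its rights to hoist the load out of the loop
and spin on a stale register copy.  The 'flag' variable in this sketch is
hypothetical:

	while (!flag)		/* 'flag' is set elsewhere, for example by
				 * an interrupt handler. */
		barrier();	/* Forces 'flag' to be reloaded from memory
				 * on each pass through the loop. */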
For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

        while (tmp = a)
                do_something_with(tmp);

     Into this:

        do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch.  The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'.  If variable 'a' is shared, then the
     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

        while (tmp = READ_ONCE(a))
                do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

        while ((tmp = READ_ONCE(a)) % MAX)
                do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

        a = 0;
        ... Code that does not store to variable a ...
        a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

        WRITE_ONCE(a, 0);
        ... Code that does not store to variable a ...
        WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

        void process_level(void)
        {
                msg = get_message();
                flag = true;
        }

        void interrupt_handler(void)
        {
                if (flag)
                        process_message(msg);
        }

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

        void process_level(void)
        {
                flag = true;
                msg = get_message();
        }

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

        void process_level(void)
        {
                WRITE_ONCE(msg, get_message());
                WRITE_ONCE(flag, true);
        }

        void interrupt_handler(void)
        {
                if (READ_ONCE(flag))
                        process_message(READ_ONCE(msg));
        }

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.
Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective:  With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

        if (a)
                b = a;
        else
                b = 42;

     The compiler might save a branch by optimizing this as follows:

        b = 42;
        if (a)
                b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

        if (a)
                WRITE_ONCE(b, a);
        else
                WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having 16-bit store instructions
     with 7-bit immediate fields, the compiler might be tempted to use
     two 16-bit store-immediate instructions to implement the following
     32-bit store:

        p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

        WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

        struct __attribute__((__packed__)) foo {
                short a;
                int b;
                short c;
        };
        struct foo foo1, foo2;
        ...

        foo2.a = foo1.a;
        foo2.b = foo1.b;
        foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

        foo2.a = foo1.a;
        WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
        foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has the following basic CPU memory barriers:

        TYPE             MANDATORY               SMP CONDITIONAL
        ===============  ======================= ===========================
        GENERAL          mb()                    smp_mb()
        WRITE            wmb()                   smp_wmb()
        READ             rmb()                   smp_rmb()
        DATA DEPENDENCY                          READ_ONCE()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
the value of b before loading a[b]); however, there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems; however, the READ_ONCE()
macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor-compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems.  They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.
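
     As a rough sketch (the variable names are made up for illustration),
     on SMP this behaves like a WRITE_ONCE() of the variable followed by
     smp_mb(), so the store cannot be reordered with a later load:

        smp_store_mb(flag, 1);          /* store to flag, then full barrier */
        r1 = READ_ONCE(other_flag);     /* cannot be hoisted before the
                                         * store to flag */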


 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic functions (such as add, subtract,
     increment and decrement) that don't return a value, especially when
     used for reference counting.  These functions do not imply memory
     barriers.

     These are also used for atomic bitop functions that do not return a
     value (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

        obj->dead = 1;
        smp_mb__before_atomic();
        atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_{t,bitops}.txt for more information.


 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

        if (desc->status != DEVICE_OWN) {
                /* do not read data until we own descriptor */
                dma_rmb();

                /* read/modify data */
                read_data = desc->data;
                desc->data = write_data;

                /* flush modifications before status update */
                dma_wmb();

                /* assign ownership */
                desc->status = DEVICE_OWN;

                /* notify device of new descriptors */
                writel(DESC_NOTIFY, doorbell);
        }

     The dma_rmb() allows us to guarantee the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership.  Note that, when using writel(), a prior
     wmb() is not needed to guarantee that the cache coherent memory writes
     have completed before writing to the MMIO region.  The cheaper
     writel_relaxed() does not provide this guarantee and must not be used
     here.

     See the subsection "Kernel I/O barrier effects" for more information on
     relaxed I/O accessors and the Documentation/DMA-API.txt file for more
     information on consistent memory.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal while asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

        *A = a;
        ACQUIRE M
        RELEASE M
        *B = b;

may occur as:

        ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

        *A = a;
        RELEASE M
        ACQUIRE N
        *B = b;

could occur as:

        ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

        Why does this work?

        One key point is that we are only talking about the CPU doing
        the reordering, not the compiler.  If the compiler (or, for
        that matter, the developer) switched the operations, deadlock
        -could- occur.

        But suppose the CPU reordered the operations.  In this case,
        the unlock precedes the lock in the assembly code.  The CPU
        simply elected to try executing the later lock operation first.
        If there is a deadlock, this lock operation will simply spin (or
        try to sleep, but more on that later).  The CPU will eventually
        execute the unlock operation (which preceded the lock operation
        in the assembly code), which will unravel the potential deadlock,
        allowing the lock operation to succeed.

        But what if the lock is a sleeplock?
In that case, the code will
        try to enter the scheduler, where it will eventually encounter
        a memory barrier, which will force the earlier unlock operation
        to complete, again unraveling the deadlock.  There might be
        a sleep-unlock race, but the locking primitive needs to resolve
        such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP-compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

        *A = a;
        *B = b;
        ACQUIRE
        *C = c;
        *D = d;
        RELEASE
        *E = e;
        *F = f;

The following sequence of events is acceptable:

        ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

        [+] Note that {*F,*A} indicates a combined access.

But none of the following are:

        {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
        *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
        *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
        *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

        for (;;) {
                set_current_state(TASK_UNINTERRUPTIBLE);
                if (event_indicated)
                        break;
                schedule();
        }

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

        CPU 1
        ===============================
        set_current_state();
          smp_store_mb();
            STORE current->state
            <general barrier>
        LOAD event_indicated

set_current_state() may be wrapped by:

        prepare_to_wait();
        prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

        wait_event();
        wait_event_interruptible();
        wait_event_interruptible_exclusive();
        wait_event_interruptible_timeout();
        wait_event_killable();
        wait_event_timeout();
        wait_on_bit();
        wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

        event_indicated = 1;
        wake_up(&event_wait_queue);

or:

        event_indicated = 1;
        wake_up_process(event_daemon);

A general memory barrier is executed by wake_up() if it wakes something up.
If it doesn't wake anything up then a memory barrier may or may not be
executed; you must not rely on it.  The barrier occurs before the task state
is accessed; in particular, it sits between the STORE to indicate the event
and the STORE to set TASK_RUNNING:

        CPU 1 (Sleeper)                 CPU 2 (Waker)
        =============================== ===============================
        set_current_state();            STORE event_indicated
          smp_store_mb();               wake_up();
            STORE current->state          ...
            <general barrier>             <general barrier>
        LOAD event_indicated              if ((LOAD task->state) & TASK_NORMAL)
                                            STORE task->state

where "task" is the thread being woken up and it equals CPU 1's "current".

To repeat, a general memory barrier is guaranteed to be executed by wake_up()
if something is actually awakened, but otherwise there is no such guarantee.
To see this, consider the following sequence of events, where X and Y are both
initially zero:

        CPU 1                           CPU 2
        =============================== ===============================
        X = 1;                          Y = 1;
        smp_mb();                       wake_up();
        LOAD Y                          LOAD X

If a wakeup does occur, one (at least) of the two loads must see 1.  If, on
the other hand, a wakeup does not occur, both loads might see 0.

wake_up_process() always executes a general memory barrier.  The barrier again
occurs before the task state is accessed.  In particular, if the wake_up() in
the previous snippet were replaced by a call to wake_up_process() then one of
the two loads would be guaranteed to see 1.

The available waker functions include:

        complete();
        wake_up();
        wake_up_all();
        wake_up_bit();
        wake_up_interruptible();
        wake_up_interruptible_all();
        wake_up_interruptible_nr();
        wake_up_interruptible_poll();
        wake_up_interruptible_sync();
        wake_up_interruptible_sync_poll();
        wake_up_locked();
        wake_up_locked_poll();
        wake_up_nr();
        wake_up_poll();
        wake_up_process();

In terms of memory ordering, these functions all provide the same guarantees
as a wake_up() (or stronger).

[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

        set_current_state(TASK_INTERRUPTIBLE);
        if (event_indicated)
                break;
        __set_current_state(TASK_RUNNING);
        do_something(my_data);

and the waker does:

        my_data = value;
        event_indicated = 1;
        wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

        set_current_state(TASK_INTERRUPTIBLE);
        if (event_indicated) {
                smp_rmb();
                do_something(my_data);
        }

and the waker should do:

        my_data = value;
        smp_wmb();
        event_indicated = 1;
        wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.
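
     For instance (a minimal sketch; the variables are made up for
     illustration), this means that a store performed before a call to
     schedule() cannot be reordered with a load performed after
     schedule() returns:

        WRITE_ONCE(flag, 1);
        schedule();                     /* implies a full memory barrier */
        r1 = READ_ONCE(other_flag);     /* ordered after the store to flag */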


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

        CPU 1                           CPU 2
        =============================== ===============================
        WRITE_ONCE(*A, a);              WRITE_ONCE(*E, e);
        ACQUIRE M                       ACQUIRE Q
        WRITE_ONCE(*B, b);              WRITE_ONCE(*F, f);
        WRITE_ONCE(*C, c);              WRITE_ONCE(*G, g);
        RELEASE M                       RELEASE Q
        WRITE_ONCE(*D, d);              WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

        *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

        *B, *C or *D preceding ACQUIRE M
        *A, *B or *C following RELEASE M
        *F, *G or *H preceding ACQUIRE Q
        *E, *F or *G following RELEASE Q


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

        struct rw_semaphore {
                ...
                spinlock_t lock;
                struct list_head waiters;
        };

        struct rwsem_waiter {
                struct list_head list;
                struct task_struct *task;
        };

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.
2361 2362In other words, it has to perform this sequence of events: 2363 2364 LOAD waiter->list.next; 2365 LOAD waiter->task; 2366 STORE waiter->task; 2367 CALL wakeup 2368 RELEASE task 2369 2370and if any of these steps occur out of order, then the whole thing may 2371malfunction. 2372 2373Once it has queued itself and dropped the semaphore lock, the waiter does not 2374get the lock again; it instead just waits for its task pointer to be cleared 2375before proceeding. Since the record is on the waiter's stack, this means that 2376if the task pointer is cleared _before_ the next pointer in the list is read, 2377another CPU might start processing the waiter and might clobber the waiter's 2378stack before the up*() function has a chance to read the next pointer. 2379 2380Consider then what might happen to the above sequence of events: 2381 2382 CPU 1 CPU 2 2383 =============================== =============================== 2384 down_xxx() 2385 Queue waiter 2386 Sleep 2387 up_yyy() 2388 LOAD waiter->task; 2389 STORE waiter->task; 2390 Woken up by other event 2391 <preempt> 2392 Resume processing 2393 down_xxx() returns 2394 call foo() 2395 foo() clobbers *waiter 2396 </preempt> 2397 LOAD waiter->list.next; 2398 --- OOPS --- 2399 2400This could be dealt with using the semaphore lock, but then the down_xxx() 2401function has to needlessly get the spinlock again after being woken up. 2402 2403The way to deal with this is to insert a general SMP memory barrier: 2404 2405 LOAD waiter->list.next; 2406 LOAD waiter->task; 2407 smp_mb(); 2408 STORE waiter->task; 2409 CALL wakeup 2410 RELEASE task 2411 2412In this case, the barrier makes a guarantee that all memory accesses before the 2413barrier will appear to happen before all the memory accesses after the barrier 2414with respect to the other CPUs on the system. It does _not_ guarantee that all 2415the memory accesses before the barrier will be complete by the time the barrier 2416instruction itself is complete. 2417 2418On a UP system - where this wouldn't be a problem - the smp_mb() is just a 2419compiler barrier, thus making sure the compiler emits the instructions in the 2420right order without actually intervening in the CPU. Since there's only one 2421CPU, that CPU's dependency ordering logic will take care of everything else. 2422 2423 2424ATOMIC OPERATIONS 2425----------------- 2426 2427While they are technically interprocessor interaction considerations, atomic 2428operations are noted specially as some of them imply full memory barriers and 2429some don't, but they're very heavily relied on as a group throughout the 2430kernel. 2431 2432See Documentation/atomic_t.txt for more information. 2433 2434 2435ACCESSING DEVICES 2436----------------- 2437 2438Many devices can be memory mapped, and so appear to the CPU as if they're just 2439a set of memory locations. To control such a device, the driver usually has to 2440make the right memory accesses in exactly the right order. 2441 2442However, having a clever CPU or a clever compiler creates a potential problem 2443in that the carefully sequenced accesses in the driver code won't reach the 2444device in the requisite order if the CPU or the compiler thinks it is more 2445efficient to reorder, combine or merge accesses - something that would cause 2446the device to malfunction. 2447 2448Inside of the Linux kernel, I/O should be done through the appropriate accessor 2449routines - such as inb() or writel() - which know how to make such accesses 2450appropriately sequential. 
While this, for the most part, renders the explicit 2451use of memory barriers unnecessary, if the accessor functions are used to refer 2452to an I/O memory window with relaxed memory access properties, then _mandatory_ 2453memory barriers are required to enforce ordering. 2454 2455See Documentation/driver-api/device-io.rst for more information. 2456 2457 2458INTERRUPTS 2459---------- 2460 2461A driver may be interrupted by its own interrupt service routine, and thus the 2462two parts of the driver may interfere with each other's attempts to control or 2463access the device. 2464 2465This may be alleviated - at least in part - by disabling local interrupts (a 2466form of locking), such that the critical operations are all contained within 2467the interrupt-disabled section in the driver. While the driver's interrupt 2468routine is executing, the driver's core may not run on the same CPU, and its 2469interrupt is not permitted to happen again until the current interrupt has been 2470handled, thus the interrupt handler does not need to lock against that. 2471 2472However, consider a driver that was talking to an ethernet card that sports an 2473address register and a data register. If that driver's core talks to the card 2474under interrupt-disablement and then the driver's interrupt handler is invoked: 2475 2476 LOCAL IRQ DISABLE 2477 writew(ADDR, 3); 2478 writew(DATA, y); 2479 LOCAL IRQ ENABLE 2480 <interrupt> 2481 writew(ADDR, 4); 2482 q = readw(DATA); 2483 </interrupt> 2484 2485The store to the data register might happen after the second store to the 2486address register if ordering rules are sufficiently relaxed: 2487 2488 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA 2489 2490 2491If ordering rules are relaxed, it must be assumed that accesses done inside an 2492interrupt disabled section may leak outside of it and may interleave with 2493accesses performed in an interrupt - and vice versa - unless implicit or 2494explicit barriers are used. 2495 2496Normally this won't be a problem because the I/O accesses done inside such 2497sections will include synchronous load operations on strictly ordered I/O 2498registers that form implicit I/O barriers. 2499 2500 2501A similar situation may occur between an interrupt routine and two routines 2502running on separate CPUs that communicate with each other. If such a case is 2503likely, then interrupt-disabling locks should be used to guarantee ordering. 2504 2505 2506========================== 2507KERNEL I/O BARRIER EFFECTS 2508========================== 2509 2510Interfacing with peripherals via I/O accesses is deeply architecture and device 2511specific. Therefore, drivers which are inherently non-portable may rely on 2512specific behaviours of their target systems in order to achieve synchronization 2513in the most lightweight manner possible. For drivers intending to be portable 2514between multiple architectures and bus implementations, the kernel offers a 2515series of accessor functions that provide various degrees of ordering 2516guarantees: 2517 2518 (*) readX(), writeX(): 2519 2520 The readX() and writeX() MMIO accessors take a pointer to the 2521 peripheral being accessed as an __iomem * parameter. For pointers 2522 mapped with the default I/O attributes (e.g. those returned by 2523 ioremap()), the ordering guarantees are as follows: 2524 2525 1. All readX() and writeX() accesses to the same peripheral are ordered 2526 with respect to each other. 
This ensures that MMIO register accesses 2527 by the same CPU thread to a particular device will arrive in program 2528 order. 2529 2530 2. A writeX() issued by a CPU thread holding a spinlock is ordered 2531 before a writeX() to the same peripheral from another CPU thread 2532 issued after a later acquisition of the same spinlock. This ensures 2533 that MMIO register writes to a particular device issued while holding 2534 a spinlock will arrive in an order consistent with acquisitions of 2535 the lock. 2536 2537 3. A writeX() by a CPU thread to the peripheral will first wait for the 2538 completion of all prior writes to memory either issued by, or 2539 propagated to, the same thread. This ensures that writes by the CPU 2540 to an outbound DMA buffer allocated by dma_alloc_coherent() will be 2541 visible to a DMA engine when the CPU writes to its MMIO control 2542 register to trigger the transfer. 2543 2544 4. A readX() by a CPU thread from the peripheral will complete before 2545 any subsequent reads from memory by the same thread can begin. This 2546 ensures that reads by the CPU from an incoming DMA buffer allocated 2547 by dma_alloc_coherent() will not see stale data after reading from 2548 the DMA engine's MMIO status register to establish that the DMA 2549 transfer has completed. 2550 2551 5. A readX() by a CPU thread from the peripheral will complete before 2552 any subsequent delay() loop can begin execution on the same thread. 2553 This ensures that two MMIO register writes by the CPU to a peripheral 2554 will arrive at least 1us apart if the first write is immediately read 2555 back with readX() and udelay(1) is called prior to the second 2556 writeX(): 2557 2558 writel(42, DEVICE_REGISTER_0); // Arrives at the device... 2559 readl(DEVICE_REGISTER_0); 2560 udelay(1); 2561 writel(42, DEVICE_REGISTER_1); // ...at least 1us before this. 2562 2563 The ordering properties of __iomem pointers obtained with non-default 2564 attributes (e.g. those returned by ioremap_wc()) are specific to the 2565 underlying architecture and therefore the guarantees listed above cannot 2566 generally be relied upon for accesses to these types of mappings. 2567 2568 (*) readX_relaxed(), writeX_relaxed(): 2569 2570 These are similar to readX() and writeX(), but provide weaker memory 2571 ordering guarantees. Specifically, they do not guarantee ordering with 2572 respect to locking, normal memory accesses or delay() loops (i.e. 2573 bullets 2-5 above) but they are still guaranteed to be ordered with 2574 respect to other accesses from the same CPU thread to the same 2575 peripheral when operating on __iomem pointers mapped with the default 2576 I/O attributes. 2577 2578 (*) readsX(), writesX(): 2579 2580 The readsX() and writesX() MMIO accessors are designed for accessing 2581 register-based, memory-mapped FIFOs residing on peripherals that are not 2582 capable of performing DMA. Consequently, they provide only the ordering 2583 guarantees of readX_relaxed() and writeX_relaxed(), as documented above. 2584 2585 (*) inX(), outX(): 2586 2587 The inX() and outX() accessors are intended to access legacy port-mapped 2588 I/O peripherals, which may require special instructions on some 2589 architectures (notably x86). The port number of the peripheral being 2590 accessed is passed as an argument. 

     Since many CPU architectures ultimately access these peripherals via an
     internal virtual memory mapping, the portable ordering guarantees
     provided by inX() and outX() are the same as those provided by readX()
     and writeX() respectively when accessing a mapping with the default I/O
     attributes.

     Device drivers may expect outX() to emit a non-posted write transaction
     that waits for a completion response from the I/O peripheral before
     returning.  This is not guaranteed by all architectures and is therefore
     not part of the portable ordering semantics.

 (*) insX(), outsX():

     As above, the insX() and outsX() accessors provide the same ordering
     guarantees as readsX() and writesX() respectively when accessing a
     mapping with the default I/O attributes.

 (*) ioreadX(), iowriteX():

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().

With the exception of the string accessors (insX(), outsX(), readsX() and
writesX()), all of the above assume that the underlying peripheral is
little-endian and will therefore perform byte-swapping operations on big-endian
architectures.


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.
2658 2659As far as the way a CPU interacts with another part of the system through the 2660caches goes, the memory system has to include the CPU's caches, and memory 2661barriers for the most part act at the interface between the CPU and its cache 2662(memory barriers logically act on the dotted line in the following diagram): 2663 2664 <--- CPU ---> : <----------- Memory -----------> 2665 : 2666 +--------+ +--------+ : +--------+ +-----------+ 2667 | | | | : | | | | +--------+ 2668 | CPU | | Memory | : | CPU | | | | | 2669 | Core |--->| Access |----->| Cache |<-->| | | | 2670 | | | Queue | : | | | |--->| Memory | 2671 | | | | : | | | | | | 2672 +--------+ +--------+ : +--------+ | | | | 2673 : | Cache | +--------+ 2674 : | Coherency | 2675 : | Mechanism | +--------+ 2676 +--------+ +--------+ : +--------+ | | | | 2677 | | | | : | | | | | | 2678 | CPU | | Memory | : | CPU | | |--->| Device | 2679 | Core |--->| Access |----->| Cache |<-->| | | | 2680 | | | Queue | : | | | | | | 2681 | | | | : | | | | +--------+ 2682 +--------+ +--------+ : +--------+ +-----------+ 2683 : 2684 : 2685 2686Although any particular load or store may not actually appear outside of the 2687CPU that issued it since it may have been satisfied within the CPU's own cache, 2688it will still appear as if the full memory access had taken place as far as the 2689other CPUs are concerned since the cache coherency mechanisms will migrate the 2690cacheline over to the accessing CPU and propagate the effects upon conflict. 2691 2692The CPU core may execute instructions in any order it deems fit, provided the 2693expected program causality appears to be maintained. Some of the instructions 2694generate load and store operations which then go into the queue of memory 2695accesses to be performed. The core may place these in the queue in any order 2696it wishes, and continue execution until it is forced to wait for an instruction 2697to complete. 2698 2699What memory barriers are concerned with is controlling the order in which 2700accesses cross from the CPU side of things to the memory side of things, and 2701the order in which the effects are perceived to happen by the other observers 2702in the system. 2703 2704[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see 2705their own loads and stores as if they had happened in program order. 2706 2707[!] MMIO or other device accesses may bypass the cache system. This depends on 2708the properties of the memory window through which devices are accessed and/or 2709the use of any special device communication instructions the CPU may have. 2710 2711 2712CACHE COHERENCY 2713--------------- 2714 2715Life isn't quite as simple as it may appear above, however: for while the 2716caches are expected to be coherent, there's no guarantee that that coherency 2717will be ordered. This means that while changes made on one CPU will 2718eventually become visible on all CPUs, there's no guarantee that they will 2719become apparent in the same order on those other CPUs. 
2720 2721 2722Consider dealing with a system that has a pair of CPUs (1 & 2), each of which 2723has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D): 2724 2725 : 2726 : +--------+ 2727 : +---------+ | | 2728 +--------+ : +--->| Cache A |<------->| | 2729 | | : | +---------+ | | 2730 | CPU 1 |<---+ | | 2731 | | : | +---------+ | | 2732 +--------+ : +--->| Cache B |<------->| | 2733 : +---------+ | | 2734 : | Memory | 2735 : +---------+ | System | 2736 +--------+ : +--->| Cache C |<------->| | 2737 | | : | +---------+ | | 2738 | CPU 2 |<---+ | | 2739 | | : | +---------+ | | 2740 +--------+ : +--->| Cache D |<------->| | 2741 : +---------+ | | 2742 : +--------+ 2743 : 2744 2745Imagine the system has the following properties: 2746 2747 (*) an odd-numbered cache line may be in cache A, cache C or it may still be 2748 resident in memory; 2749 2750 (*) an even-numbered cache line may be in cache B, cache D or it may still be 2751 resident in memory; 2752 2753 (*) while the CPU core is interrogating one cache, the other cache may be 2754 making use of the bus to access the rest of the system - perhaps to 2755 displace a dirty cacheline or to do a speculative load; 2756 2757 (*) each cache has a queue of operations that need to be applied to that cache 2758 to maintain coherency with the rest of the system; 2759 2760 (*) the coherency queue is not flushed by normal loads to lines already 2761 present in the cache, even though the contents of the queue may 2762 potentially affect those loads. 2763 2764Imagine, then, that two writes are made on the first CPU, with a write barrier 2765between them to guarantee that they will appear to reach that CPU's caches in 2766the requisite order: 2767 2768 CPU 1 CPU 2 COMMENT 2769 =============== =============== ======================================= 2770 u == 0, v == 1 and p == &u, q == &u 2771 v = 2; 2772 smp_wmb(); Make sure change to v is visible before 2773 change to p 2774 <A:modify v=2> v is now in cache A exclusively 2775 p = &v; 2776 <B:modify p=&v> p is now in cache B exclusively 2777 2778The write memory barrier forces the other CPUs in the system to perceive that 2779the local CPU's caches have apparently been updated in the correct order. But 2780now imagine that the second CPU wants to read those values: 2781 2782 CPU 1 CPU 2 COMMENT 2783 =============== =============== ======================================= 2784 ... 2785 q = p; 2786 x = *q; 2787 2788The above pair of reads may then fail to happen in the expected order, as the 2789cacheline holding p may get updated in one of the second CPU's caches while 2790the update to the cacheline holding v is delayed in the other of the second 2791CPU's caches by some other cache event: 2792 2793 CPU 1 CPU 2 COMMENT 2794 =============== =============== ======================================= 2795 u == 0, v == 1 and p == &u, q == &u 2796 v = 2; 2797 smp_wmb(); 2798 <A:modify v=2> <C:busy> 2799 <C:queue v=2> 2800 p = &v; q = p; 2801 <D:request p> 2802 <B:modify p=&v> <D:commit p=&v> 2803 <D:read p> 2804 x = *q; 2805 <C:read *q> Reads from v before v updated in cache 2806 <C:unbusy> 2807 <C:commit v=2> 2808 2809Basically, while both cachelines will be updated on CPU 2 eventually, there's 2810no guarantee that, without intervention, the order of update will be the same 2811as that committed on CPU 1. 2812 2813 2814To intervene, we need to interpolate a data dependency barrier or a read 2815barrier between the loads (which as of v4.15 is supplied unconditionally 2816by the READ_ONCE() macro). 
This will force the cache to commit its
coherency queue before processing any further requests:

        CPU 1           CPU 2           COMMENT
        =============== =============== =======================================
                        u == 0, v == 1 and p == &u, q == &u
        v = 2;
        smp_wmb();
        <A:modify v=2>  <C:busy>
                        <C:queue v=2>
        p = &v;         q = p;
                        <D:request p>
        <B:modify p=&v> <D:commit p=&v>
                        <D:read p>
                        smp_read_barrier_depends()
                        <C:unbusy>
                        <C:commit v=2>
                        x = *q;
                        <C:read *q>     Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
While most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for hardware coordination in the absence of memory barriers, which
permitted Alpha to sport higher CPU clock rates back in the day.  However,
please note that (again, as of v4.15) smp_read_barrier_depends() should not
be used except in Alpha arch-specific code and within the READ_ONCE() macro.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/core-api/cachetlb.rst for more information on cache
management.  (A rough sketch of how drivers typically arrange this appears
at the end of this section, after the MMIO discussion below.)


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
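
Returning to the DMA case above: portable drivers do not usually flush or
invalidate CPU caches by hand; the streaming DMA mapping API performs whatever
cache maintenance the architecture requires.  A rough sketch only (the device,
buffer and length are illustrative, and error checking is omitted):

        /* Hand a CPU-written buffer to the device; the mapping writes
         * back (flushes) any dirty cachelines if the architecture
         * requires it. */
        dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

        /* ... tell the device about 'handle' and start the transfer ... */

        /* Once the device has finished with the buffer, unmap it before
         * the CPU touches it again. */
        dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);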
2884 2885 2886========================= 2887THE THINGS CPUS GET UP TO 2888========================= 2889 2890A programmer might take it for granted that the CPU will perform memory 2891operations in exactly the order specified, so that if the CPU is, for example, 2892given the following piece of code to execute: 2893 2894 a = READ_ONCE(*A); 2895 WRITE_ONCE(*B, b); 2896 c = READ_ONCE(*C); 2897 d = READ_ONCE(*D); 2898 WRITE_ONCE(*E, e); 2899 2900they would then expect that the CPU will complete the memory operation for each 2901instruction before moving on to the next one, leading to a definite sequence of 2902operations as seen by external observers in the system: 2903 2904 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E. 2905 2906 2907Reality is, of course, much messier. With many CPUs and compilers, the above 2908assumption doesn't hold because: 2909 2910 (*) loads are more likely to need to be completed immediately to permit 2911 execution progress, whereas stores can often be deferred without a 2912 problem; 2913 2914 (*) loads may be done speculatively, and the result discarded should it prove 2915 to have been unnecessary; 2916 2917 (*) loads may be done speculatively, leading to the result having been fetched 2918 at the wrong time in the expected sequence of events; 2919 2920 (*) the order of the memory accesses may be rearranged to promote better use 2921 of the CPU buses and caches; 2922 2923 (*) loads and stores may be combined to improve performance when talking to 2924 memory or I/O hardware that can do batched accesses of adjacent locations, 2925 thus cutting down on transaction setup costs (memory and PCI devices may 2926 both be able to do this); and 2927 2928 (*) the CPU's data cache may affect the ordering, and while cache-coherency 2929 mechanisms may alleviate this - once the store has actually hit the cache 2930 - there's no guarantee that the coherency management will be propagated in 2931 order to other CPUs. 2932 2933So what another CPU, say, might actually observe from the above piece of code 2934is: 2935 2936 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B 2937 2938 (Where "LOAD {*C,*D}" is a combined load) 2939 2940 2941However, it is guaranteed that a CPU will be self-consistent: it will see its 2942_own_ accesses appear to be correctly ordered, without the need for a memory 2943barrier. For instance with the following code: 2944 2945 U = READ_ONCE(*A); 2946 WRITE_ONCE(*A, V); 2947 WRITE_ONCE(*A, W); 2948 X = READ_ONCE(*A); 2949 WRITE_ONCE(*A, Y); 2950 Z = READ_ONCE(*A); 2951 2952and assuming no intervention by an external influence, it can be assumed that 2953the final result will appear to be: 2954 2955 U == the original value of *A 2956 X == W 2957 Z == Y 2958 *A == Y 2959 2960The code above may cause the CPU to generate the full sequence of memory 2961accesses: 2962 2963 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A 2964 2965in that order, but, without intervention, the sequence may have almost any 2966combination of elements combined or discarded, provided the program's view 2967of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE() 2968are -not- optional in the above example, as there are architectures 2969where a given CPU might reorder successive loads to the same location. 
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

        *A = V;
        *A = W;

may be reduced to:

        *A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

        *A = Y;
        Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

        *A = Y;
        Z = Y;

and the LOAD operation need never appear outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory model, although as of v4.15
the Linux kernel's addition of smp_read_barrier_depends() to READ_ONCE()
greatly reduced Alpha's impact on the memory model.

See the subsection on "Cache Coherency" above.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to smp_mb() etc counterparts in all other respects;
in particular, they do not control MMIO effects: to control
MMIO effects, use mandatory barriers.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

        Documentation/core-api/circular-buffers.rst

for details.
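
As a taster, here is a minimal producer/consumer sketch along those lines
(the 'buf' structure is illustrative only, with a power-of-2 'size'; see the
file above for the actual requirements, and linux/circ_buf.h for the
CIRC_SPACE() and CIRC_CNT() helpers):

        /* Producer: fill the slot first, then publish it with a
         * store-release on the head index. */
        unsigned long head = buf->head;
        unsigned long tail = READ_ONCE(buf->tail);

        if (CIRC_SPACE(head, tail, buf->size) >= 1) {
                buf->data[head] = item;
                smp_store_release(&buf->head,
                                  (head + 1) & (buf->size - 1));
        }

        /* Consumer (separate context): read the head index with a
         * load-acquire, read the slot, then free the slot with a
         * store-release on the tail index. */
        unsigned long head = smp_load_acquire(&buf->head);
        unsigned long tail = buf->tail;

        if (CIRC_CNT(head, tail, buf->size) >= 1) {
                item = buf->data[tail];
                smp_store_release(&buf->tail,
                                  (tail + 1) & (buf->size - 1));
        }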
3051 3052 3053========== 3054REFERENCES 3055========== 3056 3057Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek, 3058Digital Press) 3059 Chapter 5.2: Physical Address Space Characteristics 3060 Chapter 5.4: Caches and Write Buffers 3061 Chapter 5.5: Data Sharing 3062 Chapter 5.6: Read/Write Ordering 3063 3064AMD64 Architecture Programmer's Manual Volume 2: System Programming 3065 Chapter 7.1: Memory-Access Ordering 3066 Chapter 7.4: Buffering and Combining Memory Writes 3067 3068ARM Architecture Reference Manual (ARMv8, for ARMv8-A architecture profile) 3069 Chapter B2: The AArch64 Application Level Memory Model 3070 3071IA-32 Intel Architecture Software Developer's Manual, Volume 3: 3072System Programming Guide 3073 Chapter 7.1: Locked Atomic Operations 3074 Chapter 7.2: Memory Ordering 3075 Chapter 7.4: Serializing Instructions 3076 3077The SPARC Architecture Manual, Version 9 3078 Chapter 8: Memory Models 3079 Appendix D: Formal Specification of the Memory Models 3080 Appendix J: Programming with the Memory Models 3081 3082Storage in the PowerPC (Stone and Fitzgerald) 3083 3084UltraSPARC Programmer Reference Manual 3085 Chapter 5: Memory Accesses and Cacheability 3086 Chapter 15: Sparc-V9 Memory Models 3087 3088UltraSPARC III Cu User's Manual 3089 Chapter 9: Memory Models 3090 3091UltraSPARC IIIi Processor User's Manual 3092 Chapter 8: Memory Models 3093 3094UltraSPARC Architecture 2005 3095 Chapter 9: Memory 3096 Appendix D: Formal Specifications of the Memory Models 3097 3098UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005 3099 Chapter 8: Memory Models 3100 Appendix F: Caches and Cache Coherency 3101 3102Solaris Internals, Core Kernel Architecture, p63-68: 3103 Chapter 3.3: Hardware Considerations for Locks and 3104 Synchronization 3105 3106Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching 3107for Kernel Programmers: 3108 Chapter 13: Other Memory Models 3109 3110Intel Itanium Architecture Software Developer's Manual: Volume 1: 3111 Section 2.6: Speculation 3112 Section 4.4: Memory Access 3113