			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
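
In kernel code, such device registers are normally reached through the I/O
accessors rather than plain pointer dereferences.  As a minimal sketch --
assuming a hypothetical device whose address and data ports live at the
made-up offsets DEV_ADDR_PORT and DEV_DATA_PORT within an ioremap()ed
region -- the indirect register read might look like this, with readl() and
writel() (see "KERNEL I/O BARRIER EFFECTS" later) keeping the two MMIO
accesses in the intended order with respect to the device:

	/* Hypothetical indirect register read via address/data ports. */
	static u32 dev_read_reg(void __iomem *base, u32 reg)
	{
		writel(reg, base + DEV_ADDR_PORT);	/* select register... */
		return readl(base + DEV_DATA_PORT);	/* ...then read it */
	}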


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha.  The READ_ONCE()
     is required to prevent compiler mischief.  Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends().

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = READ_ONCE(*X); WRITE_ONCE(*X, b);

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	WRITE_ONCE(*X, c); d = READ_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want
     with memory references that are not protected by READ_ONCE() and
     WRITE_ONCE().  Without them, the compiler is within its rights to
     do all sorts of "creative" transformations, which are covered in
     the Compiler Barrier section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
	STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4)};

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4)} = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field (see the sketch
     after this list).

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations.  The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration.  It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.
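
As a minimal sketch of the bitfield anti-guarantees -- the field and lock
names here are hypothetical -- consider two bits that share one memory
location:

	struct foo {
		int has_data : 1;	/* protected by data_lock */
		int is_dead  : 1;	/* protected by dead_lock: BUG! */
	};

Because both bits occupy the same word, the compiler may implement an update
of either one as a non-atomic read-modify-write of that whole word, so an
update made under data_lock can silently overwrite a concurrent update made
under dead_lock.  Either one lock must protect both fields, or the fields
must be moved into separate memory locations.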


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and smp_load_acquire()
     operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.


Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP barrier pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============       ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1		      CPU 2
	===============       ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	WRITE_ONCE(P, &B);
			      Q = READ_ONCE(P);
			      <data dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

	CPU 1		      CPU 2
	===============       ===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	WRITE_ONCE(P, 1);
			      Q = READ_ONCE(P);
			      <data dependency barrier>
			      D = M[Q];


The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

See also the subsection on "Cache Coherency" for a more thorough example.
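
As a minimal sketch of that RCU pattern -- the structure, variable and helper
names here are hypothetical -- the writer publishes a fully initialised
object and the reader dereferences it, with the write barrier and the data
dependency barrier supplied implicitly:

	struct foo {
		int a;
	};
	struct foo __rcu *gp;		/* RCU-protected global pointer */

	void publish(void)
	{
		struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

		if (!p)
			return;
		p->a = 42;			/* initialise first... */
		rcu_assign_pointer(gp, p);	/* ...then publish; implies
						 * the <write barrier> */
	}

	void reader(void)
	{
		struct foo *q;

		rcu_read_lock();
		q = rcu_dereference(gp);	/* implies the <data
						 * dependency barrier> */
		if (q)
			do_something_with(q->a);
		rcu_read_unlock();
	}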


CONTROL DEPENDENCIES
--------------------

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly.  Consider the
following bit of code:

	q = READ_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = READ_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

	q = READ_ONCE(a);
	if (q) {
		<read barrier>
		p = READ_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
	}

Control dependencies pair normally with other types of barriers.  That
said, please note that READ_ONCE() is not optional!  Without the
READ_ONCE(), the compiler might combine the load from 'a' with other
loads from 'a', and the store to 'b' with other stores to 'b', with
possible highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = p;  /* BUG: Compiler and CPU can both reorder!!! */

So don't leave out the READ_ONCE().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE(a);
	if (q) {
		barrier();
		WRITE_ONCE(b, p);
		do_something();
	} else {
		barrier();
		WRITE_ONCE(b, p);
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE(a);
	barrier();
	WRITE_ONCE(b, p);  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something();
	} else {
		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = READ_ONCE(a);
	if (q) {
		smp_store_release(&b, p);
		do_something();
	} else {
		smp_store_release(&b, p);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE(a);
	if (q) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}

The initial READ_ONCE() is still required to prevent the compiler from
proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = READ_ONCE(a);
	if (q % MAX) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE(a);
	WRITE_ONCE(b, r);
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = READ_ONCE(a);
	BUILD_BUG_ON(MAX <= 1);  /* Order load from a with store to b. */
	if (q % MAX) {
		WRITE_ONCE(b, p);
		do_something();
	} else {
		WRITE_ONCE(b, r);
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

	q = READ_ONCE(a);
	if (q || 1 > 0)
		WRITE_ONCE(b, 1);

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows, defeating
the control dependency:

	q = READ_ONCE(a);
	WRITE_ONCE(b, 1);

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although READ_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

Finally, control dependencies do -not- provide transitivity.  This is
demonstrated by two related examples, with the initial values of
x and y both being zero:

	CPU 0			  CPU 1
	=======================   =======================
	r1 = READ_ONCE(x);	  r2 = READ_ONCE(y);
	if (r1 > 0)		  if (r2 > 0)
		WRITE_ONCE(y, 1);	  WRITE_ONCE(x, 1);

	assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following CPU would guarantee a related assertion:

	CPU 2
	=====================
	WRITE_ONCE(x, 2);

	assert(!(r1 == 2 && r2 == 1 && x == 2));  /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the above
assertion can fail after the combined three-CPU example completes.  If you
need the three-CPU example to provide ordering, you will need smp_mb()
between the loads and stores in the CPU 0 and CPU 1 code fragments,
that is, just before or just after the "if" statements.  Furthermore,
the original two-CPU example is very fragile and should be avoided.
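
For reference, here is a sketch of that smp_mb()-based fix, with the full
(and therefore transitive) barrier placed between each load and its "if"
statement:

	CPU 0			  CPU 1
	=======================   =======================
	r1 = READ_ONCE(x);	  r2 = READ_ONCE(y);
	smp_mb();		  smp_mb();
	if (r1 > 0)		  if (r2 > 0)
		WRITE_ONCE(y, 1);	  WRITE_ONCE(x, 1);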

These two examples are the LB and WWC litmus tests from this paper:
http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything.  If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores to
     the same variable, then those stores must be ordered, either by
     preceding both of them with smp_mb() or by using smp_store_release()
     to carry out the stores.  Please note that it is -not- sufficient
     to use barrier() at the beginning of each leg of the "if" statement
     because, as shown by the example above, optimizing compilers can
     destroy the control dependency while respecting the letter of the
     barrier() law.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load.  If the compiler is able
     to optimize the conditional away, it will have also optimized
     away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
     can help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence.  Careful use of READ_ONCE() or
     atomic{,64}_read() can help to preserve your control dependency.
     Please see the Compiler Barrier section for more information.

 (*) Control dependencies pair normally with other types of barriers.

 (*) Control dependencies do -not- provide transitivity.  If you
     need transitivity, use smp_mb().


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without transitivity.  An acquire barrier
pairs with a release barrier, but both may also pair with other barriers,
including of course general barriers.  A write barrier pairs with a data
dependency barrier, a control dependency, an acquire barrier, a release
barrier, a read barrier, or a general barrier.  Similarly a read barrier,
control dependency, or a data dependency barrier pairs with a write
barrier, an acquire barrier, a release barrier, or a general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	WRITE_ONCE(a, 1);
	<write barrier>
	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
			      <read barrier>
			      y = READ_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
			      <data dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = READ_ONCE(y);
	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
				      <implicit control dependency>
				      WRITE_ONCE(y, 1);
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
	<write barrier>            \        <read barrier>
	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
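
Expressed with the kernel's primitives, the first pairing above might look
like the following sketch (the flag and data variables are hypothetical):
the producer's smp_wmb() pairs with the consumer's smp_rmb():

	int data, flag;

	void producer(void)
	{
		WRITE_ONCE(data, 42);
		smp_wmb();		/* order the store to data... */
		WRITE_ONCE(flag, 1);	/* ...before the store to flag */
	}

	void consumer(void)
	{
		if (READ_ONCE(flag)) {
			smp_rmb();	/* pairs with producer's smp_wmb() */
			BUG_ON(READ_ONCE(data) != 42);	/* must see 42 */
		}
	}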


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.
Consider the following sequence of events:

	CPU 1			  CPU 2
	=======================   =======================
	{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		  LOAD X
	STORE D = 4		  LOAD C (gets &B)
				  LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds  --->   \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			  CPU 2
	=======================   =======================
	{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		  LOAD X
	STORE D = 4		  LOAD C (gets &B)
				  <data dependency barrier>
				  LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to              ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.
Consider the following sequence of events:

	CPU 1			  CPU 2
	=======================   =======================
	{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				  LOAD B
				  LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			  CPU 2
	=======================   =======================
	{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				  LOAD B
				  <read barrier>
				  LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			  CPU 2
	=======================   =======================
	{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				  LOAD B
				  LOAD A [first load of A]
				  <read barrier>
				  LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This
permits the actual load instruction to potentially complete immediately
because the CPU already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			  CPU 2
	=======================   =======================
				  LOAD B
				  DIVIDE  } Divide instructions generally
				  DIVIDE  } take a long time to perform
				  LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			  CPU 2
	=======================   =======================
				  LOAD B
				  DIVIDE
				  DIVIDE
				  <read barrier>
				  LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
always provided by real computer systems.  The following example
demonstrates transitivity (also called "cumulativity"):

	CPU 1			  CPU 2			    CPU 3
	=======================   =======================   =======================
		{ X = 0, Y = 0 }
	STORE X=1		  LOAD X		    STORE Y=1
				  <general barrier>	    <general barrier>
				  LOAD Y		    LOAD X

Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense precedes CPU 3's
store to Y.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 2's load from X in some sense came after CPU 1's store, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation is an example of transitivity: if a load executing on
CPU A follows a load from the same variable executing on CPU B, then
CPU A's load must either return the same value that CPU B's load did,
or must return some later value.

In the Linux kernel, use of general memory barriers guarantees
transitivity.  Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.

However, transitivity is -not- guaranteed for read or write barriers.
For example, suppose that CPU 2's general barrier in the above example
is changed to a read barrier as shown below:

	CPU 1			  CPU 2			    CPU 3
	=======================   =======================   =======================
		{ X = 0, Y = 0 }
	STORE X=1		  LOAD X		    STORE Y=1
				  <read barrier>	    <general barrier>
				  LOAD Y		    LOAD X

This substitution destroys transitivity: in this example, it is perfectly
legal for CPU 2's load from X to return 1, its load from Y to return 0,
and CPU 3's load from X to return 0.

The key point is that although CPU 2's read barrier orders its pair
of loads, it does not guarantee to order CPU 1's store.  Therefore, if
this example runs on a system where CPUs 1 and 2 share a store buffer
or a level of cache, CPU 2 might have early access to CPU 1's writes.
General barriers are therefore required to ensure that all CPUs agree
on the combined order of CPU 1's and CPU 2's accesses.

To reiterate, if your code requires transitivity, use general barriers
throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

 (*) Compiler barrier.

 (*) CPU memory barriers.

 (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.
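
As a minimal sketch of that second property -- stop_flag and do_work() are
hypothetical -- without the barrier(), the compiler could hoist the load of
stop_flag out of the loop and spin forever on a value cached in a register:

	while (!stop_flag) {		/* stop_flag is set by another CPU */
		do_work();
		barrier();		/* force stop_flag to be reloaded */
	}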

The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code.  Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = READ_ONCE(x);
	a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch.  The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'.  If variable 'a' is shared, then the
     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	/* Code that does not store to variable a. */
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	WRITE_ONCE(a, 0);
	/* Code that does not store to variable a. */
	WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.
     For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following, in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}

	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels, in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective:  With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing", in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having 16-bit store instructions
     with 7-bit immediate fields, the compiler might be tempted to use
     two 16-bit store-immediate instructions to implement the following
     32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

	WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect when
their argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE              MANDATORY                SMP CONDITIONAL
	===============   =======================  ===========================
	GENERAL           mb()                     smp_mb()
	WRITE             wmb()                    smp_wmb()
	READ              rmb()                    smp_rmb()
	DATA DEPENDENCY   read_barrier_depends()   smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. `a[b]` would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems, however the READ_ONCE()
macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems.  They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.
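
     A sketch of roughly equivalent code under that guarantee (not
     necessarily the actual implementation):

	WRITE_ONCE(var, value);
	smp_mb();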

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems.  They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.


 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic (such as add, subtract, increment and
     decrement) functions that don't return a value, especially when used for
     reference counting.  These functions do not imply memory barriers.

     These are also used for atomic bitop functions that do not return a
     value (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


 (*) lockless_dereference();

     This can be thought of as a pointer-fetch wrapper around the
     smp_read_barrier_depends() data-dependency barrier.

     This is also similar to rcu_dereference(), but in cases where
     object lifetime is handled by some mechanism other than RCU, for
     example, when the objects are removed only when the system goes down.
     In addition, lockless_dereference() is used in some data structures
     that can be used both with and without RCU.


 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* force memory to sync before notifying device via MMIO */
		wmb();

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership.  The wmb() is needed to guarantee that the
     cache coherent memory writes have completed before attempting a write to
     the cache incoherent MMIO region.

     See Documentation/DMA-API.txt for more information on consistent memory.


MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Acquires vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


ACQUIRING FUNCTIONS
-------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE"
operations for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
     combined with a following ACQUIRE, orders prior stores against
     subsequent loads and stores.  Note that this is weaker than smp_mb()!
     The smp_mb__before_spinlock() primitive is free on many architectures.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to happen
after the ACQUIRE, and an access following the RELEASE to happen before the
RELEASE, and the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.
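
To make the one-way nature of these barriers concrete, consider the following
sketch, in which 'lock', 'inside' and 'outside' are hypothetical variables:

	WRITE_ONCE(outside, 1);		/* may seep into the section */
	spin_lock(&lock);		/* ACQUIRE: a one-way barrier */
	WRITE_ONCE(inside, 1);
	spin_unlock(&lock);		/* RELEASE: a one-way barrier */

	/*
	 * Another CPU may therefore observe the sequence:
	 *
	 *	ACQUIRE, STORE inside, STORE outside, RELEASE
	 *
	 * but it will never see the STORE to 'inside' precede the
	 * ACQUIRE or follow the RELEASE.
	 */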

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some other
means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if they
wake something up.
The barrier occurs before the task state is cleared, and so sits 2004between the STORE to indicate the event and the STORE to set TASK_RUNNING: 2005 2006 CPU 1 CPU 2 2007 =============================== =============================== 2008 set_current_state(); STORE event_indicated 2009 smp_store_mb(); wake_up(); 2010 STORE current->state <write barrier> 2011 <general barrier> STORE current->state 2012 LOAD event_indicated 2013 2014To repeat, this write memory barrier is present if and only if something 2015is actually awakened. To see this, consider the following sequence of 2016events, where X and Y are both initially zero: 2017 2018 CPU 1 CPU 2 2019 =============================== =============================== 2020 X = 1; STORE event_indicated 2021 smp_mb(); wake_up(); 2022 Y = 1; wait_event(wq, Y == 1); 2023 wake_up(); load from Y sees 1, no memory barrier 2024 load from X might see 0 2025 2026In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed 2027to see 1. 2028 2029The available waker functions include: 2030 2031 complete(); 2032 wake_up(); 2033 wake_up_all(); 2034 wake_up_bit(); 2035 wake_up_interruptible(); 2036 wake_up_interruptible_all(); 2037 wake_up_interruptible_nr(); 2038 wake_up_interruptible_poll(); 2039 wake_up_interruptible_sync(); 2040 wake_up_interruptible_sync_poll(); 2041 wake_up_locked(); 2042 wake_up_locked_poll(); 2043 wake_up_nr(); 2044 wake_up_poll(); 2045 wake_up_process(); 2046 2047 2048[!] Note that the memory barriers implied by the sleeper and the waker do _not_ 2049order multiple stores before the wake-up with respect to loads of those stored 2050values after the sleeper has called set_current_state(). For instance, if the 2051sleeper does: 2052 2053 set_current_state(TASK_INTERRUPTIBLE); 2054 if (event_indicated) 2055 break; 2056 __set_current_state(TASK_RUNNING); 2057 do_something(my_data); 2058 2059and the waker does: 2060 2061 my_data = value; 2062 event_indicated = 1; 2063 wake_up(&event_wait_queue); 2064 2065there's no guarantee that the change to event_indicated will be perceived by 2066the sleeper as coming after the change to my_data. In such a circumstance, the 2067code on both sides must interpolate its own memory barriers between the 2068separate data accesses. Thus the above sleeper ought to do: 2069 2070 set_current_state(TASK_INTERRUPTIBLE); 2071 if (event_indicated) { 2072 smp_rmb(); 2073 do_something(my_data); 2074 } 2075 2076and the waker should do: 2077 2078 my_data = value; 2079 smp_wmb(); 2080 event_indicated = 1; 2081 wake_up(&event_wait_queue); 2082 2083 2084MISCELLANEOUS FUNCTIONS 2085----------------------- 2086 2087Other functions that imply barriers: 2088 2089 (*) schedule() and similar imply full memory barriers. 2090 2091 2092=================================== 2093INTER-CPU ACQUIRING BARRIER EFFECTS 2094=================================== 2095 2096On SMP systems locking primitives give a more substantial form of barrier: one 2097that does affect memory access ordering on other CPUs, within the context of 2098conflict on any particular lock. 


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q



ACQUIRES VS I/O ACCESSES
------------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.
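
As a concrete illustration, the wake-up step above might be sketched in C as
follows.  This is illustrative only - it is not the actual rwsem
implementation - but the ordering requirement is the real one:

	/* Returns the next waiter so the caller can continue the walk. */
	static struct list_head *wake_one_waiter(struct rwsem_waiter *waiter)
	{
		struct list_head *next = waiter->list.next;	/* step (1) */
		struct task_struct *tsk = waiter->task;		/* step (2) */

		smp_mb();		/* the loads above must happen... */
		waiter->task = NULL;	/* ...before this STORE (step (3)) */
		wake_up_process(tsk);	/* step (4) */
		put_task_struct(tsk);	/* step (5) */
		return next;
	}

The smp_mb() is what stops the CPU from reordering the STORE to waiter->task
- which permits the waiter's stack to be reused - ahead of the LOADs from
that same stack.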

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	atomic_xchg();			atomic_long_xchg();
	atomic_inc_return();		atomic_long_inc_return();
	atomic_dec_return();		atomic_long_dec_return();
	atomic_add_return();		atomic_long_add_return();
	atomic_sub_return();		atomic_long_sub_return();
	atomic_inc_and_test();		atomic_long_inc_and_test();
	atomic_dec_and_test();		atomic_long_dec_and_test();
	atomic_sub_and_test();		atomic_long_sub_and_test();
	atomic_add_negative();		atomic_long_add_negative();
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

	/* when succeeds */
	cmpxchg();
	atomic_cmpxchg();		atomic_long_cmpxchg();
	atomic_add_unless();		atomic_long_add_unless();

These are used for such things as implementing ACQUIRE-class and RELEASE-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.


The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as RELEASE-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_atomic() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.  A sketch of the lock-construction case appears
below.
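
For instance, a test-and-set lock might be sketched as follows.  This is an
illustrative toy only - the kernel's real spinlocks are considerably more
sophisticated - but it shows how barrier-implying and non-barrier-implying
atomic operations end up paired:

	struct toy_lock {
		atomic_t val;		/* 0 == unlocked, 1 == locked */
	};

	static void toy_lock_acquire(struct toy_lock *lock)
	{
		/*
		 * A successful atomic_cmpxchg() implies smp_mb() on each
		 * side of the operation, so accesses in the critical
		 * section cannot appear to precede the acquisition.
		 */
		while (atomic_cmpxchg(&lock->val, 0, 1) != 0)
			cpu_relax();
	}

	static void toy_lock_release(struct toy_lock *lock)
	{
		/*
		 * atomic_set() implies no barrier at all, so one must be
		 * supplied explicitly to keep the critical section's
		 * accesses from appearing to follow the release.
		 */
		smp_mb__before_atomic();
		atomic_set(&lock->val, 0);
	}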

The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.

[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver talking to an ethernet card that sports an address
register and a data register.
If that driver's core talks to the card under interrupt-disablement and then
the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
	 example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed(), writeX_relaxed()

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering with
     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
     ordering with respect to LOCK or UNLOCK operations.  If the latter is
     required, an mmiowb() barrier can be used.  Note that relaxed accesses to
     the same peripheral are guaranteed to be ordered with respect to each
     other.

 (*) ioreadX(), iowriteX()

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.
2598 2599As far as the way a CPU interacts with another part of the system through the 2600caches goes, the memory system has to include the CPU's caches, and memory 2601barriers for the most part act at the interface between the CPU and its cache 2602(memory barriers logically act on the dotted line in the following diagram): 2603 2604 <--- CPU ---> : <----------- Memory -----------> 2605 : 2606 +--------+ +--------+ : +--------+ +-----------+ 2607 | | | | : | | | | +--------+ 2608 | CPU | | Memory | : | CPU | | | | | 2609 | Core |--->| Access |----->| Cache |<-->| | | | 2610 | | | Queue | : | | | |--->| Memory | 2611 | | | | : | | | | | | 2612 +--------+ +--------+ : +--------+ | | | | 2613 : | Cache | +--------+ 2614 : | Coherency | 2615 : | Mechanism | +--------+ 2616 +--------+ +--------+ : +--------+ | | | | 2617 | | | | : | | | | | | 2618 | CPU | | Memory | : | CPU | | |--->| Device | 2619 | Core |--->| Access |----->| Cache |<-->| | | | 2620 | | | Queue | : | | | | | | 2621 | | | | : | | | | +--------+ 2622 +--------+ +--------+ : +--------+ +-----------+ 2623 : 2624 : 2625 2626Although any particular load or store may not actually appear outside of the 2627CPU that issued it since it may have been satisfied within the CPU's own cache, 2628it will still appear as if the full memory access had taken place as far as the 2629other CPUs are concerned since the cache coherency mechanisms will migrate the 2630cacheline over to the accessing CPU and propagate the effects upon conflict. 2631 2632The CPU core may execute instructions in any order it deems fit, provided the 2633expected program causality appears to be maintained. Some of the instructions 2634generate load and store operations which then go into the queue of memory 2635accesses to be performed. The core may place these in the queue in any order 2636it wishes, and continue execution until it is forced to wait for an instruction 2637to complete. 2638 2639What memory barriers are concerned with is controlling the order in which 2640accesses cross from the CPU side of things to the memory side of things, and 2641the order in which the effects are perceived to happen by the other observers 2642in the system. 2643 2644[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see 2645their own loads and stores as if they had happened in program order. 2646 2647[!] MMIO or other device accesses may bypass the cache system. This depends on 2648the properties of the memory window through which devices are accessed and/or 2649the use of any special device communication instructions the CPU may have. 2650 2651 2652CACHE COHERENCY 2653--------------- 2654 2655Life isn't quite as simple as it may appear above, however: for while the 2656caches are expected to be coherent, there's no guarantee that that coherency 2657will be ordered. This means that whilst changes made on one CPU will 2658eventually become visible on all CPUs, there's no guarantee that they will 2659become apparent in the same order on those other CPUs. 
2660 2661 2662Consider dealing with a system that has a pair of CPUs (1 & 2), each of which 2663has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D): 2664 2665 : 2666 : +--------+ 2667 : +---------+ | | 2668 +--------+ : +--->| Cache A |<------->| | 2669 | | : | +---------+ | | 2670 | CPU 1 |<---+ | | 2671 | | : | +---------+ | | 2672 +--------+ : +--->| Cache B |<------->| | 2673 : +---------+ | | 2674 : | Memory | 2675 : +---------+ | System | 2676 +--------+ : +--->| Cache C |<------->| | 2677 | | : | +---------+ | | 2678 | CPU 2 |<---+ | | 2679 | | : | +---------+ | | 2680 +--------+ : +--->| Cache D |<------->| | 2681 : +---------+ | | 2682 : +--------+ 2683 : 2684 2685Imagine the system has the following properties: 2686 2687 (*) an odd-numbered cache line may be in cache A, cache C or it may still be 2688 resident in memory; 2689 2690 (*) an even-numbered cache line may be in cache B, cache D or it may still be 2691 resident in memory; 2692 2693 (*) whilst the CPU core is interrogating one cache, the other cache may be 2694 making use of the bus to access the rest of the system - perhaps to 2695 displace a dirty cacheline or to do a speculative load; 2696 2697 (*) each cache has a queue of operations that need to be applied to that cache 2698 to maintain coherency with the rest of the system; 2699 2700 (*) the coherency queue is not flushed by normal loads to lines already 2701 present in the cache, even though the contents of the queue may 2702 potentially affect those loads. 2703 2704Imagine, then, that two writes are made on the first CPU, with a write barrier 2705between them to guarantee that they will appear to reach that CPU's caches in 2706the requisite order: 2707 2708 CPU 1 CPU 2 COMMENT 2709 =============== =============== ======================================= 2710 u == 0, v == 1 and p == &u, q == &u 2711 v = 2; 2712 smp_wmb(); Make sure change to v is visible before 2713 change to p 2714 <A:modify v=2> v is now in cache A exclusively 2715 p = &v; 2716 <B:modify p=&v> p is now in cache B exclusively 2717 2718The write memory barrier forces the other CPUs in the system to perceive that 2719the local CPU's caches have apparently been updated in the correct order. But 2720now imagine that the second CPU wants to read those values: 2721 2722 CPU 1 CPU 2 COMMENT 2723 =============== =============== ======================================= 2724 ... 2725 q = p; 2726 x = *q; 2727 2728The above pair of reads may then fail to happen in the expected order, as the 2729cacheline holding p may get updated in one of the second CPU's caches whilst 2730the update to the cacheline holding v is delayed in the other of the second 2731CPU's caches by some other cache event: 2732 2733 CPU 1 CPU 2 COMMENT 2734 =============== =============== ======================================= 2735 u == 0, v == 1 and p == &u, q == &u 2736 v = 2; 2737 smp_wmb(); 2738 <A:modify v=2> <C:busy> 2739 <C:queue v=2> 2740 p = &v; q = p; 2741 <D:request p> 2742 <B:modify p=&v> <D:commit p=&v> 2743 <D:read p> 2744 x = *q; 2745 <C:read *q> Reads from v before v updated in cache 2746 <C:unbusy> 2747 <C:commit v=2> 2748 2749Basically, whilst both cachelines will be updated on CPU 2 eventually, there's 2750no guarantee that, without intervention, the order of update will be the same 2751as that committed on CPU 1. 2752 2753 2754To intervene, we need to interpolate a data dependency barrier or a read 2755barrier between the loads. 
This will force the cache to
commit its coherency queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
			u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha remove the
need for coordination in the absence of memory barriers.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
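
Returning to the DMA case above: in practice the flushing and invalidation
just described is rarely done by hand; the streaming DMA mapping API performs
it as part of mapping and unmapping.  A minimal sketch, in which 'dev', 'buf'
and 'len' are hypothetical, of handing a buffer to a device on a potentially
non-coherent system:

	dma_addr_t handle;

	/* On a non-coherent system this flushes the CPU's cache. */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... point the device at 'handle' and start the transfer ... */

	/* Any needed invalidation happens when the buffer is taken back. */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

See Documentation/DMA-API.txt for the full API.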
2820 2821 2822========================= 2823THE THINGS CPUS GET UP TO 2824========================= 2825 2826A programmer might take it for granted that the CPU will perform memory 2827operations in exactly the order specified, so that if the CPU is, for example, 2828given the following piece of code to execute: 2829 2830 a = READ_ONCE(*A); 2831 WRITE_ONCE(*B, b); 2832 c = READ_ONCE(*C); 2833 d = READ_ONCE(*D); 2834 WRITE_ONCE(*E, e); 2835 2836they would then expect that the CPU will complete the memory operation for each 2837instruction before moving on to the next one, leading to a definite sequence of 2838operations as seen by external observers in the system: 2839 2840 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E. 2841 2842 2843Reality is, of course, much messier. With many CPUs and compilers, the above 2844assumption doesn't hold because: 2845 2846 (*) loads are more likely to need to be completed immediately to permit 2847 execution progress, whereas stores can often be deferred without a 2848 problem; 2849 2850 (*) loads may be done speculatively, and the result discarded should it prove 2851 to have been unnecessary; 2852 2853 (*) loads may be done speculatively, leading to the result having been fetched 2854 at the wrong time in the expected sequence of events; 2855 2856 (*) the order of the memory accesses may be rearranged to promote better use 2857 of the CPU buses and caches; 2858 2859 (*) loads and stores may be combined to improve performance when talking to 2860 memory or I/O hardware that can do batched accesses of adjacent locations, 2861 thus cutting down on transaction setup costs (memory and PCI devices may 2862 both be able to do this); and 2863 2864 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency 2865 mechanisms may alleviate this - once the store has actually hit the cache 2866 - there's no guarantee that the coherency management will be propagated in 2867 order to other CPUs. 2868 2869So what another CPU, say, might actually observe from the above piece of code 2870is: 2871 2872 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B 2873 2874 (Where "LOAD {*C,*D}" is a combined load) 2875 2876 2877However, it is guaranteed that a CPU will be self-consistent: it will see its 2878_own_ accesses appear to be correctly ordered, without the need for a memory 2879barrier. For instance with the following code: 2880 2881 U = READ_ONCE(*A); 2882 WRITE_ONCE(*A, V); 2883 WRITE_ONCE(*A, W); 2884 X = READ_ONCE(*A); 2885 WRITE_ONCE(*A, Y); 2886 Z = READ_ONCE(*A); 2887 2888and assuming no intervention by an external influence, it can be assumed that 2889the final result will appear to be: 2890 2891 U == the original value of *A 2892 X == W 2893 Z == Y 2894 *A == Y 2895 2896The code above may cause the CPU to generate the full sequence of memory 2897accesses: 2898 2899 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A 2900 2901in that order, but, without intervention, the sequence may have almost any 2902combination of elements combined or discarded, provided the program's view 2903of the world remains consistent. Note that READ_ONCE() and WRITE_ONCE() 2904are -not- optional in the above example, as there are architectures 2905where a given CPU might reorder successive loads to the same location. 
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to their smp_mb() etc counterparts in all other respects;
in particular, they do not control MMIO effects: to control MMIO effects, use
mandatory barriers.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.  A brief sketch of the pattern appears below.
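
As a taster, here is a minimal sketch of the producer and consumer sides,
assuming a power-of-two BUF_SIZE, a single producer and a single consumer
(the circ_buffer structure is hypothetical; CIRC_SPACE() and CIRC_CNT() come
from linux/circ_buf.h):

	struct circ_buffer {
		unsigned long head;	/* written by the producer */
		unsigned long tail;	/* written by the consumer */
		int data[BUF_SIZE];
	};

	/* Producer: */
	unsigned long head = b->head;
	unsigned long tail = READ_ONCE(b->tail);

	if (CIRC_SPACE(head, tail, BUF_SIZE) >= 1) {
		b->data[head] = item;
		smp_wmb();	/* commit the item before moving head */
		WRITE_ONCE(b->head, (head + 1) & (BUF_SIZE - 1));
	}

	/* Consumer: */
	unsigned long head = READ_ONCE(b->head);
	unsigned long tail = b->tail;

	if (CIRC_CNT(head, tail, BUF_SIZE) >= 1) {
		smp_rmb();	/* read the index before reading the item */
		item = b->data[tail];
		WRITE_ONCE(b->tail, (tail + 1) & (BUF_SIZE - 1));
	}

The producer's smp_wmb() pairs with the consumer's smp_rmb(), exactly as in
the pairing examples earlier in this document.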
2983 2984 2985========== 2986REFERENCES 2987========== 2988 2989Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek, 2990Digital Press) 2991 Chapter 5.2: Physical Address Space Characteristics 2992 Chapter 5.4: Caches and Write Buffers 2993 Chapter 5.5: Data Sharing 2994 Chapter 5.6: Read/Write Ordering 2995 2996AMD64 Architecture Programmer's Manual Volume 2: System Programming 2997 Chapter 7.1: Memory-Access Ordering 2998 Chapter 7.4: Buffering and Combining Memory Writes 2999 3000IA-32 Intel Architecture Software Developer's Manual, Volume 3: 3001System Programming Guide 3002 Chapter 7.1: Locked Atomic Operations 3003 Chapter 7.2: Memory Ordering 3004 Chapter 7.4: Serializing Instructions 3005 3006The SPARC Architecture Manual, Version 9 3007 Chapter 8: Memory Models 3008 Appendix D: Formal Specification of the Memory Models 3009 Appendix J: Programming with the Memory Models 3010 3011UltraSPARC Programmer Reference Manual 3012 Chapter 5: Memory Accesses and Cacheability 3013 Chapter 15: Sparc-V9 Memory Models 3014 3015UltraSPARC III Cu User's Manual 3016 Chapter 9: Memory Models 3017 3018UltraSPARC IIIi Processor User's Manual 3019 Chapter 8: Memory Models 3020 3021UltraSPARC Architecture 2005 3022 Chapter 9: Memory 3023 Appendix D: Formal Specifications of the Memory Models 3024 3025UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005 3026 Chapter 8: Memory Models 3027 Appendix F: Caches and Cache Coherency 3028 3029Solaris Internals, Core Kernel Architecture, p63-68: 3030 Chapter 3.3: Hardware Considerations for Locks and 3031 Synchronization 3032 3033Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching 3034for Kernel Programmers: 3035 Chapter 13: Other Memory Models 3036 3037Intel Itanium Architecture Software Developer's Manual: Volume 1: 3038 Section 2.6: Speculation 3039 Section 4.4: Memory Access 3040