                         ============================
                         LINUX KERNEL MEMORY BARRIERS
                         ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the CPU cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

                            :                :
                            :                :
                            :                :
                +-------+   :   +--------+   :   +-------+
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                | CPU 1 |<----->| Memory |<----->| CPU 2 |
                |       |   :   |        |   :   |       |
                |       |   :   |        |   :   |       |
                +-------+   :   +--------+   :   +-------+
                    ^       :       ^        :       ^
                    |       :       |        :       |
                    |       :       |        :       |
                    |       :       v        :       |
                    |       :   +--------+   :       |
                    |       :   |        |   :       |
                    |       :   |        |   :       |
                    +---------->| Device |<----------+
                            :   |        |   :
                            :   |        |   :
                            :   +--------+   :
                            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1; B == 2 }
        A = 3;                x = B;
        B = 4;                y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

        STORE A=3,      STORE B=4,      x=LOAD A->3,    y=LOAD B->4
        STORE A=3,      STORE B=4,      y=LOAD B->4,    x=LOAD A->3
        STORE A=3,      x=LOAD A->3,    STORE B=4,      y=LOAD B->4
        STORE A=3,      x=LOAD A->3,    y=LOAD B->2,    STORE B=4
        STORE A=3,      y=LOAD B->2,    STORE B=4,      x=LOAD A->3
        STORE A=3,      y=LOAD B->2,    x=LOAD A->3,    STORE B=4
        STORE B=4,      STORE A=3,      x=LOAD A->3,    y=LOAD B->4
        STORE B=4, ...
        ...
and can thus result in four different combinations of values:

        x == 1, y == 2
        x == 1, y == 4
        x == 3, y == 2
        x == 3, y == 4


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;                Q = P;
        P = &B;               D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

        (Q == &A) and (D == 1)
        (Q == &B) and (D == 2)
        (Q == &B) and (D == 4)

Note that CPU 2 will never try and load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

        *A = 5;
        x = *D;

but this might show up as either of the following two sequences:

        STORE *A = 5, x = LOAD *D
        x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
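
The cure on such hardware is an explicit barrier between the two accesses; a
minimal sketch using the mandatory mb() barrier described under "Explicit
kernel barriers" below, A and D being the hypothetical port registers from the
example above:

        *A = 5;         /* select internal register 5 */
        mb();           /* the read must not be reordered before the select */
        x = *D;         /* read the selected register's value */

In practice, a driver would normally use the I/O accessor functions (readb(),
writel() and friends) described in "Kernel I/O barrier effects", which have
their own ordering properties.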


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

        ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);

     the CPU will issue the following memory operations:

        Q = LOAD P, D = LOAD *Q

     and always in that order.  On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
     is required to prevent compiler mischief.  Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends().

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

        a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;

     the CPU will only issue the following sequence of memory operations:

        a = LOAD *X, STORE *X = b

     And for:

        ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);

     the CPU will only issue:

        STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want with
     memory references that are not protected by ACCESS_ONCE().  Without
     ACCESS_ONCE(), the compiler is within its rights to do all sorts of
     "creative" transformations, which are covered in the Compiler Barrier
     section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

        X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

        X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
        X = LOAD *A,  STORE *D = Z, Y = LOAD *B
        Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
        Y = LOAD *B,  STORE *D = Z, X = LOAD *A
        STORE *D = Z, X = LOAD *A,  Y = LOAD *B
        STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

        X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

        X = LOAD *A; Y = LOAD *(A + 4);
        Y = LOAD *(A + 4); X = LOAD *A;
        {X, Y} = LOAD {*A, *(A + 4) };

     And for:

        *A = X; *(A + 4) = Y;

     we may get any of:

        STORE *A = X; STORE *(A + 4) = Y;
        STORE *(A + 4) = Y; STORE *A = X;
        STORE {*A, *(A + 4) } = {X, Y};


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and smp_load_acquire()
     operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.
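
As an illustration, here is a minimal message-passing sketch built from the
smp_store_release() and smp_load_acquire() primitives mentioned above (the
shared variables 'data' and 'ready', both initially zero, and do_something()
are hypothetical):

        /* CPU 1 */                             /* CPU 2 */
        data = 42;
        smp_store_release(&ready, 1);           if (smp_load_acquire(&ready))
                                                        do_something(data);

If CPU 2's acquire load observes the value stored by CPU 1's release store,
then it is also guaranteed to observe data == 42.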

Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP barrier pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

        [*] For information on bus mastering DMA and coherency please read:

            Documentation/PCI/pci.txt
            Documentation/DMA-API-HOWTO.txt
            Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        ACCESS_ONCE(P) = &B;
                              Q = ACCESS_ONCE(P);
                              D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

        (Q == &A) implies (D == 1)
        (Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

        (Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

        CPU 1                 CPU 2
        ===============       ===============
        { A == 1, B == 2, C == 3, P == &A, Q == &C }
        B = 4;
        <write barrier>
        ACCESS_ONCE(P) = &B;
                              Q = ACCESS_ONCE(P);
                              <data dependency barrier>
                              D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

        CPU 1                 CPU 2
        ===============       ===============
        { M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
        M[1] = 4;
        <write barrier>
        ACCESS_ONCE(P) = 1;
                              Q = ACCESS_ONCE(P);
                              <data dependency barrier>
                              D = M[Q];


The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.
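
For illustration, a minimal sketch of that pattern, assuming a hypothetical
RCU-protected global pointer 'gp' and structure type 'struct foo':

        /* Publisher */
        struct foo *newp = kmalloc(sizeof(*newp), GFP_KERNEL);

        newp->a = 1;                    /* initialise first... */
        rcu_assign_pointer(gp, newp);   /* ...then publish; implies a write barrier */

        /* Reader */
        struct foo *p;

        rcu_read_lock();
        p = rcu_dereference(gp);        /* implies a data dependency barrier */
        if (p)
                do_something_with(p->a);
        rcu_read_unlock();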

See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier, to make it work correctly.  Consider the following bit of
code:

        q = ACCESS_ONCE(a);
        if (q) {
                <data dependency barrier>  /* BUG: No data dependency!!! */
                p = ACCESS_ONCE(b);
        }

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

        q = ACCESS_ONCE(a);
        if (q) {
                <read barrier>
                p = ACCESS_ONCE(b);
        }

However, stores are not speculated.  This means that ordering -is- provided
in the following example:

        q = ACCESS_ONCE(a);
        if (q) {
                ACCESS_ONCE(b) = p;
        }

Please note that ACCESS_ONCE() is not optional!  Without the ACCESS_ONCE(),
the compiler is within its rights to transform this example:

        q = a;
        if (q) {
                b = p;  /* BUG: Compiler can reorder!!! */
                do_something();
        } else {
                b = p;  /* BUG: Compiler can reorder!!! */
                do_something_else();
        }

into this, which of course defeats the ordering:

        b = p;
        q = a;
        if (q)
                do_something();
        else
                do_something_else();

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

        q = a;
        b = p;  /* BUG: Compiler can reorder!!! */
        do_something();

The solution is again ACCESS_ONCE() and barrier(), which preserve the
ordering between the load from variable 'a' and the store to variable 'b':

        q = ACCESS_ONCE(a);
        if (q) {
                barrier();
                ACCESS_ONCE(b) = p;
                do_something();
        } else {
                barrier();
                ACCESS_ONCE(b) = p;
                do_something_else();
        }

The initial ACCESS_ONCE() is required to prevent the compiler from
proving the value of 'a', and the pair of barrier() invocations are
required to prevent the compiler from pulling the two identical stores
to 'b' out from the legs of the "if" statement.

It is important to note that control dependencies absolutely require
a conditional.  For example, the following "optimized" version of
the above example breaks ordering, which is why the barrier() invocations
are absolutely required if you have identical stores in both legs of
the "if" statement:

        q = ACCESS_ONCE(a);
        ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
        if (q) {
                /* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
                do_something();
        } else {
                /* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
                do_something_else();
        }

It is of course legal for the prior load to be part of the conditional,
for example, as follows:

        if (ACCESS_ONCE(a) > 0) {
                barrier();
                ACCESS_ONCE(b) = q / 2;
                do_something();
        } else {
                barrier();
                ACCESS_ONCE(b) = q / 3;
                do_something_else();
        }

This will again ensure that the load from variable 'a' is ordered before the
stores to variable 'b'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

        q = ACCESS_ONCE(a);
        if (q % MAX) {
                barrier();
                ACCESS_ONCE(b) = p;
                do_something();
        } else {
                barrier();
                ACCESS_ONCE(b) = p;
                do_something_else();
        }

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

        q = ACCESS_ONCE(a);
        ACCESS_ONCE(b) = p;
        do_something_else();

This transformation loses the ordering between the load from variable 'a'
and the store to variable 'b'.  If you are relying on this ordering, you
should do something like the following:

        q = ACCESS_ONCE(a);
        BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
        if (q % MAX) {
                ACCESS_ONCE(b) = p;
                do_something();
        } else {
                ACCESS_ONCE(b) = p;
                do_something_else();
        }

Finally, control dependencies do -not- provide transitivity.  This is
demonstrated by two related examples:

        CPU 0                     CPU 1
        =====================     =====================
        r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
        if (r1 >= 0)              if (r2 >= 0)
          ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;

        assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following two CPUs would guarantee a related assertion:

        CPU 2                     CPU 3
        =====================     =====================
        ACCESS_ONCE(x) = 2;       ACCESS_ONCE(y) = 2;

        assert(!(r1 == 2 && r2 == 2 && x == 1 && y == 1)); /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the
above assertion can fail after the combined four-CPU example completes.
If you need the four-CPU example to provide ordering, you will need
smp_mb() between the loads and stores in the CPU 0 and CPU 1 code fragments.

In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     not prior loads against later loads, nor prior stores against
     later anything.  If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores
     to the same variable, a barrier() statement is required at the
     beginning of each leg of the "if" statement.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load.  If the compiler
     is able to optimize the conditional away, it will have also
     optimized away the ordering.  Careful use of ACCESS_ONCE() can
     help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence.  Careful use of ACCESS_ONCE() or
     barrier() can help to preserve your control dependency.  Please
     see the Compiler Barrier section for more information.

 (*) Control dependencies do -not- provide transitivity.  If you
     need transitivity, use smp_mb().


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with
most other types of barriers, albeit without transitivity.  An acquire
barrier pairs with a release barrier, but both may also pair with other
barriers, including of course general barriers.  A write barrier pairs
with a data dependency barrier, an acquire barrier, a release barrier,
a read barrier, or a general barrier.  Similarly a read barrier or a
data dependency barrier pairs with a write barrier, an acquire barrier,
a release barrier, or a general barrier:

        CPU 1                 CPU 2
        ===============       ===============
        ACCESS_ONCE(a) = 1;
        <write barrier>
        ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
                              <read barrier>
                              y = ACCESS_ONCE(a);

Or:

        CPU 1                 CPU 2
        ===============       ===============================
        a = 1;
        <write barrier>
        ACCESS_ONCE(b) = &a;  x = ACCESS_ONCE(b);
                              <data dependency barrier>
                              y = *x;

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

        CPU 1                               CPU 2
        ===================                 ===================
        ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
        ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
        <write barrier>            \        <read barrier>
        ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
        ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);
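
Expressed in C, the first of the pairings above might look like the following
minimal sketch, assuming shared variables 'a' and 'b' that are initially zero:

        /* CPU 1 */                     /* CPU 2 */
        ACCESS_ONCE(a) = 1;
        smp_wmb();                      x = ACCESS_ONCE(b);
        ACCESS_ONCE(b) = 2;             smp_rmb();
                                        y = ACCESS_ONCE(a);

If CPU 2 sees x == 2, the barrier pairing guarantees that it will also see
y == 1.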


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

        CPU 1
        =======================
        STORE A = 1
        STORE B = 2
        STORE C = 3
        <write barrier>
        STORE D = 4
        STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

        +-------+       :      :
        |       |       +------+
        |       |------>| C=3  |     }     /\
        |       |  :    +------+     }-----  \  -----> Events perceptible to
        |       |  :    | A=1  |     }        \/       the rest of the system
        |       |  :    +------+     }
        | CPU 1 |  :    | B=2  |     }
        |       |       +------+     }
        |       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
        |       |       +------+     }        requires all stores prior to the
        |       |  :    | E=5  |     }        barrier to be committed before
        |       |  :    +------+     }        further stores may take place
        |       |------>| D=4  |     }
        |       |       +------+
        +-------+       :      :
                           |
                           | Sequence in which stores are committed to the
                           | memory system by CPU 1
                           V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+  | Sequence of update
        |       |------>| B=2  |-----       --->| Y->8  |  | of perception on
        |       |  :    +------+     \          +-------+  | CPU 2
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
            Apparently incorrect --->  |        | B->7  |------>|       |
            perception of B (!)        |        +-------+       |       |
                                       |        :       :       |       |
                                       |        +-------+       |       |
            The load of X holds --->    \       | X->9  |------>|       |
            up the maintenance           \      +-------+       |       |
            of coherence of B             ----->| B->2  |       +-------+
                                                +-------+
                                                :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { B = 7; X = 9; Y = 8; C = &Y }
        STORE A = 1
        STORE B = 2
        <write barrier>
        STORE C = &B            LOAD X
        STORE D = 4             LOAD C (gets &B)
                                <data dependency barrier>
                                LOAD *C (reads B)

then the following will occur:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| B=2  |-----       --->| Y->8  |
        |       |  :    +------+     \          +-------+
        | CPU 1 |  :    | A=1  |      \     --->| C->&Y |
        |       |       +------+       |        +-------+
        |       |   wwwwwwwwwwwwwwww   |        :       :
        |       |       +------+       |        :       :
        |       |  :    | C=&B |---    |        :       :       +-------+
        |       |  :    +------+   \   |        +-------+       |       |
        |       |------>| D=4  |    ----------->| C->&B |------>|       |
        |       |       +------+       |        +-------+       |       |
        +-------+       :      :       |        :       :       |       |
                                       |        :       :       |       |
                                       |        :       :       | CPU 2 |
                                       |        +-------+       |       |
                                       |        | X->9  |------>|       |
                                       |        +-------+       |       |
          Makes sure all effects --->   \   ddddddddddddddddd   |       |
          prior to the store of C        \      +-------+       |       |
          are perceptible to              ----->| B->2  |------>|       |
          subsequent loads                      +-------+       |       |
                                                :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       | A->0  |------>|       |
                                        |       +-------+       |       |
                                        |       :       :       +-------+
                                         \      :       :
                                          \     +-------+
                                           ---->| A->1  |
                                                +-------+
                                                :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                <read barrier>
                                LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by
CPU 2:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B ---->  \    | A->1  |------>|       |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+

To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

        CPU 1                   CPU 2
        ======================= =======================
                { A = 0, B = 9 }
        STORE A=1
        <write barrier>
        STORE B=2
                                LOAD B
                                LOAD A [first load of A]
                                <read barrier>
                                LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                        |       :       :       |       |
                                        |       +-------+       |       |
                                        |       | A->0  |------>| 1st   |
                                        |       +-------+       |       |
          At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
          barrier causes all effects      \     +-------+       |       |
          prior to the storage of B ---->  \    | A->1  |------>| 2nd   |
          to be perceptible to CPU 2            +-------+       |       |
                                                :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

        +-------+       :      :                :       :
        |       |       +------+                +-------+
        |       |------>| A=1  |------      --->| A->0  |
        |       |       +------+      \         +-------+
        | CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
        |       |       +------+        |       +-------+
        |       |------>| B=2  |---     |       :       :
        |       |       +------+   \    |       :       :       +-------+
        +-------+       :      :    \   |       +-------+       |       |
                                     ---------->| B->2  |------>|       |
                                        |       +-------+       | CPU 2 |
                                        |       :       :       |       |
                                         \      :       :       |       |
                                          \     +-------+       |       |
                                           ---->| A->1  |------>| 1st   |
                                                +-------+       |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
                                                | A->1  |------>| 2nd   |
                                                +-------+       |       |
                                                :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time where they're not using the bus for
any other loads, and so do the load in advance - even though they haven't
actually got to that point in the instruction execution flow yet.  This permits
the actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE          } Divide instructions generally
                                DIVIDE          } take a long time to perform
                                LOAD A

Which might appear as this:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
        Once the divisions are complete -->     :       :   ~-->|       |
        the CPU can then perform the            :       :       |       |
        LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

        CPU 1                   CPU 2
        ======================= =======================
                                LOAD B
                                DIVIDE
                                DIVIDE
                                <read barrier>
                                LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrr~   |       |
                                                :       :   ~   |       |
                                                :       :   ~-->|       |
                                                :       :       |       |
                                                :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

                                                :       :       +-------+
                                                +-------+       |       |
                                            --->| B->2  |------>|       |
                                                +-------+       | CPU 2 |
                                                :       :DIVIDE |       |
                                                +-------+       |       |
        The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
        division speculates on the              +-------+   ~   |       |
        LOAD of A                               :       :   ~   |       |
                                                :       :DIVIDE |       |
                                                :       :   ~   |       |
                                                :       :   ~   |       |
                                            rrrrrrrrrrrrrrrrr   |       |
                                                +-------+       |       |
        The speculation is discarded --->   --->| A->1  |------>|       |
        and an updated value is                 +-------+       |       |
        retrieved                               :       :       +-------+


TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
always provided by real computer systems.  The following example
demonstrates transitivity (also called "cumulativity"):

        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               LOAD X                  STORE Y=1
                                <general barrier>       <general barrier>
                                LOAD Y                  LOAD X

Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense preceded CPU 3's
store to Y.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 2's load from X in some sense came after CPU 1's store, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation is an example of transitivity: if a load executing on
CPU A follows a load from the same variable executing on CPU B, then
CPU A's load must either return the same value that CPU B's load did,
or must return some later value.

In the Linux kernel, use of general memory barriers guarantees
transitivity.  Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.

However, transitivity is -not- guaranteed for read or write barriers.
For example, suppose that CPU 2's general barrier in the above example
is changed to a read barrier as shown below:

        CPU 1                   CPU 2                   CPU 3
        ======================= ======================= =======================
                { X = 0, Y = 0 }
        STORE X=1               LOAD X                  STORE Y=1
                                <read barrier>          <general barrier>
                                LOAD Y                  LOAD X

This substitution destroys transitivity: in this example, it is perfectly
legal for CPU 2's load from X to return 1, its load from Y to return 0,
and CPU 3's load from X to return 0.

The key point is that although CPU 2's read barrier orders its pair
of loads, it does not guarantee to order CPU 1's store.  Therefore, if
this example runs on a system where CPUs 1 and 2 share a store buffer
or a level of cache, CPU 2 might have early access to CPU 1's writes.
General barriers are therefore required to ensure that all CPUs agree
on the combined order of CPU 1's and CPU 2's accesses.

To reiterate, if your code requires transitivity, use general barriers
throughout.
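
Expressed in C, the general barriers in the example above would be smp_mb();
a minimal sketch, with r1, r2 and r3 being per-CPU local variables:

        /* CPU 1 */             /* CPU 2 */             /* CPU 3 */
        ACCESS_ONCE(X) = 1;     r1 = ACCESS_ONCE(X);    ACCESS_ONCE(Y) = 1;
                                smp_mb();               smp_mb();
                                r2 = ACCESS_ONCE(Y);    r3 = ACCESS_ONCE(X);

With general barriers throughout, the outcome r1 == 1 && r2 == 0 && r3 == 0
is prohibited; weakening CPU 2's smp_mb() to smp_rmb() would permit it, as
described above.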


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.

  (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

        barrier();

This is a general barrier -- there are no read-read or write-write variants
of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
of barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop (see
     the sketch below).
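
The second property is what makes barrier() useful in busy-wait loops; a
minimal sketch, assuming a hypothetical flag 'stop' that is set by, for
example, an interrupt handler:

        extern int stop;        /* hypothetical flag, written elsewhere */

        while (!stop)
                barrier();      /* force 'stop' to be reloaded on each pass */

Without the barrier(), the compiler would be entitled to load 'stop' once and
spin forever on the cached value.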

The ACCESS_ONCE() function can prevent any number of optimizations that,
while perfectly safe in single-threaded code, can be fatal in concurrent
code.  Here are some examples of these sorts of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

        a[0] = x;
        a[1] = x;

     might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

        a[0] = ACCESS_ONCE(x);
        a[1] = ACCESS_ONCE(x);

     In short, ACCESS_ONCE() provides cache coherence for accesses from
     multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

        while (tmp = a)
                do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

        if (tmp = a)
                for (;;)
                        do_something_with(tmp);

     Use ACCESS_ONCE() to prevent the compiler from doing this to you:

        while (tmp = ACCESS_ONCE(a))
                do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

        while (tmp = a)
                do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

        while (a)
                do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use ACCESS_ONCE() to prevent the compiler from doing this:

        while (tmp = ACCESS_ONCE(a))
                do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

        while (tmp = a)
                do_something_with(tmp);

     into this:

        do { } while (0);

     This transformation is a win for single-threaded code because it gets
     rid of a load and a branch.  The problem is that the compiler will
     carry out its proof assuming that the current CPU is the only one
     updating variable 'a'.  If variable 'a' is shared, then the compiler's
     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
     that it doesn't know as much as it thinks it does:

        while (tmp = ACCESS_ONCE(a))
                do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the ACCESS_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

        while ((tmp = ACCESS_ONCE(a)) % MAX)
                do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.
     For example, suppose you have the following:

        a = 0;
        /* Code that does not store to variable a. */
        a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use ACCESS_ONCE() to prevent the compiler from making this sort of
     wrong guess:

        ACCESS_ONCE(a) = 0;
        /* Code that does not store to variable a. */
        ACCESS_ONCE(a) = 0;

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

        void process_level(void)
        {
                msg = get_message();
                flag = true;
        }

        void interrupt_handler(void)
        {
                if (flag)
                        process_message(msg);
        }

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

        void process_level(void)
        {
                flag = true;
                msg = get_message();
        }

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
     to prevent this as follows:

        void process_level(void)
        {
                ACCESS_ONCE(msg) = get_message();
                ACCESS_ONCE(flag) = true;
        }

        void interrupt_handler(void)
        {
                if (ACCESS_ONCE(flag))
                        process_message(ACCESS_ONCE(msg));
        }

     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
     are needed if this interrupt handler can itself be interrupted
     by something that also accesses 'flag' and 'msg', for example,
     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
     needed in interrupt_handler() other than for documentation purposes.
     (Note also that nested interrupts do not typically occur in modern
     Linux kernels; in fact, if an interrupt handler returns with
     interrupts enabled, you will get a WARN_ONCE() splat.)

     You should assume that the compiler can move ACCESS_ONCE() past
     code not containing ACCESS_ONCE(), barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but ACCESS_ONCE()
     is more selective: with ACCESS_ONCE(), the compiler need only forget
     the contents of the indicated memory locations, while with barrier()
     the compiler must discard the value of all memory locations that
     it has currently cached in any machine registers.  Of course,
     the compiler must also respect the order in which the ACCESS_ONCE()s
     occur, though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

        if (a)
                b = a;
        else
                b = 42;

     The compiler might save a branch by optimizing this as follows:

        b = 42;
        if (a)
                b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use ACCESS_ONCE() to prevent this as follows:

        if (a)
                ACCESS_ONCE(b) = a;
        else
                ACCESS_ONCE(b) = 42;

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use ACCESS_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, ACCESS_ONCE() prevents
     "load tearing" and "store tearing," in which a single large access
     is replaced by multiple smaller accesses.  For example, given an
     architecture having 16-bit store instructions with 7-bit immediate
     fields, the compiler might be tempted to use two 16-bit
     store-immediate instructions to implement the following 32-bit store:

        p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of ACCESS_ONCE() prevents store tearing in the following example:

        ACCESS_ONCE(p) = 0x00010002;

     Use of packed structures can also result in load and store tearing,
     as in this example:

        struct __attribute__((__packed__)) foo {
                short a;
                int b;
                short c;
        };
        struct foo foo1, foo2;
        ...

        foo2.a = foo1.a;
        foo2.b = foo1.b;
        foo2.c = foo1.c;

     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
     the compiler would be well within its rights to implement these three
     assignment statements as a pair of 32-bit loads followed by a pair
     of 32-bit stores.  This would result in load tearing on 'foo1.b'
     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
     in this example:

        foo2.a = foo1.a;
        ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
        foo2.c = foo1.c;

All that aside, it is never necessary to use ACCESS_ONCE() on a variable
that has been marked volatile.  For example, because 'jiffies' is marked
volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
for this is that ACCESS_ONCE() is implemented as a volatile cast, which
has no effect when its argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

        TYPE            MANDATORY               SMP CONDITIONAL
        =============== ======================= ===========================
        GENERAL         mb()                    smp_mb()
        WRITE           wmb()                   smp_wmb()
        READ            rmb()                   smp_rmb()
        DATA DEPENDENCY read_barrier_depends()  smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. `a[b]` would have to load the value
of b before loading a[b]); however, there is no guarantee in the C
specification that the compiler may not speculate the value of b (eg. that it
is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).
There is also the problem of a compiler reloading b after having loaded a[b],
thus having a newer copy of b than a[b].  A consensus has not yet been reached
about these problems; however, the ACCESS_ONCE() macro is a good place to
start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers unnecessarily impose overhead on UP systems.  They may, however, be
used to control MMIO effects on accesses through relaxed memory I/O windows.
These are required even on non-SMP systems as they affect the order in which
memory operations appear to a device by prohibiting both the compiler and the
CPU from reordering them.


There are some more advanced barrier functions:

 (*) set_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.


 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic (such as add, subtract, increment and
     decrement) functions that don't return a value, especially when used for
     reference counting.  These functions do not imply memory barriers.

     These are also used for atomic bitop functions that do not return a
     value (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

        obj->dead = 1;
        smp_mb__before_atomic();
        atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

        mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.
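
A minimal sketch of the intended usage, assuming a hypothetical device whose
registers are protected by a spinlock contended by multiple CPUs:

        spin_lock(&dev_lock);
        writel(command, devbase + COMMAND_REG); /* write to a weakly ordered I/O region */
        mmiowb();                               /* order the write before the unlock */
        spin_unlock(&dev_lock);

Without the mmiowb(), a second CPU that subsequently takes the lock could have
its own MMIO write reach the device before the first CPU's.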

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


ACQUIRING FUNCTIONS
-------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "ACQUIRE" operations and "RELEASE"
operations for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
     combined with a following ACQUIRE, orders prior loads against
     subsequent loads and stores and also orders prior stores against
     subsequent stores.  Note that this is weaker than smp_mb()!  The
     smp_mb__before_spinlock() primitive is free on many architectures.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to happen
after the ACQUIRE, and an access following the RELEASE to happen before the
RELEASE, and the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
pair to produce a full barrier, the ACQUIRE can be followed by an
smp_mb__after_unlock_lock() invocation.
This will produce a full barrier 1721if either (a) the RELEASE and the ACQUIRE are executed by the same 1722CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable. 1723The smp_mb__after_unlock_lock() primitive is free on many architectures. 1724Without smp_mb__after_unlock_lock(), the CPU's execution of the critical 1725sections corresponding to the RELEASE and the ACQUIRE can cross, so that: 1726 1727 *A = a; 1728 RELEASE M 1729 ACQUIRE N 1730 *B = b; 1731 1732could occur as: 1733 1734 ACQUIRE N, STORE *B, STORE *A, RELEASE M 1735 1736It might appear that this reordering could introduce a deadlock. 1737However, this cannot happen because if such a deadlock threatened, 1738the RELEASE would simply complete, thereby avoiding the deadlock. 1739 1740 Why does this work? 1741 1742 One key point is that we are only talking about the CPU doing 1743 the reordering, not the compiler. If the compiler (or, for 1744 that matter, the developer) switched the operations, deadlock 1745 -could- occur. 1746 1747 But suppose the CPU reordered the operations. In this case, 1748 the unlock precedes the lock in the assembly code. The CPU 1749 simply elected to try executing the later lock operation first. 1750 If there is a deadlock, this lock operation will simply spin (or 1751 try to sleep, but more on that later). The CPU will eventually 1752 execute the unlock operation (which preceded the lock operation 1753 in the assembly code), which will unravel the potential deadlock, 1754 allowing the lock operation to succeed. 1755 1756 But what if the lock is a sleeplock? In that case, the code will 1757 try to enter the scheduler, where it will eventually encounter 1758 a memory barrier, which will force the earlier unlock operation 1759 to complete, again unraveling the deadlock. There might be 1760 a sleep-unlock race, but the locking primitive needs to resolve 1761 such races properly in any case. 1762 1763With smp_mb__after_unlock_lock(), the two critical sections cannot overlap. 1764For example, with the following code, the store to *A will always be 1765seen by other CPUs before the store to *B: 1766 1767 *A = a; 1768 RELEASE M 1769 ACQUIRE N 1770 smp_mb__after_unlock_lock(); 1771 *B = b; 1772 1773The operations will always occur in one of the following orders: 1774 1775 STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B 1776 STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B 1777 ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B 1778 1779If the RELEASE and ACQUIRE were instead both operating on the same lock 1780variable, only the first of these alternatives can occur. In addition, 1781the more strongly ordered systems may rule out some of the above orders. 1782But in any case, as noted earlier, the smp_mb__after_unlock_lock() 1783ensures that the store to *A will always be seen as happening before 1784the store to *B. 1785 1786Locks and semaphores may not provide any guarantee of ordering on UP compiled 1787systems, and so cannot be counted on in such a situation to actually achieve 1788anything at all - especially with respect to I/O accesses - unless combined 1789with interrupt disabling operations. 1790 1791See also the section on "Inter-CPU locking barrier effects". 
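
To make the RELEASE-ACQUIRE discussion above concrete, here is a minimal
sketch of the smp_mb__after_unlock_lock() pattern using spinlocks.  The
lock and variable names (m, n, shared_a, shared_b) are purely illustrative:

	spin_lock(&m);
	shared_a = 1;			/* *A = a */
	spin_unlock(&m);		/* RELEASE M */
	spin_lock(&n);			/* ACQUIRE N */
	smp_mb__after_unlock_lock();	/* promote unlock+lock to a full barrier */
	shared_b = 1;			/* *B = b */
	spin_unlock(&n);

Without the smp_mb__after_unlock_lock(), a CPU holding neither lock might
observe the store to shared_b before the store to shared_a; with it, the
store to shared_a is guaranteed to be seen first.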


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E
	*A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F
	*A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F
	*B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  set_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();


Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if they wake
something up.  The barrier occurs before the task state is cleared, and so sits
between the STORE to indicate the event and the STORE to set TASK_RUNNING:

	CPU 1				CPU 2
	===============================	===============================
	set_current_state();		STORE event_indicated
	  set_mb();			wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated

To repeat, this write memory barrier is present if and only if something
is actually awakened.
To see this, consider the following sequence of 1902events, where X and Y are both initially zero: 1903 1904 CPU 1 CPU 2 1905 =============================== =============================== 1906 X = 1; STORE event_indicated 1907 smp_mb(); wake_up(); 1908 Y = 1; wait_event(wq, Y == 1); 1909 wake_up(); load from Y sees 1, no memory barrier 1910 load from X might see 0 1911 1912In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed 1913to see 1. 1914 1915The available waker functions include: 1916 1917 complete(); 1918 wake_up(); 1919 wake_up_all(); 1920 wake_up_bit(); 1921 wake_up_interruptible(); 1922 wake_up_interruptible_all(); 1923 wake_up_interruptible_nr(); 1924 wake_up_interruptible_poll(); 1925 wake_up_interruptible_sync(); 1926 wake_up_interruptible_sync_poll(); 1927 wake_up_locked(); 1928 wake_up_locked_poll(); 1929 wake_up_nr(); 1930 wake_up_poll(); 1931 wake_up_process(); 1932 1933 1934[!] Note that the memory barriers implied by the sleeper and the waker do _not_ 1935order multiple stores before the wake-up with respect to loads of those stored 1936values after the sleeper has called set_current_state(). For instance, if the 1937sleeper does: 1938 1939 set_current_state(TASK_INTERRUPTIBLE); 1940 if (event_indicated) 1941 break; 1942 __set_current_state(TASK_RUNNING); 1943 do_something(my_data); 1944 1945and the waker does: 1946 1947 my_data = value; 1948 event_indicated = 1; 1949 wake_up(&event_wait_queue); 1950 1951there's no guarantee that the change to event_indicated will be perceived by 1952the sleeper as coming after the change to my_data. In such a circumstance, the 1953code on both sides must interpolate its own memory barriers between the 1954separate data accesses. Thus the above sleeper ought to do: 1955 1956 set_current_state(TASK_INTERRUPTIBLE); 1957 if (event_indicated) { 1958 smp_rmb(); 1959 do_something(my_data); 1960 } 1961 1962and the waker should do: 1963 1964 my_data = value; 1965 smp_wmb(); 1966 event_indicated = 1; 1967 wake_up(&event_wait_queue); 1968 1969 1970MISCELLANEOUS FUNCTIONS 1971----------------------- 1972 1973Other functions that imply barriers: 1974 1975 (*) schedule() and similar imply full memory barriers. 1976 1977 1978=================================== 1979INTER-CPU ACQUIRING BARRIER EFFECTS 1980=================================== 1981 1982On SMP systems locking primitives give a more substantial form of barrier: one 1983that does affect memory access ordering on other CPUs, within the context of 1984conflict on any particular lock. 1985 1986 1987ACQUIRES VS MEMORY ACCESSES 1988--------------------------- 1989 1990Consider the following: the system has a pair of spinlocks (M) and (Q), and 1991three CPUs; then should the following sequence of events occur: 1992 1993 CPU 1 CPU 2 1994 =============================== =============================== 1995 ACCESS_ONCE(*A) = a; ACCESS_ONCE(*E) = e; 1996 ACQUIRE M ACQUIRE Q 1997 ACCESS_ONCE(*B) = b; ACCESS_ONCE(*F) = f; 1998 ACCESS_ONCE(*C) = c; ACCESS_ONCE(*G) = g; 1999 RELEASE M RELEASE Q 2000 ACCESS_ONCE(*D) = d; ACCESS_ONCE(*H) = h; 2001 2002Then there is no guarantee as to what order CPU 3 will see the accesses to *A 2003through *H occur in, other than the constraints imposed by the separate locks 2004on the separate CPUs. 
It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q


However, if the following occurs:

	CPU 1				CPU 2
	===============================	===============================
	ACCESS_ONCE(*A) = a;
	ACQUIRE M		     [1]
	ACCESS_ONCE(*B) = b;
	ACCESS_ONCE(*C) = c;
	RELEASE M	     [1]
	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
					ACQUIRE M		     [2]
					smp_mb__after_unlock_lock();
					ACCESS_ONCE(*F) = f;
					ACCESS_ONCE(*G) = g;
					RELEASE M	     [2]
					ACCESS_ONCE(*H) = h;

CPU 3 might see:

	*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
	ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D

But assuming CPU 1 gets the lock first, CPU 3 won't see any of:

	*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
	*A, *B or *C following RELEASE M [1]
	*F, *G or *H preceding ACQUIRE M [2]
	*A, *B, *C, *E, *F or *G following RELEASE M [2]

Note that the smp_mb__after_unlock_lock() is critically important
here: Without it CPU 3 might see some of the above orderings.
Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
to be seen in order unless CPU 3 holds lock M.


ACQUIRES VS I/O ACCESSES
------------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	cmpxchg();
	atomic_xchg();			atomic_long_xchg();
	atomic_cmpxchg();		atomic_long_cmpxchg();
	atomic_inc_return();		atomic_long_inc_return();
	atomic_dec_return();		atomic_long_dec_return();
	atomic_add_return();		atomic_long_add_return();
	atomic_sub_return();		atomic_long_sub_return();
	atomic_inc_and_test();		atomic_long_inc_and_test();
	atomic_dec_and_test();		atomic_long_dec_and_test();
	atomic_sub_and_test();		atomic_long_sub_and_test();
	atomic_add_negative();		atomic_long_add_negative();
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

	/* when it succeeds (returns 1) */
	atomic_add_unless();		atomic_long_add_unless();

These are used for such things as implementing ACQUIRE-class and RELEASE-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.


The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as RELEASE-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_atomic() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.

[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.
Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
	 example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed()

     These are similar to readX(), but are not guaranteed to be ordered in any
     way.  Be aware that there is no I/O read barrier available.

 (*) ioreadX(), iowriteX()

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.
2513 2514As far as the way a CPU interacts with another part of the system through the 2515caches goes, the memory system has to include the CPU's caches, and memory 2516barriers for the most part act at the interface between the CPU and its cache 2517(memory barriers logically act on the dotted line in the following diagram): 2518 2519 <--- CPU ---> : <----------- Memory -----------> 2520 : 2521 +--------+ +--------+ : +--------+ +-----------+ 2522 | | | | : | | | | +--------+ 2523 | CPU | | Memory | : | CPU | | | | | 2524 | Core |--->| Access |----->| Cache |<-->| | | | 2525 | | | Queue | : | | | |--->| Memory | 2526 | | | | : | | | | | | 2527 +--------+ +--------+ : +--------+ | | | | 2528 : | Cache | +--------+ 2529 : | Coherency | 2530 : | Mechanism | +--------+ 2531 +--------+ +--------+ : +--------+ | | | | 2532 | | | | : | | | | | | 2533 | CPU | | Memory | : | CPU | | |--->| Device | 2534 | Core |--->| Access |----->| Cache |<-->| | | | 2535 | | | Queue | : | | | | | | 2536 | | | | : | | | | +--------+ 2537 +--------+ +--------+ : +--------+ +-----------+ 2538 : 2539 : 2540 2541Although any particular load or store may not actually appear outside of the 2542CPU that issued it since it may have been satisfied within the CPU's own cache, 2543it will still appear as if the full memory access had taken place as far as the 2544other CPUs are concerned since the cache coherency mechanisms will migrate the 2545cacheline over to the accessing CPU and propagate the effects upon conflict. 2546 2547The CPU core may execute instructions in any order it deems fit, provided the 2548expected program causality appears to be maintained. Some of the instructions 2549generate load and store operations which then go into the queue of memory 2550accesses to be performed. The core may place these in the queue in any order 2551it wishes, and continue execution until it is forced to wait for an instruction 2552to complete. 2553 2554What memory barriers are concerned with is controlling the order in which 2555accesses cross from the CPU side of things to the memory side of things, and 2556the order in which the effects are perceived to happen by the other observers 2557in the system. 2558 2559[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see 2560their own loads and stores as if they had happened in program order. 2561 2562[!] MMIO or other device accesses may bypass the cache system. This depends on 2563the properties of the memory window through which devices are accessed and/or 2564the use of any special device communication instructions the CPU may have. 2565 2566 2567CACHE COHERENCY 2568--------------- 2569 2570Life isn't quite as simple as it may appear above, however: for while the 2571caches are expected to be coherent, there's no guarantee that that coherency 2572will be ordered. This means that whilst changes made on one CPU will 2573eventually become visible on all CPUs, there's no guarantee that they will 2574become apparent in the same order on those other CPUs. 
2575 2576 2577Consider dealing with a system that has a pair of CPUs (1 & 2), each of which 2578has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D): 2579 2580 : 2581 : +--------+ 2582 : +---------+ | | 2583 +--------+ : +--->| Cache A |<------->| | 2584 | | : | +---------+ | | 2585 | CPU 1 |<---+ | | 2586 | | : | +---------+ | | 2587 +--------+ : +--->| Cache B |<------->| | 2588 : +---------+ | | 2589 : | Memory | 2590 : +---------+ | System | 2591 +--------+ : +--->| Cache C |<------->| | 2592 | | : | +---------+ | | 2593 | CPU 2 |<---+ | | 2594 | | : | +---------+ | | 2595 +--------+ : +--->| Cache D |<------->| | 2596 : +---------+ | | 2597 : +--------+ 2598 : 2599 2600Imagine the system has the following properties: 2601 2602 (*) an odd-numbered cache line may be in cache A, cache C or it may still be 2603 resident in memory; 2604 2605 (*) an even-numbered cache line may be in cache B, cache D or it may still be 2606 resident in memory; 2607 2608 (*) whilst the CPU core is interrogating one cache, the other cache may be 2609 making use of the bus to access the rest of the system - perhaps to 2610 displace a dirty cacheline or to do a speculative load; 2611 2612 (*) each cache has a queue of operations that need to be applied to that cache 2613 to maintain coherency with the rest of the system; 2614 2615 (*) the coherency queue is not flushed by normal loads to lines already 2616 present in the cache, even though the contents of the queue may 2617 potentially affect those loads. 2618 2619Imagine, then, that two writes are made on the first CPU, with a write barrier 2620between them to guarantee that they will appear to reach that CPU's caches in 2621the requisite order: 2622 2623 CPU 1 CPU 2 COMMENT 2624 =============== =============== ======================================= 2625 u == 0, v == 1 and p == &u, q == &u 2626 v = 2; 2627 smp_wmb(); Make sure change to v is visible before 2628 change to p 2629 <A:modify v=2> v is now in cache A exclusively 2630 p = &v; 2631 <B:modify p=&v> p is now in cache B exclusively 2632 2633The write memory barrier forces the other CPUs in the system to perceive that 2634the local CPU's caches have apparently been updated in the correct order. But 2635now imagine that the second CPU wants to read those values: 2636 2637 CPU 1 CPU 2 COMMENT 2638 =============== =============== ======================================= 2639 ... 2640 q = p; 2641 x = *q; 2642 2643The above pair of reads may then fail to happen in the expected order, as the 2644cacheline holding p may get updated in one of the second CPU's caches whilst 2645the update to the cacheline holding v is delayed in the other of the second 2646CPU's caches by some other cache event: 2647 2648 CPU 1 CPU 2 COMMENT 2649 =============== =============== ======================================= 2650 u == 0, v == 1 and p == &u, q == &u 2651 v = 2; 2652 smp_wmb(); 2653 <A:modify v=2> <C:busy> 2654 <C:queue v=2> 2655 p = &v; q = p; 2656 <D:request p> 2657 <B:modify p=&v> <D:commit p=&v> 2658 <D:read p> 2659 x = *q; 2660 <C:read *q> Reads from v before v updated in cache 2661 <C:unbusy> 2662 <C:commit v=2> 2663 2664Basically, whilst both cachelines will be updated on CPU 2 eventually, there's 2665no guarantee that, without intervention, the order of update will be the same 2666as that committed on CPU 1. 2667 2668 2669To intervene, we need to interpolate a data dependency barrier or a read 2670barrier between the loads. 
This will force the cache to commit its coherency
queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
			u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for coordination in the absence of memory barriers.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
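
As a practical note on the DMA case above: device drivers do not normally
flush and invalidate CPU caches by hand; the streaming DMA mapping API
performs whatever cache maintenance the architecture requires.  The
following is an illustrative sketch only - 'dev', 'buf' and 'len' are
hypothetical - see Documentation/DMA-API.txt for the authoritative details:

	#include <linux/dma-mapping.h>

	dma_addr_t handle;

	/* CPU fills buf, then hands it to the device for reading: */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
	/* on a non-coherent system this writes back any dirty cachelines
	   covering buf before the device is allowed to see the memory */

	... tell the device to DMA from 'handle' ...

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

For device-to-memory transfers, DMA_FROM_DEVICE is used instead, and the
cachelines covering the buffer are invalidated so that the CPU cannot see
stale pre-DMA data.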
2735 2736 2737========================= 2738THE THINGS CPUS GET UP TO 2739========================= 2740 2741A programmer might take it for granted that the CPU will perform memory 2742operations in exactly the order specified, so that if the CPU is, for example, 2743given the following piece of code to execute: 2744 2745 a = ACCESS_ONCE(*A); 2746 ACCESS_ONCE(*B) = b; 2747 c = ACCESS_ONCE(*C); 2748 d = ACCESS_ONCE(*D); 2749 ACCESS_ONCE(*E) = e; 2750 2751they would then expect that the CPU will complete the memory operation for each 2752instruction before moving on to the next one, leading to a definite sequence of 2753operations as seen by external observers in the system: 2754 2755 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E. 2756 2757 2758Reality is, of course, much messier. With many CPUs and compilers, the above 2759assumption doesn't hold because: 2760 2761 (*) loads are more likely to need to be completed immediately to permit 2762 execution progress, whereas stores can often be deferred without a 2763 problem; 2764 2765 (*) loads may be done speculatively, and the result discarded should it prove 2766 to have been unnecessary; 2767 2768 (*) loads may be done speculatively, leading to the result having been fetched 2769 at the wrong time in the expected sequence of events; 2770 2771 (*) the order of the memory accesses may be rearranged to promote better use 2772 of the CPU buses and caches; 2773 2774 (*) loads and stores may be combined to improve performance when talking to 2775 memory or I/O hardware that can do batched accesses of adjacent locations, 2776 thus cutting down on transaction setup costs (memory and PCI devices may 2777 both be able to do this); and 2778 2779 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency 2780 mechanisms may alleviate this - once the store has actually hit the cache 2781 - there's no guarantee that the coherency management will be propagated in 2782 order to other CPUs. 2783 2784So what another CPU, say, might actually observe from the above piece of code 2785is: 2786 2787 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B 2788 2789 (Where "LOAD {*C,*D}" is a combined load) 2790 2791 2792However, it is guaranteed that a CPU will be self-consistent: it will see its 2793_own_ accesses appear to be correctly ordered, without the need for a memory 2794barrier. For instance with the following code: 2795 2796 U = ACCESS_ONCE(*A); 2797 ACCESS_ONCE(*A) = V; 2798 ACCESS_ONCE(*A) = W; 2799 X = ACCESS_ONCE(*A); 2800 ACCESS_ONCE(*A) = Y; 2801 Z = ACCESS_ONCE(*A); 2802 2803and assuming no intervention by an external influence, it can be assumed that 2804the final result will appear to be: 2805 2806 U == the original value of *A 2807 X == W 2808 Z == Y 2809 *A == Y 2810 2811The code above may cause the CPU to generate the full sequence of memory 2812accesses: 2813 2814 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A 2815 2816in that order, but, without intervention, the sequence may have almost any 2817combination of elements combined or discarded, provided the program's view of 2818the world remains consistent. Note that ACCESS_ONCE() is -not- optional 2819in the above example, as there are architectures where a given CPU might 2820reorder successive loads to the same location. 
On such architectures,
ACCESS_ONCE() does whatever is necessary to prevent this; on Itanium, for
example, the volatile casts used by ACCESS_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or an ACCESS_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation need never appear outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
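
For a taste of what that document covers, here is a condensed, illustrative
single-producer/single-consumer sketch (not the canonical version - see the
file above for the full treatment).  It assumes a struct circ_buf 'b' from
<linux/circ_buf.h>, a power-of-two 'size' managed by the caller, and a single
thread on each side; smp_rmb() is used on the consumer side for clarity:

	/* producer */
	unsigned long head = b->head;
	unsigned long tail = ACCESS_ONCE(b->tail);

	if (CIRC_SPACE(head, tail, size) >= 1) {
		b->buf[head] = ch;	/* write the item... */
		smp_wmb();		/* ...before publishing the new head */
		b->head = (head + 1) & (size - 1);
	}

	/* consumer */
	unsigned long head = ACCESS_ONCE(b->head);
	unsigned long tail = b->tail;

	if (CIRC_CNT(head, tail, size) >= 1) {
		smp_rmb();		/* pairs with the producer's smp_wmb() */
		ch = b->buf[tail];	/* read the item... */
		smp_mb();		/* ...before freeing the slot */
		b->tail = (tail + 1) & (size - 1);
	}

The smp_wmb()/smp_rmb() pairing guarantees that the consumer never reads an
item before the write that produced it is visible; the consumer's smp_mb()
guarantees the slot is not recycled whilst it is still being read.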
2879 2880 2881========== 2882REFERENCES 2883========== 2884 2885Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek, 2886Digital Press) 2887 Chapter 5.2: Physical Address Space Characteristics 2888 Chapter 5.4: Caches and Write Buffers 2889 Chapter 5.5: Data Sharing 2890 Chapter 5.6: Read/Write Ordering 2891 2892AMD64 Architecture Programmer's Manual Volume 2: System Programming 2893 Chapter 7.1: Memory-Access Ordering 2894 Chapter 7.4: Buffering and Combining Memory Writes 2895 2896IA-32 Intel Architecture Software Developer's Manual, Volume 3: 2897System Programming Guide 2898 Chapter 7.1: Locked Atomic Operations 2899 Chapter 7.2: Memory Ordering 2900 Chapter 7.4: Serializing Instructions 2901 2902The SPARC Architecture Manual, Version 9 2903 Chapter 8: Memory Models 2904 Appendix D: Formal Specification of the Memory Models 2905 Appendix J: Programming with the Memory Models 2906 2907UltraSPARC Programmer Reference Manual 2908 Chapter 5: Memory Accesses and Cacheability 2909 Chapter 15: Sparc-V9 Memory Models 2910 2911UltraSPARC III Cu User's Manual 2912 Chapter 9: Memory Models 2913 2914UltraSPARC IIIi Processor User's Manual 2915 Chapter 8: Memory Models 2916 2917UltraSPARC Architecture 2005 2918 Chapter 9: Memory 2919 Appendix D: Formal Specifications of the Memory Models 2920 2921UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005 2922 Chapter 8: Memory Models 2923 Appendix F: Caches and Cache Coherency 2924 2925Solaris Internals, Core Kernel Architecture, p63-68: 2926 Chapter 3.3: Hardware Considerations for Locks and 2927 Synchronization 2928 2929Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching 2930for Kernel Programmers: 2931 Chapter 13: Other Memory Models 2932 2933Intel Itanium Architecture Software Developer's Manual: Volume 1: 2934 Section 2.6: Speculation 2935 Section 4.4: Memory Access 2936