			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity.

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1           CPU 2
	=============== ===============
	{ A == 1; B == 2 }
	A = 3;          x = A;
	B = 4;          y = B;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,  STORE B=4,  x=LOAD A->3,  y=LOAD B->4
	STORE A=3,  STORE B=4,  y=LOAD B->4,  x=LOAD A->3
	STORE A=3,  x=LOAD A->3,  STORE B=4,  y=LOAD B->4
	STORE A=3,  x=LOAD A->3,  y=LOAD B->2,  STORE B=4
	STORE A=3,  y=LOAD B->2,  STORE B=4,  x=LOAD A->3
	STORE A=3,  y=LOAD B->2,  x=LOAD A->3,  STORE B=4
	STORE B=4,  STORE A=3,  x=LOAD A->3,  y=LOAD B->4
	STORE B=4, ...
	...
and can thus result in four different combinations of values:

	x == 1, y == 2
	x == 1, y == 4
	x == 3, y == 2
	x == 3, y == 4


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1           CPU 2
	=============== ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;          Q = P;
	P = &B;         D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)

Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5

the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
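
In actual kernel code such registers would normally be accessed through the
MMIO accessors, whose ordering properties are discussed in the "Kernel I/O
barrier effects" section.  A minimal sketch; the base address and the
REG_ADDR_PORT/REG_DATA_PORT offsets are hypothetical:

	void __iomem *base;	/* assumed obtained from ioremap() at probe time */
	u32 x;

	writel(5, base + REG_ADDR_PORT);	/* select internal register 5 */
	x = readl(base + REG_DATA_PORT);	/* then read its contents */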
GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
     is required to prevent compiler mischief.  Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends().

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want with
     memory references that are not protected by ACCESS_ONCE().  Without
     ACCESS_ONCE(), the compiler is within its rights to do all sorts of
     "creative" transformations, which are covered in the Compiler Barrier
     section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A, Y = LOAD *B, STORE *D = Z
	X = LOAD *A, STORE *D = Z, Y = LOAD *B
	Y = LOAD *B, X = LOAD *A, STORE *D = Z
	Y = LOAD *B, STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A, Y = LOAD *B
	STORE *D = Z, Y = LOAD *B, X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};
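
If the compiler's merging or discarding of such accesses would be a problem,
the accesses can be wrapped in ACCESS_ONCE(); note that this constrains only
the compiler, and the CPU remains free to combine the accesses in the absence
of barriers.  A minimal sketch:

	X = ACCESS_ONCE(*A);
	Y = ACCESS_ONCE(*(A + 4));	/* compiler may not fuse or discard these */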
=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and smp_load_acquire()
     operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.
     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.
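
As a minimal sketch of how these implicit barriers are typically used for
message passing, using the smp_store_release() and smp_load_acquire()
primitives named above (the variable names are illustrative):

	int data;
	int flag;

	void producer(void)
	{
		data = 42;			/* ordered before the RELEASE */
		smp_store_release(&flag, 1);	/* RELEASE operation */
	}

	int consumer(void)
	{
		if (smp_load_acquire(&flag))	/* ACQUIRE operation */
			return data;		/* guaranteed to observe 42 */
		return -1;
	}

If the consumer's ACQUIRE observes the value stored by the producer's RELEASE,
the producer's store to 'data' is guaranteed to be visible to the consumer's
load from 'data'.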
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not
guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP barrier pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1                 CPU 2
	===============       ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
	                      Q = ACCESS_ONCE(P);
	                      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B,
thus leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1                 CPU 2
	===============       ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
	                      Q = ACCESS_ONCE(P);
	                      <data dependency barrier>
	                      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

	CPU 1                 CPU 2
	===============       ===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	ACCESS_ONCE(P) = 1;
	                      Q = ACCESS_ONCE(P);
	                      <data dependency barrier>
	                      D = M[Q];


The data dependency barrier is very important to the RCU system, for example.
See rcu_assign_pointer() and rcu_dereference() in include/linux/rcupdate.h.
This permits the current target of an RCU'd pointer to be replaced with a new
modified target, without the replacement target appearing to be incompletely
initialised.
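
As a rough sketch of how those primitives package the barriers up (the struct
and the global pointer here are illustrative only; real code would also need
RCU read-side critical sections and a grace period before freeing):

	struct foo {
		int a;
	};
	struct foo *gp;

	void publish(struct foo *p)
	{
		p->a = 42;
		rcu_assign_pointer(gp, p);	/* implies a write barrier */
	}

	int reader(void)
	{
		struct foo *q;

		q = rcu_dereference(gp);	/* implies a data dependency barrier */
		return q ? q->a : -1;
	}

The write barrier implied by rcu_assign_pointer() pairs with the data
dependency barrier implied by rcu_dereference(), so a reader that sees the
new pointer is guaranteed to see the initialised 'a'.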
See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A control dependency requires a full read memory barrier, not simply a data
dependency barrier, to make it work correctly.  Consider the following bit of
code:

	q = ACCESS_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = ACCESS_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

	q = ACCESS_ONCE(a);
	if (q) {
		<read barrier>
		p = ACCESS_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
in the following example:

	q = ACCESS_ONCE(a);
	if (q) {
		ACCESS_ONCE(b) = p;
	}

Please note that ACCESS_ONCE() is not optional!  Without the ACCESS_ONCE(),
the compiler is within its rights to transform this example:

	q = a;
	if (q) {
		b = p;  /* BUG: Compiler can reorder!!! */
		do_something();
	} else {
		b = p;  /* BUG: Compiler can reorder!!! */
		do_something_else();
	}

into this, which of course defeats the ordering:

	b = p;
	q = a;
	if (q)
		do_something();
	else
		do_something_else();

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = p;  /* BUG: Compiler can reorder!!! */
	do_something();

The solution is again ACCESS_ONCE() and barrier(), which preserve the
ordering between the load from variable 'a' and the store to variable 'b':

	q = ACCESS_ONCE(a);
	if (q) {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something_else();
	}

The initial ACCESS_ONCE() is required to prevent the compiler from
proving the value of 'a', and the pair of barrier() invocations is
required to prevent the compiler from pulling the two identical stores
to 'b' out from the legs of the "if" statement.

It is important to note that control dependencies absolutely require
a conditional.  For example, the following "optimized" version of
the above example breaks ordering, which is why the barrier() invocations
are absolutely required if you have identical stores in both legs of
the "if" statement:

	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something();
	} else {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something_else();
	}

It is of course legal for the prior load to be part of the conditional,
for example, as follows:

	if (ACCESS_ONCE(a) > 0) {
		barrier();
		ACCESS_ONCE(b) = q / 2;
		do_something();
	} else {
		barrier();
		ACCESS_ONCE(b) = q / 3;
		do_something_else();
	}

This will again ensure that the load from variable 'a' is ordered before the
stores to variable 'b'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = ACCESS_ONCE(a);
	if (q % MAX) {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = ACCESS_ONCE(a);
	ACCESS_ONCE(b) = p;
	do_something_else();

This transformation loses the ordering between the load from variable 'a'
and the store to variable 'b'.  If you are relying on this ordering, you
should do something like the following:

	q = ACCESS_ONCE(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = p;
		do_something_else();
	}
Finally, control dependencies do -not- provide transitivity.  This is
demonstrated by two related examples:

	CPU 0                     CPU 1
	=====================     =====================
	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
	if (r1 >= 0)              if (r2 >= 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;

	assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following two CPUs would guarantee a related assertion:

	CPU 2                     CPU 3
	=====================     =====================
	ACCESS_ONCE(x) = 2;       ACCESS_ONCE(y) = 2;

	assert(!(r1 == 2 && r2 == 2 && x == 1 && y == 1)); /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the
above assertion can fail after the combined four-CPU example completes.
If you need the four-CPU example to provide ordering, you will need
smp_mb() between the loads and stores in the CPU 0 and CPU 1 code fragments.

In summary:

 (*) Control dependencies can order prior loads against later stores.
     However, they do -not- guarantee any other sort of ordering:
     Not prior loads against later loads, nor prior stores against
     later anything.  If you need these other forms of ordering,
     use smp_rmb(), smp_wmb(), or, in the case of prior stores and
     later loads, smp_mb().

 (*) If both legs of the "if" statement begin with identical stores
     to the same variable, a barrier() statement is required at the
     beginning of each leg of the "if" statement.

 (*) Control dependencies require at least one run-time conditional
     between the prior load and the subsequent store, and this
     conditional must involve the prior load.  If the compiler
     is able to optimize the conditional away, it will have also
     optimized away the ordering.  Careful use of ACCESS_ONCE() can
     help to preserve the needed conditional.

 (*) Control dependencies require that the compiler avoid reordering the
     dependency into nonexistence.  Careful use of ACCESS_ONCE() or
     barrier() can help to preserve your control dependency.  Please
     see the Compiler Barrier section for more information.

 (*) Control dependencies do -not- provide transitivity.  If you
     need transitivity, use smp_mb().


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

A write barrier should always be paired with a data dependency barrier or read
barrier, though a general barrier would also be viable.  Similarly a read
barrier or a data dependency barrier should always be paired with at least a
write barrier, though, again, a general barrier is viable:

	CPU 1                 CPU 2
	===============       ===============
	ACCESS_ONCE(a) = 1;
	<write barrier>
	ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
	                      <read barrier>
	                      y = ACCESS_ONCE(a);

Or:

	CPU 1                 CPU 2
	===============       ===============================
	a = 1;
	<write barrier>
	ACCESS_ONCE(b) = &a;  x = ACCESS_ONCE(b);
	                      <data dependency barrier>
	                      y = *x;

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
	ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
	<write barrier>            \        <read barrier>
	ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
	ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1                   CPU 2
	======================= =======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B            LOAD X
	STORE D = 4             LOAD C (gets &B)
	                        LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.
If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1                   CPU 2
	======================= =======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B            LOAD X
	STORE D = 4             LOAD C (gets &B)
	                        <data dependency barrier>
	                        LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to              ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1                   CPU 2
	======================= =======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
	                        LOAD B
	                        LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1                   CPU 2
	======================= =======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
	                        LOAD B
	                        <read barrier>
	                        LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by
CPU 2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \   rrrrrrrrrrrrrrrrr  |       |
	  barrier causes all effects      \      +-------+      |       |
	  prior to the storage of B ---->  \     | A->1  |----->|       |
	  to be perceptible to CPU 2             +-------+      |       |
	                                         :       :      +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1                   CPU 2
	======================= =======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
	                        LOAD B
	                        LOAD A [first load of A]
	                        <read barrier>
	                        LOAD A [second load of A]
Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \   rrrrrrrrrrrrrrrrr  |       |
	  barrier causes all effects      \      +-------+      |       |
	  prior to the storage of B ---->  \     | A->1  |----->| 2nd   |
	  to be perceptible to CPU 2             +-------+      |       |
	                                         :       :      +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                     rrrrrrrrrrrrrrrrr  |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is they see that they will need to load an
item from memory, and they find a time where they're not using the bus for any
other loads, and so do the load in advance - even though they haven't actually
got to that point in the instruction execution flow yet.  This permits the
actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1                   CPU 2
	======================= =======================
	                        LOAD B
	                        DIVIDE  } Divide instructions generally
	                        DIVIDE  } take a long time to perform
	                        LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1                   CPU 2
	======================= =======================
	                        LOAD B
	                        DIVIDE
	                        DIVIDE
	                        <read barrier>
	                        LOAD A
will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the             +-------+   ~    |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+


TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
always provided by real computer systems.  The following example
demonstrates transitivity (also called "cumulativity"):

	CPU 1                   CPU 2                   CPU 3
	======================= ======================= =======================
		{ X = 0, Y = 0 }
	STORE X=1               LOAD X                  STORE Y=1
	                        <general barrier>       <general barrier>
	                        LOAD Y                  LOAD X

Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense preceded CPU 3's
store to Y.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 2's load from X in some sense came after CPU 1's store, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation is an example of transitivity: if a load executing on
CPU A follows a load from the same variable executing on CPU B, then
CPU A's load must either return the same value that CPU B's load did,
or must return some later value.

In the Linux kernel, use of general memory barriers guarantees
transitivity.  Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.

However, transitivity is -not- guaranteed for read or write barriers.
For example, suppose that CPU 2's general barrier in the above example
is changed to a read barrier as shown below:

	CPU 1                   CPU 2                   CPU 3
	======================= ======================= =======================
		{ X = 0, Y = 0 }
	STORE X=1               LOAD X                  STORE Y=1
	                        <read barrier>          <general barrier>
	                        LOAD Y                  LOAD X

This substitution destroys transitivity: in this example, it is perfectly
legal for CPU 2's load from X to return 1, its load from Y to return 0,
and CPU 3's load from X to return 0.

The key point is that although CPU 2's read barrier orders its pair
of loads, it does not guarantee to order CPU 1's store.  Therefore, if
this example runs on a system where CPUs 1 and 2 share a store buffer
or a level of cache, CPU 2 might have early access to CPU 1's writes.
General barriers are therefore required to ensure that all CPUs agree
on the combined order of CPU 1's and CPU 2's accesses.

To reiterate, if your code requires transitivity, use general barriers
throughout.
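
As a sketch, the first example above might be written as the following three
functions, one run on each CPU (the variables and their run-once setup are
assumed):

	int x, y;
	int r1, r2, r3;

	void cpu1(void)
	{
		ACCESS_ONCE(x) = 1;
	}

	void cpu2(void)
	{
		r1 = ACCESS_ONCE(x);
		smp_mb();		/* general barrier: provides transitivity */
		r2 = ACCESS_ONCE(y);
	}

	void cpu3(void)
	{
		ACCESS_ONCE(y) = 1;
		smp_mb();
		r3 = ACCESS_ONCE(x);
	}

If, after all three functions have run, r1 == 1 and r2 == 0, then r3 is
guaranteed to be 1; replacing cpu2()'s smp_mb() with smp_rmb() forfeits that
guarantee.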
========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.

  (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write variants
of barrier().  However, ACCESS_ONCE() can be thought of as a weak form of
barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.
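
The second property is what makes a busy-wait loop such as the following
sketch work at all (real code would normally also use cpu_relax(), and
ACCESS_ONCE() would be the more selective tool; 'flag' is assumed to be a
shared variable):

	while (!flag)
		barrier();	/* forces 'flag' to be reloaded on each pass */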
The ACCESS_ONCE() function can prevent any number of optimizations that,
while perfectly safe in single-threaded code, can be fatal in concurrent
code.  Here are some examples of these sorts of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = ACCESS_ONCE(x);
	a[1] = ACCESS_ONCE(x);

     In short, ACCESS_ONCE() provides cache coherence for accesses from
     multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use ACCESS_ONCE() to prevent the compiler from doing this to you:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use ACCESS_ONCE() to prevent the compiler from doing this:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it gets
     rid of a load and a branch.  The problem is that the compiler will
     carry out its proof assuming that the current CPU is the only one
     updating variable 'a'.  If variable 'a' is shared, then the compiler's
     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
     that it doesn't know as much as it thinks it does:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the ACCESS_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = ACCESS_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	/* Code that does not store to variable a. */
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use ACCESS_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	ACCESS_ONCE(a) = 0;
	/* Code that does not store to variable a. */
	ACCESS_ONCE(a) = 0;

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.
     For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		ACCESS_ONCE(msg) = get_message();
		ACCESS_ONCE(flag) = true;
	}

	void interrupt_handler(void)
	{
		if (ACCESS_ONCE(flag))
			process_message(ACCESS_ONCE(msg));
	}

     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
     are needed if this interrupt handler can itself be interrupted
     by something that also accesses 'flag' and 'msg', for example,
     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
     needed in interrupt_handler() other than for documentation purposes.
     (Note also that nested interrupts do not typically occur in modern
     Linux kernels; in fact, if an interrupt handler returns with
     interrupts enabled, you will get a WARN_ONCE() splat.)

     You should assume that the compiler can move ACCESS_ONCE() past
     code not containing ACCESS_ONCE(), barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but ACCESS_ONCE()
     is more selective: With ACCESS_ONCE(), the compiler need only forget
     the contents of the indicated memory locations, while with barrier()
     the compiler must discard the value of all memory locations that
     it has currently cached in any machine registers.  Of course,
     the compiler must also respect the order in which the ACCESS_ONCE()s
     occur, though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use ACCESS_ONCE() to prevent this as follows:

	if (a)
		ACCESS_ONCE(b) = a;
	else
		ACCESS_ONCE(b) = 42;

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use ACCESS_ONCE() to prevent
     invented loads.

 (*) Finally, for aligned memory locations whose size allows them to be
     accessed with a single memory-reference instruction, ACCESS_ONCE()
     prevents "load tearing" and "store tearing," in which a single large
     access is replaced by multiple smaller accesses.
     For example, given an architecture having 16-bit store instructions
     with 7-bit immediate fields, the compiler might be tempted to use two
     16-bit store-immediate instructions to implement the following 32-bit
     store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of ACCESS_ONCE() prevents store tearing in the following example:

	ACCESS_ONCE(p) = 0x00010002;

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
     the compiler would be well within its rights to implement these three
     assignment statements as a pair of 32-bit loads followed by a pair
     of 32-bit stores.  This would result in load tearing on 'foo1.b'
     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
     in this example:

	foo2.a = foo1.a;
	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
	foo2.c = foo1.c;

All that aside, it is never necessary to use ACCESS_ONCE() on a variable
that has been marked volatile.  For example, because 'jiffies' is marked
volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
for this is that ACCESS_ONCE() is implemented as a volatile cast, which
has no effect when its argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE            MANDATORY               SMP CONDITIONAL
	=============== ======================= ===========================
	GENERAL         mb()                    smp_mb()
	WRITE           wmb()                   smp_wmb()
	READ            rmb()                   smp_rmb()
	DATA DEPENDENCY read_barrier_depends()  smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. a[b] would have to load the value
of b before loading a[b]), however there is no guarantee in the C
specification that the compiler may not speculate the value of b (eg. is
equal to 1) and load a[b] before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).
There is also the problem of a compiler reloading b after having loaded a[b],
thus having a newer copy of b than a[b].  A consensus has not yet been
reached about these problems, however the ACCESS_ONCE() macro is a good place
to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
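
This reduction is visible in the way the barriers are usually defined; the
following is a simplified sketch of a typical architecture's definitions (the
real ones live in the arch headers and include/asm-generic/barrier.h):

	#ifdef CONFIG_SMP
	#define smp_mb()	mb()
	#define smp_rmb()	rmb()
	#define smp_wmb()	wmb()
	#else
	#define smp_mb()	barrier()
	#define smp_rmb()	barrier()
	#define smp_wmb()	barrier()
	#endif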
[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers unnecessarily impose overhead on UP systems.  They may, however, be
used to control MMIO effects on accesses through relaxed memory I/O windows.
These are required even on non-SMP systems as they affect the order in which
memory operations appear to a device by prohibiting both the compiler and the
CPU from reordering them.


There are some more advanced barrier functions:

 (*) set_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.


 (*) smp_mb__before_atomic_dec();
 (*) smp_mb__after_atomic_dec();
 (*) smp_mb__before_atomic_inc();
 (*) smp_mb__after_atomic_inc();

     These are for use with atomic add, subtract, increment and decrement
     functions that don't return a value, especially when used for reference
     counting.  These functions do not imply memory barriers.

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic_dec();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


 (*) smp_mb__before_clear_bit(void);
 (*) smp_mb__after_clear_bit(void);

     These are for use similar to the atomic inc/dec barriers.  These are
     typically used for bitwise unlocking operations, so care must be taken as
     there are no implicit memory barriers here either.

     Consider implementing an unlock operation of some nature by clearing a
     locking bit.  The clear_bit() would then need to be barriered like this:

	smp_mb__before_clear_bit();
	clear_bit( ... );

     This prevents memory operations before the clear leaking to after it.  See
     the subsection on "Locking Functions" with reference to RELEASE operation
     implications.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.


MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.
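
A typical use is to keep MMIO writes issued under a spinlock from reaching the
device out of order when the lock migrates between CPUs.  A sketch, assuming a
hypothetical device structure with a register mapping and a lock:

	spin_lock(&dev->lock);
	writel(val, dev->reg);	/* MMIO write to a weakly ordered region */
	mmiowb();		/* order it before the unlock */
	spin_unlock(&dev->lock);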

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


ACQUIRING FUNCTIONS
-------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores
 (*) RCU

In all cases there are variants on "ACQUIRE" operations and "RELEASE"
operations for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
     combined with a following ACQUIRE, orders prior loads against
     subsequent loads and stores and also orders prior stores against
     subsequent stores.  Note that this is weaker than smp_mb()!  The
     smp_mb__before_spinlock() primitive is free on many architectures.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to happen
after the ACQUIRE, and an access following the RELEASE to happen before the
RELEASE, and the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
pair to produce a full barrier, the ACQUIRE can be followed by an
smp_mb__after_unlock_lock() invocation.
This will produce a full barrier 1737if either (a) the RELEASE and the ACQUIRE are executed by the same 1738CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable. 1739The smp_mb__after_unlock_lock() primitive is free on many architectures. 1740Without smp_mb__after_unlock_lock(), the CPU's execution of the critical 1741sections corresponding to the RELEASE and the ACQUIRE can cross, so that: 1742 1743 *A = a; 1744 RELEASE M 1745 ACQUIRE N 1746 *B = b; 1747 1748could occur as: 1749 1750 ACQUIRE N, STORE *B, STORE *A, RELEASE M 1751 1752It might appear that this reordering could introduce a deadlock. 1753However, this cannot happen because if such a deadlock threatened, 1754the RELEASE would simply complete, thereby avoiding the deadlock. 1755 1756 Why does this work? 1757 1758 One key point is that we are only talking about the CPU doing 1759 the reordering, not the compiler. If the compiler (or, for 1760 that matter, the developer) switched the operations, deadlock 1761 -could- occur. 1762 1763 But suppose the CPU reordered the operations. In this case, 1764 the unlock precedes the lock in the assembly code. The CPU 1765 simply elected to try executing the later lock operation first. 1766 If there is a deadlock, this lock operation will simply spin (or 1767 try to sleep, but more on that later). The CPU will eventually 1768 execute the unlock operation (which preceded the lock operation 1769 in the assembly code), which will unravel the potential deadlock, 1770 allowing the lock operation to succeed. 1771 1772 But what if the lock is a sleeplock? In that case, the code will 1773 try to enter the scheduler, where it will eventually encounter 1774 a memory barrier, which will force the earlier unlock operation 1775 to complete, again unraveling the deadlock. There might be 1776 a sleep-unlock race, but the locking primitive needs to resolve 1777 such races properly in any case. 1778 1779With smp_mb__after_unlock_lock(), the two critical sections cannot overlap. 1780For example, with the following code, the store to *A will always be 1781seen by other CPUs before the store to *B: 1782 1783 *A = a; 1784 RELEASE M 1785 ACQUIRE N 1786 smp_mb__after_unlock_lock(); 1787 *B = b; 1788 1789The operations will always occur in one of the following orders: 1790 1791 STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B 1792 STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B 1793 ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B 1794 1795If the RELEASE and ACQUIRE were instead both operating on the same lock 1796variable, only the first of these alternatives can occur. In addition, 1797the more strongly ordered systems may rule out some of the above orders. 1798But in any case, as noted earlier, the smp_mb__after_unlock_lock() 1799ensures that the store to *A will always be seen as happening before 1800the store to *B. 1801 1802Locks and semaphores may not provide any guarantee of ordering on UP compiled 1803systems, and so cannot be counted on in such a situation to actually achieve 1804anything at all - especially with respect to I/O accesses - unless combined 1805with interrupt disabling operations. 1806 1807See also the section on "Inter-CPU locking barrier effects". 
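Expressed as kernel C, the RELEASE-ACQUIRE pattern above might look like the
following sketch (the lock and variable names are purely illustrative):

	DEFINE_SPINLOCK(m);
	DEFINE_SPINLOCK(n);
	int A, B;

	spin_lock(&m);
	A = 1;				/* *A = a */
	spin_unlock(&m);		/* RELEASE M */
	spin_lock(&n);			/* ACQUIRE N */
	smp_mb__after_unlock_lock();
	B = 2;				/* *B = b */
	spin_unlock(&n);

Without the smp_mb__after_unlock_lock(), another CPU could observe the store
to B before the store to A; with it, the two stores are guaranteed to be seen
in program order.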
1808 1809 1810As an example, consider the following: 1811 1812 *A = a; 1813 *B = b; 1814 ACQUIRE 1815 *C = c; 1816 *D = d; 1817 RELEASE 1818 *E = e; 1819 *F = f; 1820 1821The following sequence of events is acceptable: 1822 1823 ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE 1824 1825 [+] Note that {*F,*A} indicates a combined access. 1826 1827But none of the following are: 1828 1829 {*F,*A}, *B, ACQUIRE, *C, *D, RELEASE, *E 1830 *A, *B, *C, ACQUIRE, *D, RELEASE, *E, *F 1831 *A, *B, ACQUIRE, *C, RELEASE, *D, *E, *F 1832 *B, ACQUIRE, *C, *D, RELEASE, {*F,*A}, *E 1833 1834 1835 1836INTERRUPT DISABLING FUNCTIONS 1837----------------------------- 1838 1839Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts 1840(RELEASE equivalent) will act as compiler barriers only. So if memory or I/O 1841barriers are required in such a situation, they must be provided from some 1842other means. 1843 1844 1845SLEEP AND WAKE-UP FUNCTIONS 1846--------------------------- 1847 1848Sleeping and waking on an event flagged in global data can be viewed as an 1849interaction between two pieces of data: the task state of the task waiting for 1850the event and the global data used to indicate the event. To make sure that 1851these appear to happen in the right order, the primitives to begin the process 1852of going to sleep, and the primitives to initiate a wake up imply certain 1853barriers. 1854 1855Firstly, the sleeper normally follows something like this sequence of events: 1856 1857 for (;;) { 1858 set_current_state(TASK_UNINTERRUPTIBLE); 1859 if (event_indicated) 1860 break; 1861 schedule(); 1862 } 1863 1864A general memory barrier is interpolated automatically by set_current_state() 1865after it has altered the task state: 1866 1867 CPU 1 1868 =============================== 1869 set_current_state(); 1870 set_mb(); 1871 STORE current->state 1872 <general barrier> 1873 LOAD event_indicated 1874 1875set_current_state() may be wrapped by: 1876 1877 prepare_to_wait(); 1878 prepare_to_wait_exclusive(); 1879 1880which therefore also imply a general memory barrier after setting the state. 1881The whole sequence above is available in various canned forms, all of which 1882interpolate the memory barrier in the right place: 1883 1884 wait_event(); 1885 wait_event_interruptible(); 1886 wait_event_interruptible_exclusive(); 1887 wait_event_interruptible_timeout(); 1888 wait_event_killable(); 1889 wait_event_timeout(); 1890 wait_on_bit(); 1891 wait_on_bit_lock(); 1892 1893 1894Secondly, code that performs a wake up normally follows something like this: 1895 1896 event_indicated = 1; 1897 wake_up(&event_wait_queue); 1898 1899or: 1900 1901 event_indicated = 1; 1902 wake_up_process(event_daemon); 1903 1904A write memory barrier is implied by wake_up() and co. if and only if they wake 1905something up. 
The barrier occurs before the task state is cleared, and so sits 1906between the STORE to indicate the event and the STORE to set TASK_RUNNING: 1907 1908 CPU 1 CPU 2 1909 =============================== =============================== 1910 set_current_state(); STORE event_indicated 1911 set_mb(); wake_up(); 1912 STORE current->state <write barrier> 1913 <general barrier> STORE current->state 1914 LOAD event_indicated 1915 1916The available waker functions include: 1917 1918 complete(); 1919 wake_up(); 1920 wake_up_all(); 1921 wake_up_bit(); 1922 wake_up_interruptible(); 1923 wake_up_interruptible_all(); 1924 wake_up_interruptible_nr(); 1925 wake_up_interruptible_poll(); 1926 wake_up_interruptible_sync(); 1927 wake_up_interruptible_sync_poll(); 1928 wake_up_locked(); 1929 wake_up_locked_poll(); 1930 wake_up_nr(); 1931 wake_up_poll(); 1932 wake_up_process(); 1933 1934 1935[!] Note that the memory barriers implied by the sleeper and the waker do _not_ 1936order multiple stores before the wake-up with respect to loads of those stored 1937values after the sleeper has called set_current_state(). For instance, if the 1938sleeper does: 1939 1940 set_current_state(TASK_INTERRUPTIBLE); 1941 if (event_indicated) 1942 break; 1943 __set_current_state(TASK_RUNNING); 1944 do_something(my_data); 1945 1946and the waker does: 1947 1948 my_data = value; 1949 event_indicated = 1; 1950 wake_up(&event_wait_queue); 1951 1952there's no guarantee that the change to event_indicated will be perceived by 1953the sleeper as coming after the change to my_data. In such a circumstance, the 1954code on both sides must interpolate its own memory barriers between the 1955separate data accesses. Thus the above sleeper ought to do: 1956 1957 set_current_state(TASK_INTERRUPTIBLE); 1958 if (event_indicated) { 1959 smp_rmb(); 1960 do_something(my_data); 1961 } 1962 1963and the waker should do: 1964 1965 my_data = value; 1966 smp_wmb(); 1967 event_indicated = 1; 1968 wake_up(&event_wait_queue); 1969 1970 1971MISCELLANEOUS FUNCTIONS 1972----------------------- 1973 1974Other functions that imply barriers: 1975 1976 (*) schedule() and similar imply full memory barriers. 1977 1978 1979=================================== 1980INTER-CPU ACQUIRING BARRIER EFFECTS 1981=================================== 1982 1983On SMP systems locking primitives give a more substantial form of barrier: one 1984that does affect memory access ordering on other CPUs, within the context of 1985conflict on any particular lock. 1986 1987 1988ACQUIRES VS MEMORY ACCESSES 1989--------------------------- 1990 1991Consider the following: the system has a pair of spinlocks (M) and (Q), and 1992three CPUs; then should the following sequence of events occur: 1993 1994 CPU 1 CPU 2 1995 =============================== =============================== 1996 ACCESS_ONCE(*A) = a; ACCESS_ONCE(*E) = e; 1997 ACQUIRE M ACQUIRE Q 1998 ACCESS_ONCE(*B) = b; ACCESS_ONCE(*F) = f; 1999 ACCESS_ONCE(*C) = c; ACCESS_ONCE(*G) = g; 2000 RELEASE M RELEASE Q 2001 ACCESS_ONCE(*D) = d; ACCESS_ONCE(*H) = h; 2002 2003Then there is no guarantee as to what order CPU 3 will see the accesses to *A 2004through *H occur in, other than the constraints imposed by the separate locks 2005on the separate CPUs. 
It might, for example, see: 2006 2007 *E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M 2008 2009But it won't see any of: 2010 2011 *B, *C or *D preceding ACQUIRE M 2012 *A, *B or *C following RELEASE M 2013 *F, *G or *H preceding ACQUIRE Q 2014 *E, *F or *G following RELEASE Q 2015 2016 2017However, if the following occurs: 2018 2019 CPU 1 CPU 2 2020 =============================== =============================== 2021 ACCESS_ONCE(*A) = a; 2022 ACQUIRE M [1] 2023 ACCESS_ONCE(*B) = b; 2024 ACCESS_ONCE(*C) = c; 2025 RELEASE M [1] 2026 ACCESS_ONCE(*D) = d; ACCESS_ONCE(*E) = e; 2027 ACQUIRE M [2] 2028 smp_mb__after_unlock_lock(); 2029 ACCESS_ONCE(*F) = f; 2030 ACCESS_ONCE(*G) = g; 2031 RELEASE M [2] 2032 ACCESS_ONCE(*H) = h; 2033 2034CPU 3 might see: 2035 2036 *E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1], 2037 ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D 2038 2039But assuming CPU 1 gets the lock first, CPU 3 won't see any of: 2040 2041 *B, *C, *D, *F, *G or *H preceding ACQUIRE M [1] 2042 *A, *B or *C following RELEASE M [1] 2043 *F, *G or *H preceding ACQUIRE M [2] 2044 *A, *B, *C, *E, *F or *G following RELEASE M [2] 2045 2046Note that the smp_mb__after_unlock_lock() is critically important 2047here: Without it CPU 3 might see some of the above orderings. 2048Without smp_mb__after_unlock_lock(), the accesses are not guaranteed 2049to be seen in order unless CPU 3 holds lock M. 2050 2051 2052ACQUIRES VS I/O ACCESSES 2053------------------------ 2054 2055Under certain circumstances (especially involving NUMA), I/O accesses within 2056two spinlocked sections on two different CPUs may be seen as interleaved by the 2057PCI bridge, because the PCI bridge does not necessarily participate in the 2058cache-coherence protocol, and is therefore incapable of issuing the required 2059read memory barriers. 2060 2061For example: 2062 2063 CPU 1 CPU 2 2064 =============================== =============================== 2065 spin_lock(Q) 2066 writel(0, ADDR) 2067 writel(1, DATA); 2068 spin_unlock(Q); 2069 spin_lock(Q); 2070 writel(4, ADDR); 2071 writel(5, DATA); 2072 spin_unlock(Q); 2073 2074may be seen by the PCI bridge as follows: 2075 2076 STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5 2077 2078which would probably cause the hardware to malfunction. 2079 2080 2081What is necessary here is to intervene with an mmiowb() before dropping the 2082spinlock, for example: 2083 2084 CPU 1 CPU 2 2085 =============================== =============================== 2086 spin_lock(Q) 2087 writel(0, ADDR) 2088 writel(1, DATA); 2089 mmiowb(); 2090 spin_unlock(Q); 2091 spin_lock(Q); 2092 writel(4, ADDR); 2093 writel(5, DATA); 2094 mmiowb(); 2095 spin_unlock(Q); 2096 2097this will ensure that the two stores issued on CPU 1 appear at the PCI bridge 2098before either of the stores issued on CPU 2. 2099 2100 2101Furthermore, following a store by a load from the same device obviates the need 2102for the mmiowb(), because the load forces the store to complete before the load 2103is performed: 2104 2105 CPU 1 CPU 2 2106 =============================== =============================== 2107 spin_lock(Q) 2108 writel(0, ADDR) 2109 a = readl(DATA); 2110 spin_unlock(Q); 2111 spin_lock(Q); 2112 writel(4, ADDR); 2113 b = readl(DATA); 2114 spin_unlock(Q); 2115 2116 2117See Documentation/DocBook/deviceiobook.tmpl for more information. 2118 2119 2120================================= 2121WHERE ARE MEMORY BARRIERS NEEDED? 
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.
2217 2218The way to deal with this is to insert a general SMP memory barrier: 2219 2220 LOAD waiter->list.next; 2221 LOAD waiter->task; 2222 smp_mb(); 2223 STORE waiter->task; 2224 CALL wakeup 2225 RELEASE task 2226 2227In this case, the barrier makes a guarantee that all memory accesses before the 2228barrier will appear to happen before all the memory accesses after the barrier 2229with respect to the other CPUs on the system. It does _not_ guarantee that all 2230the memory accesses before the barrier will be complete by the time the barrier 2231instruction itself is complete. 2232 2233On a UP system - where this wouldn't be a problem - the smp_mb() is just a 2234compiler barrier, thus making sure the compiler emits the instructions in the 2235right order without actually intervening in the CPU. Since there's only one 2236CPU, that CPU's dependency ordering logic will take care of everything else. 2237 2238 2239ATOMIC OPERATIONS 2240----------------- 2241 2242Whilst they are technically interprocessor interaction considerations, atomic 2243operations are noted specially as some of them imply full memory barriers and 2244some don't, but they're very heavily relied on as a group throughout the 2245kernel. 2246 2247Any atomic operation that modifies some state in memory and returns information 2248about the state (old or new) implies an SMP-conditional general memory barrier 2249(smp_mb()) on each side of the actual operation (with the exception of 2250explicit lock operations, described later). These include: 2251 2252 xchg(); 2253 cmpxchg(); 2254 atomic_xchg(); atomic_long_xchg(); 2255 atomic_cmpxchg(); atomic_long_cmpxchg(); 2256 atomic_inc_return(); atomic_long_inc_return(); 2257 atomic_dec_return(); atomic_long_dec_return(); 2258 atomic_add_return(); atomic_long_add_return(); 2259 atomic_sub_return(); atomic_long_sub_return(); 2260 atomic_inc_and_test(); atomic_long_inc_and_test(); 2261 atomic_dec_and_test(); atomic_long_dec_and_test(); 2262 atomic_sub_and_test(); atomic_long_sub_and_test(); 2263 atomic_add_negative(); atomic_long_add_negative(); 2264 test_and_set_bit(); 2265 test_and_clear_bit(); 2266 test_and_change_bit(); 2267 2268 /* when succeeds (returns 1) */ 2269 atomic_add_unless(); atomic_long_add_unless(); 2270 2271These are used for such things as implementing ACQUIRE-class and RELEASE-class 2272operations and adjusting reference counters towards object destruction, and as 2273such the implicit memory barrier effects are necessary. 2274 2275 2276The following operations are potential problems as they do _not_ imply memory 2277barriers, but might be used for implementing such things as RELEASE-class 2278operations: 2279 2280 atomic_set(); 2281 set_bit(); 2282 clear_bit(); 2283 change_bit(); 2284 2285With these the appropriate explicit memory barrier should be used if necessary 2286(smp_mb__before_clear_bit() for instance). 2287 2288 2289The following also do _not_ imply memory barriers, and so may require explicit 2290memory barriers under some circumstances (smp_mb__before_atomic_dec() for 2291instance): 2292 2293 atomic_add(); 2294 atomic_sub(); 2295 atomic_inc(); 2296 atomic_dec(); 2297 2298If they're used for statistics generation, then they probably don't need memory 2299barriers, unless there's a coupling between statistical data. 
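For instance, a bare event counter might plausibly be bumped with no
additional barriers at all (a hypothetical sketch; the counter name is
illustrative):

	static atomic_t rx_packets = ATOMIC_INIT(0);

	/* Nothing orders against this statistic, so no barrier is needed. */
	atomic_inc(&rx_packets);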
2300 2301If they're used for reference counting on an object to control its lifetime, 2302they probably don't need memory barriers because either the reference count 2303will be adjusted inside a locked section, or the caller will already hold 2304sufficient references to make the lock, and thus a memory barrier unnecessary. 2305 2306If they're used for constructing a lock of some description, then they probably 2307do need memory barriers as a lock primitive generally has to do things in a 2308specific order. 2309 2310Basically, each usage case has to be carefully considered as to whether memory 2311barriers are needed or not. 2312 2313The following operations are special locking primitives: 2314 2315 test_and_set_bit_lock(); 2316 clear_bit_unlock(); 2317 __clear_bit_unlock(); 2318 2319These implement ACQUIRE-class and RELEASE-class operations. These should be used in 2320preference to other operations when implementing locking primitives, because 2321their implementations can be optimised on many architectures. 2322 2323[!] Note that special memory barrier primitives are available for these 2324situations because on some CPUs the atomic instructions used imply full memory 2325barriers, and so barrier instructions are superfluous in conjunction with them, 2326and in such cases the special barrier primitives will be no-ops. 2327 2328See Documentation/atomic_ops.txt for more information. 2329 2330 2331ACCESSING DEVICES 2332----------------- 2333 2334Many devices can be memory mapped, and so appear to the CPU as if they're just 2335a set of memory locations. To control such a device, the driver usually has to 2336make the right memory accesses in exactly the right order. 2337 2338However, having a clever CPU or a clever compiler creates a potential problem 2339in that the carefully sequenced accesses in the driver code won't reach the 2340device in the requisite order if the CPU or the compiler thinks it is more 2341efficient to reorder, combine or merge accesses - something that would cause 2342the device to malfunction. 2343 2344Inside of the Linux kernel, I/O should be done through the appropriate accessor 2345routines - such as inb() or writel() - which know how to make such accesses 2346appropriately sequential. Whilst this, for the most part, renders the explicit 2347use of memory barriers unnecessary, there are a couple of situations where they 2348might be needed: 2349 2350 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and 2351 so for _all_ general drivers locks should be used and mmiowb() must be 2352 issued prior to unlocking the critical section. 2353 2354 (2) If the accessor functions are used to refer to an I/O memory window with 2355 relaxed memory access properties, then _mandatory_ memory barriers are 2356 required to enforce ordering. 2357 2358See Documentation/DocBook/deviceiobook.tmpl for more information. 2359 2360 2361INTERRUPTS 2362---------- 2363 2364A driver may be interrupted by its own interrupt service routine, and thus the 2365two parts of the driver may interfere with each other's attempts to control or 2366access the device. 2367 2368This may be alleviated - at least in part - by disabling local interrupts (a 2369form of locking), such that the critical operations are all contained within 2370the interrupt-disabled section in the driver. 
Whilst the driver's interrupt 2371routine is executing, the driver's core may not run on the same CPU, and its 2372interrupt is not permitted to happen again until the current interrupt has been 2373handled, thus the interrupt handler does not need to lock against that. 2374 2375However, consider a driver that was talking to an ethernet card that sports an 2376address register and a data register. If that driver's core talks to the card 2377under interrupt-disablement and then the driver's interrupt handler is invoked: 2378 2379 LOCAL IRQ DISABLE 2380 writew(ADDR, 3); 2381 writew(DATA, y); 2382 LOCAL IRQ ENABLE 2383 <interrupt> 2384 writew(ADDR, 4); 2385 q = readw(DATA); 2386 </interrupt> 2387 2388The store to the data register might happen after the second store to the 2389address register if ordering rules are sufficiently relaxed: 2390 2391 STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA 2392 2393 2394If ordering rules are relaxed, it must be assumed that accesses done inside an 2395interrupt disabled section may leak outside of it and may interleave with 2396accesses performed in an interrupt - and vice versa - unless implicit or 2397explicit barriers are used. 2398 2399Normally this won't be a problem because the I/O accesses done inside such 2400sections will include synchronous load operations on strictly ordered I/O 2401registers that form implicit I/O barriers. If this isn't sufficient then an 2402mmiowb() may need to be used explicitly. 2403 2404 2405A similar situation may occur between an interrupt routine and two routines 2406running on separate CPUs that communicate with each other. If such a case is 2407likely, then interrupt-disabling locks should be used to guarantee ordering. 2408 2409 2410========================== 2411KERNEL I/O BARRIER EFFECTS 2412========================== 2413 2414When accessing I/O memory, drivers should use the appropriate accessor 2415functions: 2416 2417 (*) inX(), outX(): 2418 2419 These are intended to talk to I/O space rather than memory space, but 2420 that's primarily a CPU-specific concept. The i386 and x86_64 processors do 2421 indeed have special I/O space access cycles and instructions, but many 2422 CPUs don't have such a concept. 2423 2424 The PCI bus, amongst others, defines an I/O space concept which - on such 2425 CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O 2426 space. However, it may also be mapped as a virtual I/O space in the CPU's 2427 memory map, particularly on those CPUs that don't support alternate I/O 2428 spaces. 2429 2430 Accesses to this space may be fully synchronous (as on i386), but 2431 intermediary bridges (such as the PCI host bridge) may not fully honour 2432 that. 2433 2434 They are guaranteed to be fully ordered with respect to each other. 2435 2436 They are not guaranteed to be fully ordered with respect to other types of 2437 memory and I/O operation. 2438 2439 (*) readX(), writeX(): 2440 2441 Whether these are guaranteed to be fully ordered and uncombined with 2442 respect to each other on the issuing CPU depends on the characteristics 2443 defined for the memory window through which they're accessing. On later 2444 i386 architecture machines, for example, this is controlled by way of the 2445 MTRR registers. 2446 2447 Ordinarily, these will be guaranteed to be fully ordered and uncombined, 2448 provided they're not accessing a prefetchable device. 
2449 2450 However, intermediary hardware (such as a PCI bridge) may indulge in 2451 deferral if it so wishes; to flush a store, a load from the same location 2452 is preferred[*], but a load from the same device or from configuration 2453 space should suffice for PCI. 2454 2455 [*] NOTE! attempting to load from the same location as was written to may 2456 cause a malfunction - consider the 16550 Rx/Tx serial registers for 2457 example. 2458 2459 Used with prefetchable I/O memory, an mmiowb() barrier may be required to 2460 force stores to be ordered. 2461 2462 Please refer to the PCI specification for more information on interactions 2463 between PCI transactions. 2464 2465 (*) readX_relaxed() 2466 2467 These are similar to readX(), but are not guaranteed to be ordered in any 2468 way. Be aware that there is no I/O read barrier available. 2469 2470 (*) ioreadX(), iowriteX() 2471 2472 These will perform appropriately for the type of access they're actually 2473 doing, be it inX()/outX() or readX()/writeX(). 2474 2475 2476======================================== 2477ASSUMED MINIMUM EXECUTION ORDERING MODEL 2478======================================== 2479 2480It has to be assumed that the conceptual CPU is weakly-ordered but that it will 2481maintain the appearance of program causality with respect to itself. Some CPUs 2482(such as i386 or x86_64) are more constrained than others (such as powerpc or 2483frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside 2484of arch-specific code. 2485 2486This means that it must be considered that the CPU will execute its instruction 2487stream in any order it feels like - or even in parallel - provided that if an 2488instruction in the stream depends on an earlier instruction, then that 2489earlier instruction must be sufficiently complete[*] before the later 2490instruction may proceed; in other words: provided that the appearance of 2491causality is maintained. 2492 2493 [*] Some instructions have more than one effect - such as changing the 2494 condition codes, changing registers or changing memory - and different 2495 instructions may depend on different effects. 2496 2497A CPU may also discard any instruction sequence that winds up having no 2498ultimate effect. For example, if two adjacent instructions both load an 2499immediate value into the same register, the first may be discarded. 2500 2501 2502Similarly, it has to be assumed that compiler might reorder the instruction 2503stream in any way it sees fit, again provided the appearance of causality is 2504maintained. 2505 2506 2507============================ 2508THE EFFECTS OF THE CPU CACHE 2509============================ 2510 2511The way cached memory operations are perceived across the system is affected to 2512a certain extent by the caches that lie between CPUs and memory, and by the 2513memory coherence system that maintains the consistency of state in the system. 
2514 2515As far as the way a CPU interacts with another part of the system through the 2516caches goes, the memory system has to include the CPU's caches, and memory 2517barriers for the most part act at the interface between the CPU and its cache 2518(memory barriers logically act on the dotted line in the following diagram): 2519 2520 <--- CPU ---> : <----------- Memory -----------> 2521 : 2522 +--------+ +--------+ : +--------+ +-----------+ 2523 | | | | : | | | | +--------+ 2524 | CPU | | Memory | : | CPU | | | | | 2525 | Core |--->| Access |----->| Cache |<-->| | | | 2526 | | | Queue | : | | | |--->| Memory | 2527 | | | | : | | | | | | 2528 +--------+ +--------+ : +--------+ | | | | 2529 : | Cache | +--------+ 2530 : | Coherency | 2531 : | Mechanism | +--------+ 2532 +--------+ +--------+ : +--------+ | | | | 2533 | | | | : | | | | | | 2534 | CPU | | Memory | : | CPU | | |--->| Device | 2535 | Core |--->| Access |----->| Cache |<-->| | | | 2536 | | | Queue | : | | | | | | 2537 | | | | : | | | | +--------+ 2538 +--------+ +--------+ : +--------+ +-----------+ 2539 : 2540 : 2541 2542Although any particular load or store may not actually appear outside of the 2543CPU that issued it since it may have been satisfied within the CPU's own cache, 2544it will still appear as if the full memory access had taken place as far as the 2545other CPUs are concerned since the cache coherency mechanisms will migrate the 2546cacheline over to the accessing CPU and propagate the effects upon conflict. 2547 2548The CPU core may execute instructions in any order it deems fit, provided the 2549expected program causality appears to be maintained. Some of the instructions 2550generate load and store operations which then go into the queue of memory 2551accesses to be performed. The core may place these in the queue in any order 2552it wishes, and continue execution until it is forced to wait for an instruction 2553to complete. 2554 2555What memory barriers are concerned with is controlling the order in which 2556accesses cross from the CPU side of things to the memory side of things, and 2557the order in which the effects are perceived to happen by the other observers 2558in the system. 2559 2560[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see 2561their own loads and stores as if they had happened in program order. 2562 2563[!] MMIO or other device accesses may bypass the cache system. This depends on 2564the properties of the memory window through which devices are accessed and/or 2565the use of any special device communication instructions the CPU may have. 2566 2567 2568CACHE COHERENCY 2569--------------- 2570 2571Life isn't quite as simple as it may appear above, however: for while the 2572caches are expected to be coherent, there's no guarantee that that coherency 2573will be ordered. This means that whilst changes made on one CPU will 2574eventually become visible on all CPUs, there's no guarantee that they will 2575become apparent in the same order on those other CPUs. 
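The rest of this subsection examines why this matters for the classic
pointer-publication pattern, which in kernel C might look like the following
sketch (v, p, q and x are illustrative; real code would normally use
rcu_dereference() on the read side rather than the open-coded barrier):

	/* CPU 1 (writer) */
	v = 2;
	smp_wmb();			/* order store to v before store to p */
	p = &v;

	/* CPU 2 (reader) */
	q = p;
	smp_read_barrier_depends();	/* required on Alpha; a no-op elsewhere */
	x = *q;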
2576 2577 2578Consider dealing with a system that has a pair of CPUs (1 & 2), each of which 2579has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D): 2580 2581 : 2582 : +--------+ 2583 : +---------+ | | 2584 +--------+ : +--->| Cache A |<------->| | 2585 | | : | +---------+ | | 2586 | CPU 1 |<---+ | | 2587 | | : | +---------+ | | 2588 +--------+ : +--->| Cache B |<------->| | 2589 : +---------+ | | 2590 : | Memory | 2591 : +---------+ | System | 2592 +--------+ : +--->| Cache C |<------->| | 2593 | | : | +---------+ | | 2594 | CPU 2 |<---+ | | 2595 | | : | +---------+ | | 2596 +--------+ : +--->| Cache D |<------->| | 2597 : +---------+ | | 2598 : +--------+ 2599 : 2600 2601Imagine the system has the following properties: 2602 2603 (*) an odd-numbered cache line may be in cache A, cache C or it may still be 2604 resident in memory; 2605 2606 (*) an even-numbered cache line may be in cache B, cache D or it may still be 2607 resident in memory; 2608 2609 (*) whilst the CPU core is interrogating one cache, the other cache may be 2610 making use of the bus to access the rest of the system - perhaps to 2611 displace a dirty cacheline or to do a speculative load; 2612 2613 (*) each cache has a queue of operations that need to be applied to that cache 2614 to maintain coherency with the rest of the system; 2615 2616 (*) the coherency queue is not flushed by normal loads to lines already 2617 present in the cache, even though the contents of the queue may 2618 potentially affect those loads. 2619 2620Imagine, then, that two writes are made on the first CPU, with a write barrier 2621between them to guarantee that they will appear to reach that CPU's caches in 2622the requisite order: 2623 2624 CPU 1 CPU 2 COMMENT 2625 =============== =============== ======================================= 2626 u == 0, v == 1 and p == &u, q == &u 2627 v = 2; 2628 smp_wmb(); Make sure change to v is visible before 2629 change to p 2630 <A:modify v=2> v is now in cache A exclusively 2631 p = &v; 2632 <B:modify p=&v> p is now in cache B exclusively 2633 2634The write memory barrier forces the other CPUs in the system to perceive that 2635the local CPU's caches have apparently been updated in the correct order. But 2636now imagine that the second CPU wants to read those values: 2637 2638 CPU 1 CPU 2 COMMENT 2639 =============== =============== ======================================= 2640 ... 2641 q = p; 2642 x = *q; 2643 2644The above pair of reads may then fail to happen in the expected order, as the 2645cacheline holding p may get updated in one of the second CPU's caches whilst 2646the update to the cacheline holding v is delayed in the other of the second 2647CPU's caches by some other cache event: 2648 2649 CPU 1 CPU 2 COMMENT 2650 =============== =============== ======================================= 2651 u == 0, v == 1 and p == &u, q == &u 2652 v = 2; 2653 smp_wmb(); 2654 <A:modify v=2> <C:busy> 2655 <C:queue v=2> 2656 p = &v; q = p; 2657 <D:request p> 2658 <B:modify p=&v> <D:commit p=&v> 2659 <D:read p> 2660 x = *q; 2661 <C:read *q> Reads from v before v updated in cache 2662 <C:unbusy> 2663 <C:commit v=2> 2664 2665Basically, whilst both cachelines will be updated on CPU 2 eventually, there's 2666no guarantee that, without intervention, the order of update will be the same 2667as that committed on CPU 1. 2668 2669 2670To intervene, we need to interpolate a data dependency barrier or a read 2671barrier between the loads. 
This will force the cache to commit its coherency 2672queue before processing any further requests: 2673 2674 CPU 1 CPU 2 COMMENT 2675 =============== =============== ======================================= 2676 u == 0, v == 1 and p == &u, q == &u 2677 v = 2; 2678 smp_wmb(); 2679 <A:modify v=2> <C:busy> 2680 <C:queue v=2> 2681 p = &v; q = p; 2682 <D:request p> 2683 <B:modify p=&v> <D:commit p=&v> 2684 <D:read p> 2685 smp_read_barrier_depends() 2686 <C:unbusy> 2687 <C:commit v=2> 2688 x = *q; 2689 <C:read *q> Reads from v after v updated in cache 2690 2691 2692This sort of problem can be encountered on DEC Alpha processors as they have a 2693split cache that improves performance by making better use of the data bus. 2694Whilst most CPUs do imply a data dependency barrier on the read when a memory 2695access depends on a read, not all do, so it may not be relied on. 2696 2697Other CPUs may also have split caches, but must coordinate between the various 2698cachelets for normal memory accesses. The semantics of the Alpha removes the 2699need for coordination in the absence of memory barriers. 2700 2701 2702CACHE COHERENCY VS DMA 2703---------------------- 2704 2705Not all systems maintain cache coherency with respect to devices doing DMA. In 2706such cases, a device attempting DMA may obtain stale data from RAM because 2707dirty cache lines may be resident in the caches of various CPUs, and may not 2708have been written back to RAM yet. To deal with this, the appropriate part of 2709the kernel must flush the overlapping bits of cache on each CPU (and maybe 2710invalidate them as well). 2711 2712In addition, the data DMA'd to RAM by a device may be overwritten by dirty 2713cache lines being written back to RAM from a CPU's cache after the device has 2714installed its own data, or cache lines present in the CPU's cache may simply 2715obscure the fact that RAM has been updated, until at such time as the cacheline 2716is discarded from the CPU's cache and reloaded. To deal with this, the 2717appropriate part of the kernel must invalidate the overlapping bits of the 2718cache on each CPU. 2719 2720See Documentation/cachetlb.txt for more information on cache management. 2721 2722 2723CACHE COHERENCY VS MMIO 2724----------------------- 2725 2726Memory mapped I/O usually takes place through memory locations that are part of 2727a window in the CPU's memory space that has different properties assigned than 2728the usual RAM directed window. 2729 2730Amongst these properties is usually the fact that such accesses bypass the 2731caching entirely and go directly to the device buses. This means MMIO accesses 2732may, in effect, overtake accesses to cached memory that were emitted earlier. 2733A memory barrier isn't sufficient in such a case, but rather the cache must be 2734flushed between the cached memory write and the MMIO access if the two are in 2735any way dependent. 
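In practice, drivers rarely flush or invalidate CPU caches by hand; the
streaming DMA mapping API does whatever the architecture requires.  A
hypothetical transmit path illustrating both of the above points might look
like this sketch (fill_packet(), card_base and the register offsets are
made-up names; dma_map_single() and writel() are the real accessors):

	dma_addr_t handle;

	fill_packet(buf, len);		/* ordinary cached memory writes */

	/* Flushes the CPU cache over buf as needed on architectures that
	 * are not DMA-coherent. */
	handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* MMIO doorbell: bypasses the cache entirely. */
	writel(lower_32_bits(handle), card_base + TX_RING_ADDR);
	writel(TX_GO, card_base + TX_CTRL);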
2736 2737 2738========================= 2739THE THINGS CPUS GET UP TO 2740========================= 2741 2742A programmer might take it for granted that the CPU will perform memory 2743operations in exactly the order specified, so that if the CPU is, for example, 2744given the following piece of code to execute: 2745 2746 a = ACCESS_ONCE(*A); 2747 ACCESS_ONCE(*B) = b; 2748 c = ACCESS_ONCE(*C); 2749 d = ACCESS_ONCE(*D); 2750 ACCESS_ONCE(*E) = e; 2751 2752they would then expect that the CPU will complete the memory operation for each 2753instruction before moving on to the next one, leading to a definite sequence of 2754operations as seen by external observers in the system: 2755 2756 LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E. 2757 2758 2759Reality is, of course, much messier. With many CPUs and compilers, the above 2760assumption doesn't hold because: 2761 2762 (*) loads are more likely to need to be completed immediately to permit 2763 execution progress, whereas stores can often be deferred without a 2764 problem; 2765 2766 (*) loads may be done speculatively, and the result discarded should it prove 2767 to have been unnecessary; 2768 2769 (*) loads may be done speculatively, leading to the result having been fetched 2770 at the wrong time in the expected sequence of events; 2771 2772 (*) the order of the memory accesses may be rearranged to promote better use 2773 of the CPU buses and caches; 2774 2775 (*) loads and stores may be combined to improve performance when talking to 2776 memory or I/O hardware that can do batched accesses of adjacent locations, 2777 thus cutting down on transaction setup costs (memory and PCI devices may 2778 both be able to do this); and 2779 2780 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency 2781 mechanisms may alleviate this - once the store has actually hit the cache 2782 - there's no guarantee that the coherency management will be propagated in 2783 order to other CPUs. 2784 2785So what another CPU, say, might actually observe from the above piece of code 2786is: 2787 2788 LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B 2789 2790 (Where "LOAD {*C,*D}" is a combined load) 2791 2792 2793However, it is guaranteed that a CPU will be self-consistent: it will see its 2794_own_ accesses appear to be correctly ordered, without the need for a memory 2795barrier. For instance with the following code: 2796 2797 U = ACCESS_ONCE(*A); 2798 ACCESS_ONCE(*A) = V; 2799 ACCESS_ONCE(*A) = W; 2800 X = ACCESS_ONCE(*A); 2801 ACCESS_ONCE(*A) = Y; 2802 Z = ACCESS_ONCE(*A); 2803 2804and assuming no intervention by an external influence, it can be assumed that 2805the final result will appear to be: 2806 2807 U == the original value of *A 2808 X == W 2809 Z == Y 2810 *A == Y 2811 2812The code above may cause the CPU to generate the full sequence of memory 2813accesses: 2814 2815 U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A 2816 2817in that order, but, without intervention, the sequence may have almost any 2818combination of elements combined or discarded, provided the program's view of 2819the world remains consistent. Note that ACCESS_ONCE() is -not- optional 2820in the above example, as there are architectures where a given CPU might 2821reorder successive loads to the same location. 
On such architectures,
ACCESS_ONCE() does whatever is necessary to prevent this; on Itanium, for
example, the volatile casts used by ACCESS_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence
before the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or an ACCESS_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to
have two semantically-related cache lines updated at separate times.  This is
where the data dependency barrier really becomes necessary as this
synchronises both caches with the memory coherence system, thus making it
seem like pointer changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
of a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
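A condensed sketch of the pattern described there follows (see that file for
the full treatment; the buffer layout and variable names are illustrative,
though CIRC_SPACE() and CIRC_CNT() come from linux/circ_buf.h):

	/* Producer, serialised against other producers by producer_lock */
	spin_lock(&producer_lock);
	head = buffer->head;
	tail = ACCESS_ONCE(buffer->tail);
	if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
		buffer->buf[head] = item;
		smp_wmb();	/* commit the item before moving head */
		buffer->head = (head + 1) & (buffer->size - 1);
	}
	spin_unlock(&producer_lock);

	/* Consumer, serialised against other consumers by consumer_lock */
	spin_lock(&consumer_lock);
	head = ACCESS_ONCE(buffer->head);
	tail = buffer->tail;
	if (CIRC_CNT(head, tail, buffer->size) >= 1) {
		smp_read_barrier_depends();	/* read index before item */
		item = buffer->buf[tail];
		smp_mb();	/* finish reading item before moving tail */
		buffer->tail = (tail + 1) & (buffer->size - 1);
	}
	spin_unlock(&consumer_lock);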
2880 2881 2882========== 2883REFERENCES 2884========== 2885 2886Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek, 2887Digital Press) 2888 Chapter 5.2: Physical Address Space Characteristics 2889 Chapter 5.4: Caches and Write Buffers 2890 Chapter 5.5: Data Sharing 2891 Chapter 5.6: Read/Write Ordering 2892 2893AMD64 Architecture Programmer's Manual Volume 2: System Programming 2894 Chapter 7.1: Memory-Access Ordering 2895 Chapter 7.4: Buffering and Combining Memory Writes 2896 2897IA-32 Intel Architecture Software Developer's Manual, Volume 3: 2898System Programming Guide 2899 Chapter 7.1: Locked Atomic Operations 2900 Chapter 7.2: Memory Ordering 2901 Chapter 7.4: Serializing Instructions 2902 2903The SPARC Architecture Manual, Version 9 2904 Chapter 8: Memory Models 2905 Appendix D: Formal Specification of the Memory Models 2906 Appendix J: Programming with the Memory Models 2907 2908UltraSPARC Programmer Reference Manual 2909 Chapter 5: Memory Accesses and Cacheability 2910 Chapter 15: Sparc-V9 Memory Models 2911 2912UltraSPARC III Cu User's Manual 2913 Chapter 9: Memory Models 2914 2915UltraSPARC IIIi Processor User's Manual 2916 Chapter 8: Memory Models 2917 2918UltraSPARC Architecture 2005 2919 Chapter 9: Memory 2920 Appendix D: Formal Specifications of the Memory Models 2921 2922UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005 2923 Chapter 8: Memory Models 2924 Appendix F: Caches and Cache Coherency 2925 2926Solaris Internals, Core Kernel Architecture, p63-68: 2927 Chapter 3.3: Hardware Considerations for Locks and 2928 Synchronization 2929 2930Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching 2931for Kernel Programmers: 2932 Chapter 13: Other Memory Models 2933 2934Intel Itanium Architecture Software Developer's Manual: Volume 1: 2935 Section 2.6: Speculation 2936 Section 4.4: Memory Access 2937