Explanation of the Linux-Kernel Memory Consistency Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:Author: Alan Stern <stern@rowland.harvard.edu>
:Created: October 2017

.. Contents

  1. INTRODUCTION
  2. BACKGROUND
  3. A SIMPLE EXAMPLE
  4. A SELECTION OF MEMORY MODELS
  5. ORDERING AND CYCLES
  6. EVENTS
  7. THE PROGRAM ORDER RELATION: po AND po-loc
  8. A WARNING
  9. DEPENDENCY RELATIONS: data, addr, and ctrl
  10. THE READS-FROM RELATION: rf, rfi, and rfe
  11. CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
  12. THE FROM-READS RELATION: fr, fri, and fre
  13. AN OPERATIONAL MODEL
  14. PROPAGATION ORDER RELATION: cumul-fence
  15. DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
  16. SEQUENTIAL CONSISTENCY PER VARIABLE
  17. ATOMIC UPDATES: rmw
  18. THE PRESERVED PROGRAM ORDER RELATION: ppo
  19. AND THEN THERE WAS ALPHA
  20. THE HAPPENS-BEFORE RELATION: hb
  21. THE PROPAGATES-BEFORE RELATION: pb
  22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
  23. LOCKING
  24. PLAIN ACCESSES AND DATA RACES
  25. ODDS AND ENDS


INTRODUCTION
------------

The Linux-kernel memory consistency model (LKMM) is rather complex and
obscure.  This is particularly evident if you read through the
linux-kernel.bell and linux-kernel.cat files that make up the formal
version of the model; they are extremely terse and their meanings are
far from clear.

This document describes the ideas underlying the LKMM.  It is meant
for people who want to understand how the model was designed.  It does
not go into the details of the code in the .bell and .cat files;
rather, it explains in English what the code expresses symbolically.

Sections 2 (BACKGROUND) through 5 (ORDERING AND CYCLES) are aimed
toward beginners; they explain what memory consistency models are and
the basic notions shared by all such models.  People already familiar
with these concepts can skim or skip over them.  Sections 6 (EVENTS)
through 12 (THE FROM-READS RELATION) describe the fundamental
relations used in many models.  Starting in Section 13 (AN OPERATIONAL
MODEL), the workings of the LKMM itself are covered.

Warning: The code examples in this document are not written in the
proper format for litmus tests.  They don't include a header line, the
initializations are not enclosed in braces, the global variables are
not passed by pointers, and they don't have an "exists" clause at the
end.  Converting them to the right format is left as an exercise for
the reader.


BACKGROUND
----------

A memory consistency model (or just memory model, for short) is
something which predicts, given a piece of computer code running on a
particular kind of system, what values may be obtained by the code's
load instructions.  The LKMM makes these predictions for code running
as part of the Linux kernel.

In practice, people tend to use memory models the other way around.
That is, given a piece of code and a collection of values specified
for the loads, the model will predict whether it is possible for the
code to run in such a way that the loads will indeed obtain the
specified values.  Of course, this is just another way of expressing
the same idea.

For code running on a uniprocessor system, the predictions are easy:
Each load instruction must obtain the value written by the most recent
store instruction accessing the same location (we ignore complicating
factors such as DMA and mixed-size accesses).  But on multiprocessor
systems, with multiple CPUs making concurrent accesses to shared
memory locations, things aren't so simple.

Different architectures have differing memory models, and the Linux
kernel supports a variety of architectures.  The LKMM has to be fairly
permissive, in the sense that any behavior allowed by one of these
architectures also has to be allowed by the LKMM.


A SIMPLE EXAMPLE
----------------

Here is a simple example to illustrate the basic concepts.  Consider
some code running as part of a device driver for an input device.  The
driver might contain an interrupt handler which collects data from the
device, stores it in a buffer, and sets a flag to indicate the buffer
is full.  Running concurrently on a different CPU might be a part of
the driver code being executed by a process in the midst of a read(2)
system call.  This code tests the flag to see whether the buffer is
ready, and if it is, copies the data back to userspace.  The buffer
and the flag are memory locations shared between the two CPUs.

We can abstract out the important pieces of the driver code as follows
(the reason for using WRITE_ONCE() and READ_ONCE() instead of simple
assignment statements is discussed later):

        int buf = 0, flag = 0;

        P0()
        {
                WRITE_ONCE(buf, 1);
                WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2 = 0;

                r1 = READ_ONCE(flag);
                if (r1)
                        r2 = READ_ONCE(buf);
        }

Here the P0() function represents the interrupt handler running on one
CPU and P1() represents the read() routine running on another.  The
value 1 stored in buf represents input data collected from the device.
Thus, P0 stores the data in buf and then sets flag.  Meanwhile, P1
reads flag into the private variable r1, and if it is set, reads the
data from buf into a second private variable r2 for copying to
userspace.  (Presumably if flag is not set then the driver will wait a
while and try again.)

This pattern of memory accesses, where one CPU stores values to two
shared memory locations and another CPU loads from those locations in
the opposite order, is widely known as the "Message Passing" or MP
pattern.  It is typical of memory access patterns in the kernel.

Please note that this example code is a simplified abstraction.  Real
buffers are usually larger than a single integer, real device drivers
usually use sleep and wakeup mechanisms rather than polling for I/O
completion, and real code generally doesn't bother to copy values into
private variables before using them.  All that is beside the point;
the idea here is simply to illustrate the overall pattern of memory
accesses by the CPUs.

A memory model will predict what values P1 might obtain for its loads
from flag and buf, or equivalently, what values r1 and r2 might end up
with after the code has finished running.

Some predictions are trivial.  For instance, no sane memory model would
predict that r1 = 42 or r2 = -7, because neither of those values ever
gets stored in flag or buf.

Some nontrivial predictions are nonetheless quite simple.
For instance, P1 might run entirely before P0 begins, in which case r1
and r2 will both be 0 at the end.  Or P0 might run entirely before P1
begins, in which case r1 and r2 will both be 1.

The interesting predictions concern what might happen when the two
routines run concurrently.  One possibility is that P1 runs after P0's
store to buf but before the store to flag.  In this case, r1 and r2
will again both be 0.  (If P1 had been designed to read buf
unconditionally then we would instead have r1 = 0 and r2 = 1.)

However, the most interesting possibility is where r1 = 1 and r2 = 0.
If this were to occur it would mean the driver contains a bug, because
incorrect data would get sent to the user: 0 instead of 1.  As it
happens, the LKMM does predict this outcome can occur, and the example
driver code shown above is indeed buggy.


A SELECTION OF MEMORY MODELS
----------------------------

The first widely cited memory model, and the simplest to understand,
is Sequential Consistency.  According to this model, systems behave as
if each CPU executed its instructions in order but with unspecified
timing.  In other words, the instructions from the various CPUs get
interleaved in a nondeterministic way, always according to some single
global order that agrees with the order of the instructions in the
program source for each CPU.  The model says that the value obtained
by each load is simply the value written by the most recently executed
store to the same memory location, from any CPU.

For the MP example code shown above, Sequential Consistency predicts
that the undesired result r1 = 1, r2 = 0 cannot occur.  The reasoning
goes like this:

        Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
        it, as loads can obtain values only from earlier stores.

        P1 loads from flag before loading from buf, since CPUs execute
        their instructions in order.

        P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
        would be 1 since a load obtains its value from the most recent
        store to the same address.

        P0 stores 1 to buf before storing 1 to flag, since it executes
        its instructions in order.

        Since an instruction (in this case, P0's store to flag) cannot
        execute before itself, the specified outcome is impossible.

However, real computer hardware almost never follows the Sequential
Consistency memory model; doing so would rule out too many valuable
performance optimizations.  On ARM and PowerPC architectures, for
instance, the MP example code really does sometimes yield r1 = 1 and
r2 = 0.

x86 and SPARC follow yet a different memory model: TSO (Total Store
Ordering).  This model predicts that the undesired outcome for the MP
pattern cannot occur, but in other respects it differs from Sequential
Consistency.  One example is the Store Buffer (SB) pattern, in which
each CPU stores to its own shared location and then loads from the
other CPU's location:

        int x = 0, y = 0;

        P0()
        {
                int r0;

                WRITE_ONCE(x, 1);
                r0 = READ_ONCE(y);
        }

        P1()
        {
                int r1;

                WRITE_ONCE(y, 1);
                r1 = READ_ONCE(x);
        }

Sequential Consistency predicts that the outcome r0 = 0, r1 = 0 is
impossible.  (Exercise: Figure out the reasoning.)  But TSO allows
this outcome to occur, and in fact it does sometimes occur on x86 and
SPARC systems.

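(As a hint for the exercise mentioned in the Introduction, here is a
sketch of what the SB example might look like when converted into a
litmus test in the C-language format accepted by the herd7 tool; the
header line and "exists" clause shown are illustrative:

        C SB

        {}

        P0(int *x, int *y)
        {
                int r0;

                WRITE_ONCE(*x, 1);
                r0 = READ_ONCE(*y);
        }

        P1(int *x, int *y)
        {
                int r1;

                WRITE_ONCE(*y, 1);
                r1 = READ_ONCE(*x);
        }

        exists (0:r0=0 /\ 1:r1=0)

The "exists" clause asks whether the outcome of interest -- here, both
loads obtaining 0 -- is allowed by the model.)
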
The LKMM was inspired by the memory models followed by PowerPC, ARM,
x86, Alpha, and other architectures.  However, it is different in
detail from each of them.


ORDERING AND CYCLES
-------------------

Memory models are all about ordering.  Often this is temporal ordering
(i.e., the order in which certain events occur) but it doesn't have to
be; consider for example the order of instructions in a program's
source code.  We saw above that Sequential Consistency makes an
important assumption that CPUs execute instructions in the same order
as those instructions occur in the code, and there are many other
instances of ordering playing central roles in memory models.

The counterpart to ordering is a cycle.  Ordering rules out cycles:
It's not possible to have X ordered before Y, Y ordered before Z, and
Z ordered before X, because this would mean that X is ordered before
itself.  The analysis of the MP example under Sequential Consistency
involved just such an impossible cycle:

        W: P0 stores 1 to flag   executes before
        X: P1 loads 1 from flag  executes before
        Y: P1 loads 0 from buf   executes before
        Z: P0 stores 1 to buf    executes before
        W: P0 stores 1 to flag.

In short, if a memory model requires certain accesses to be ordered,
and a certain outcome for the loads in a piece of code can happen only
if those accesses would form a cycle, then the memory model predicts
that outcome cannot occur.

The LKMM is defined largely in terms of cycles, as we will see.


EVENTS
------

The LKMM does not work directly with the C statements that make up
kernel source code.  Instead it considers the effects of those
statements in a more abstract form, namely, events.  The model
includes three types of events:

        Read events correspond to loads from shared memory, such as
        calls to READ_ONCE(), smp_load_acquire(), or
        rcu_dereference().

        Write events correspond to stores to shared memory, such as
        calls to WRITE_ONCE(), smp_store_release(), or atomic_set().

        Fence events correspond to memory barriers (also known as
        fences), such as calls to smp_rmb() or rcu_read_lock().

These categories are not exclusive; a read or write event can also be
a fence.  This happens with functions like smp_load_acquire() or
spin_lock().  However, no single event can be both a read and a write.
Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
correspond to a pair of events: a read followed by a write.  (The
write event is omitted for executions where it doesn't occur, such as
a cmpxchg() where the comparison fails.)

Other parts of the code, those which do not involve interaction with
shared memory, do not give rise to events.  Thus, arithmetic and
logical computations, control-flow instructions, or accesses to
private memory or CPU registers are not of central interest to the
memory model.  They only affect the model's predictions indirectly.
For example, an arithmetic computation might determine the value that
gets stored to a shared memory location (or in the case of an array
index, the address where the value gets stored), but the memory model
is concerned only with the store itself -- its value and its address
-- not the computation leading up to it.

Events in the LKMM can be linked by various relations, which we will
describe in the following sections.
The memory model requires certain of these relations to be orderings,
that is, it requires them not to have any cycles.


THE PROGRAM ORDER RELATION: po AND po-loc
-----------------------------------------

The most important relation between events is program order (po).  You
can think of it as the order in which statements occur in the source
code after branches are taken into account and loops have been
unrolled.  A better description might be the order in which
instructions are presented to a CPU's execution unit.  Thus, we say
that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
before Y in the instruction stream.

This is inherently a single-CPU relation; two instructions executing
on different CPUs are never linked by po.  Also, it is by definition
an ordering so it cannot have any cycles.

po-loc is a sub-relation of po.  It links two memory accesses when the
first comes before the second in program order and they access the
same memory location (the "-loc" suffix).

Although this may seem straightforward, there is one subtle aspect to
program order we need to explain.  The LKMM was inspired by low-level
architectural memory models which describe the behavior of machine
code, and it retains their outlook to a considerable extent.  The
read, write, and fence events used by the model are close in spirit to
individual machine instructions.  Nevertheless, the LKMM describes
kernel code written in C, and the mapping from C to machine code can
be extremely complex.

Optimizing compilers have great freedom in the way they translate
source code to object code.  They are allowed to apply transformations
that add memory accesses, eliminate accesses, combine them, split them
into pieces, or move them around.  The use of READ_ONCE(), WRITE_ONCE(),
or one of the other atomic or synchronization primitives prevents a
large number of compiler optimizations.  In particular, it is guaranteed
that the compiler will not remove such accesses from the generated code
(unless it can prove the accesses will never be executed), it will not
change the order in which they occur in the code (within limits imposed
by the C standard), and it will not introduce extraneous accesses.

The MP and SB examples above used READ_ONCE() and WRITE_ONCE() rather
than ordinary memory accesses.  Thanks to this usage, we can be certain
that in the MP example, the compiler won't reorder P0's write event to
buf and P0's write event to flag, and similarly for the other shared
memory accesses in the examples.

Since private variables are not shared between CPUs, they can be
accessed normally without READ_ONCE() or WRITE_ONCE().  In fact, they
need not even be stored in normal memory at all -- in principle a
private variable could be stored in a CPU register (hence the convention
that these variables have names starting with the letter 'r').


A WARNING
---------

The protections provided by READ_ONCE(), WRITE_ONCE(), and others are
not perfect; and under some circumstances it is possible for the
compiler to undermine the memory model.  Here is an example.  Suppose
both branches of an "if" statement store the same value to the same
location:

        r1 = READ_ONCE(x);
        if (r1) {
                WRITE_ONCE(y, 2);
                ...  /* do something */
        } else {
                WRITE_ONCE(y, 2);
                ...  /* do something else */
        }

For this code, the LKMM predicts that the load from x will always be
executed before either of the stores to y.  However, a compiler could
lift the stores out of the conditional, transforming the code into
something resembling:

        r1 = READ_ONCE(x);
        WRITE_ONCE(y, 2);
        if (r1) {
                ...  /* do something */
        } else {
                ...  /* do something else */
        }

Given this version of the code, the LKMM would predict that the load
from x could be executed after the store to y.  Thus, the memory
model's original prediction could be invalidated by the compiler.

Another issue arises from the fact that in C, arguments to many
operators and function calls can be evaluated in any order.  For
example:

        r1 = f(5) + g(6);

The object code might call f(5) either before or after g(6); the
memory model cannot assume there is a fixed program order relation
between them.  (In fact, if the function calls are inlined then the
compiler might even interleave their object code.)


DEPENDENCY RELATIONS: data, addr, and ctrl
------------------------------------------

We say that two events are linked by a dependency relation when the
execution of the second event depends in some way on a value obtained
from memory by the first.  The first event must be a read, and the
value it obtains must somehow affect what the second event does.
There are three kinds of dependencies: data, address (addr), and
control (ctrl).

A read and a write event are linked by a data dependency if the value
obtained by the read affects the value stored by the write.  As a very
simple example:

        int x, y;

        r1 = READ_ONCE(x);
        WRITE_ONCE(y, r1 + 5);

The value stored by the WRITE_ONCE obviously depends on the value
loaded by the READ_ONCE.  Such dependencies can wind through
arbitrarily complicated computations, and a write can depend on the
values of multiple reads.

A read event and another memory access event are linked by an address
dependency if the value obtained by the read affects the location
accessed by the other event.  The second event can be either a read or
a write.  Here's another simple example:

        int a[20];
        int i;

        r1 = READ_ONCE(i);
        r2 = READ_ONCE(a[r1]);

Here the location accessed by the second READ_ONCE() depends on the
index value loaded by the first.  Pointer indirection also gives rise
to address dependencies, since the address of a location accessed
through a pointer will depend on the value read earlier from that
pointer.

Finally, a read event X and a write event Y are linked by a control
dependency if Y syntactically lies within an arm of an if statement and
X affects the evaluation of the if condition via a data or address
dependency (or similarly for a switch statement).  Simple example:

        int x, y;

        r1 = READ_ONCE(x);
        if (r1)
                WRITE_ONCE(y, 1984);

Execution of the WRITE_ONCE() is controlled by a conditional expression
which depends on the value obtained by the READ_ONCE(); hence there is
a control dependency from the load to the store.

It should be pretty obvious that events can only depend on reads that
come earlier in program order.  Symbolically, if we have R ->data X,
R ->addr X, or R ->ctrl X (where R is a read event), then we must also
have R ->po X.
It wouldn't make sense for a computation to depend
somehow on a value that doesn't get loaded from shared memory until
later in the code!

Here's a trick question: When is a dependency not a dependency?  Answer:
When it is purely syntactic rather than semantic.  We say a dependency
between two accesses is purely syntactic if the second access doesn't
actually depend on the result of the first.  Here is a trivial example:

        r1 = READ_ONCE(x);
        WRITE_ONCE(y, r1 * 0);

There appears to be a data dependency from the load of x to the store
of y, since the value to be stored is computed from the value that was
loaded.  But in fact, the value stored does not really depend on
anything since it will always be 0.  Thus the data dependency is only
syntactic (it appears to exist in the code) but not semantic (the
second access will always be the same, regardless of the value of the
first access).  Given code like this, a compiler could simply discard
the value returned by the load from x, which would certainly destroy
any dependency.  (The compiler is not permitted to eliminate entirely
the load generated for a READ_ONCE() -- that's one of the nice
properties of READ_ONCE() -- but it is allowed to ignore the load's
value.)

It's natural to object that no one in their right mind would write
code like the above.  However, macro expansions can easily give rise
to this sort of thing, in ways that often are not apparent to the
programmer.

Another mechanism that can lead to purely syntactic dependencies is
related to the notion of "undefined behavior".  Certain program
behaviors are called "undefined" in the C language specification,
which means that when they occur there are no guarantees at all about
the outcome.  Consider the following example:

        int a[1];
        int i;

        r1 = READ_ONCE(i);
        r2 = READ_ONCE(a[r1]);

Access beyond the end or before the beginning of an array is one kind
of undefined behavior.  Therefore the compiler doesn't have to worry
about what will happen if r1 is nonzero, and it can assume that r1
will always be zero regardless of the value actually loaded from i.
(If the assumption turns out to be wrong the resulting behavior will
be undefined anyway, so the compiler doesn't care!)  Thus the value
from the load can be discarded, breaking the address dependency.

The LKMM is unaware that purely syntactic dependencies are different
from semantic dependencies and therefore mistakenly predicts that the
accesses in the two examples above will be ordered.  This is another
example of how the compiler can undermine the memory model.  Be warned.


THE READS-FROM RELATION: rf, rfi, and rfe
-----------------------------------------

The reads-from relation (rf) links a write event to a read event when
the value loaded by the read is the value that was stored by the
write.  In colloquial terms, the load "reads from" the store.  We
write W ->rf R to indicate that the load R reads from the store W.  We
further distinguish the cases where the load and the store occur on
the same CPU (internal reads-from, or rfi) and where they occur on
different CPUs (external reads-from, or rfe).

For our purposes, a memory location's initial value is treated as
though it had been written there by an imaginary initial store that
executes on a separate CPU before the main program runs.

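(To make the notation concrete, here is a minimal sketch in the style
of the earlier examples; it is not part of the original set:

        int x = 0;

        P0()
        {
                WRITE_ONCE(x, 1);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);
        }

If r1 = 1 at the end then P0's store ->rfe P1's load, because the two
events occur on different CPUs.  If instead r1 = 0, the load reads
from the imaginary initial store.  Had the load been placed after the
WRITE_ONCE() in P0 and obtained 1, the link would have been rfi.)
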
Usage of the rf relation implicitly assumes that loads will always
read from a single store.  It doesn't apply properly in the presence
of load-tearing, where a load obtains some of its bits from one store
and some of them from another store.  Fortunately, use of READ_ONCE()
and WRITE_ONCE() will prevent load-tearing; it's not possible to have:

        int x = 0;

        P0()
        {
                WRITE_ONCE(x, 0x1234);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);
        }

and end up with r1 = 0x1200 (partly from x's initial value and partly
from the value stored by P0).

On the other hand, load-tearing is unavoidable when mixed-size
accesses are used.  Consider this example:

        union {
                u32 w;
                u16 h[2];
        } x;

        P0()
        {
                WRITE_ONCE(x.h[0], 0x1234);
                WRITE_ONCE(x.h[1], 0x5678);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x.w);
        }

If r1 = 0x56781234 (little-endian!) at the end, then P1 must have read
from both of P0's stores.  It is possible to handle mixed-size and
unaligned accesses in a memory model, but the LKMM currently does not
attempt to do so.  It requires all accesses to be properly aligned and
of the location's actual size.


CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
------------------------------------------------------------------

Cache coherence is a general principle requiring that in a
multi-processor system, the CPUs must share a consistent view of the
memory contents.  Specifically, it requires that for each location in
shared memory, the stores to that location must form a single global
ordering which all the CPUs agree on (the coherence order), and this
ordering must be consistent with the program order for accesses to
that location.

To put it another way, for any variable x, the coherence order (co) of
the stores to x is simply the order in which the stores overwrite one
another.  The imaginary store which establishes x's initial value
comes first in the coherence order; the store which directly
overwrites the initial value comes second; the store which overwrites
that value comes third, and so on.

You can think of the coherence order as being the order in which the
stores reach x's location in memory (or if you prefer a more
hardware-centric view, the order in which the stores get written to
x's cache line).  We write W ->co W' if W comes before W' in the
coherence order, that is, if the value stored by W gets overwritten,
directly or indirectly, by the value stored by W'.

Coherence order is required to be consistent with program order.  This
requirement takes the form of four coherency rules:

        Write-write coherence: If W ->po-loc W' (i.e., W comes before
        W' in program order and they access the same location), where W
        and W' are two stores, then W ->co W'.

        Write-read coherence: If W ->po-loc R, where W is a store and R
        is a load, then R must read from W or from some other store
        which comes after W in the coherence order.

        Read-write coherence: If R ->po-loc W, where R is a load and W
        is a store, then the store which R reads from must come before
        W in the coherence order.

        Read-read coherence: If R ->po-loc R', where R and R' are two
        loads, then either they read from the same store or else the
        store read by R comes before the store read by R' in the
        coherence order.

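(As an aside, three of these rules are illustrated by violation
examples below.  For the remaining one, write-read coherence, a
hypothetical violation might look like this sketch, which is not from
the original set:

        int x = 0;

        P0()
        {
                int r1;

                WRITE_ONCE(x, 13);
                r1 = READ_ONCE(x);
        }

If r1 = 0 at the end, the write-read coherence rule would be violated:
The load comes after the store in program order, so it must read
either from that store or from one coming later in x's coherence
order, not from the initial value.)
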
This is sometimes referred to as sequential consistency per variable,
because it means that the accesses to any single memory location obey
the rules of the Sequential Consistency memory model.  (According to
Wikipedia, sequential consistency per variable and cache coherence
mean the same thing except that cache coherence includes an extra
requirement that every store eventually becomes visible to every CPU.)

Any reasonable memory model will include cache coherence.  Indeed, our
expectation of cache coherence is so deeply ingrained that violations
of its requirements look more like hardware bugs than programming
errors:

        int x;

        P0()
        {
                WRITE_ONCE(x, 17);
                WRITE_ONCE(x, 23);
        }

If the final value stored in x after this code ran was 17, you would
think your computer was broken.  It would be a violation of the
write-write coherence rule: Since the store of 23 comes later in
program order, it must also come later in x's coherence order and
thus must overwrite the store of 17.

        int x = 0;

        P0()
        {
                int r1;

                r1 = READ_ONCE(x);
                WRITE_ONCE(x, 666);
        }

If r1 = 666 at the end, this would violate the read-write coherence
rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
program order, so it must not read from that store but rather from one
coming earlier in the coherence order (in this case, x's initial
value).

        int x = 0;

        P0()
        {
                WRITE_ONCE(x, 5);
        }

        P1()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                r2 = READ_ONCE(x);
        }

If r1 = 5 (reading from P0's store) and r2 = 0 (reading from the
imaginary store which establishes x's initial value) at the end, this
would violate the read-read coherence rule: The r1 load comes before
the r2 load in program order, so it must not read from a store that
comes later in the coherence order.

(As a minor curiosity, if this code had used normal loads instead of
READ_ONCE() in P1, on Itanium it sometimes could end up with r1 = 5
and r2 = 0!  This results from parallel execution of the operations
encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
another motivation for using READ_ONCE() when accessing shared memory
locations.)

Just like the po relation, co is inherently an ordering -- it is not
possible for a store to directly or indirectly overwrite itself!  And
just like with the rf relation, we distinguish between stores that
occur on the same CPU (internal coherence order, or coi) and stores
that occur on different CPUs (external coherence order, or coe).

On the other hand, stores to different memory locations are never
related by co, just as instructions on different CPUs are never
related by po.  Coherence order is strictly per-location, or if you
prefer, each location has its own independent coherence order.


THE FROM-READS RELATION: fr, fri, and fre
-----------------------------------------

The from-reads relation (fr) can be a little difficult for people to
grok.  It describes the situation where a load reads a value that gets
overwritten by a store.  In other words, we have R ->fr W when the
value that R reads is overwritten (directly or indirectly) by W, or
equivalently, when R reads from a store which comes earlier than W in
the coherence order.

For example:

        int x = 0;

        P0()
        {
                int r1;

                r1 = READ_ONCE(x);
                WRITE_ONCE(x, 2);
        }

The value loaded from x will be 0 (assuming cache coherence!), and it
gets overwritten by the value 2.  Thus there is an fr link from the
READ_ONCE() to the WRITE_ONCE().  If the code contained any later
stores to x, there would also be fr links from the READ_ONCE() to
them.

As with rf, rfi, and rfe, we subdivide the fr relation into fri (when
the load and the store are on the same CPU) and fre (when they are on
different CPUs).

Note that the fr relation is determined entirely by the rf and co
relations; it is not independent.  Given a read event R and a write
event W for the same location, we will have R ->fr W if and only if
the write which R reads from is co-before W.  In symbols,

        (R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).


AN OPERATIONAL MODEL
--------------------

The LKMM is based on various operational memory models, meaning that
the models arise from an abstract view of how a computer system
operates.  Here are the main ideas, as incorporated into the LKMM.

The system as a whole is divided into the CPUs and a memory subsystem.
The CPUs are responsible for executing instructions (not necessarily
in program order), and they communicate with the memory subsystem.
For the most part, executing an instruction requires a CPU to perform
only internal operations.  However, loads, stores, and fences involve
more.

When CPU C executes a store instruction, it tells the memory subsystem
to store a certain value at a certain location.  The memory subsystem
propagates the store to all the other CPUs as well as to RAM.  (As a
special case, we say that the store propagates to its own CPU at the
time it is executed.)  The memory subsystem also determines where the
store falls in the location's coherence order.  In particular, it must
arrange for the store to be co-later than (i.e., to overwrite) any
other store to the same location which has already propagated to CPU C.

When a CPU executes a load instruction R, it first checks to see
whether there are any as-yet unexecuted store instructions, for the
same location, that come before R in program order.  If there are, it
uses the value of the po-latest such store as the value obtained by R,
and we say that the store's value is forwarded to R.  Otherwise, the
CPU asks the memory subsystem for the value to load and we say that R
is satisfied from memory.  The memory subsystem hands back the value
of the co-latest store to the location in question which has already
propagated to that CPU.

(In fact, the picture needs to be a little more complicated than this.
CPUs have local caches, and propagating a store to a CPU really means
propagating it to the CPU's local cache.  A local cache can take some
time to process the stores that it receives, and a store can't be used
to satisfy one of the CPU's loads until it has been processed.  On
most architectures, the local caches process stores in
First-In-First-Out order, and consequently the processing delay
doesn't matter for the memory model.  But on Alpha, the local caches
have a partitioned design that results in non-FIFO behavior.  We will
discuss this in more detail later.)

Note that load instructions may be executed speculatively and may be
restarted under certain circumstances.
The memory model ignores these
premature executions; we simply say that the load executes at the
final time it is forwarded or satisfied.

Executing a fence (or memory barrier) instruction doesn't require a
CPU to do anything special other than informing the memory subsystem
about the fence.  However, fences do constrain the way CPUs and the
memory subsystem handle other instructions, in two respects.

First, a fence forces the CPU to execute various instructions in
program order.  Exactly which instructions are ordered depends on the
type of fence:

        Strong fences, including smp_mb() and synchronize_rcu(), force
        the CPU to execute all po-earlier instructions before any
        po-later instructions;

        smp_rmb() forces the CPU to execute all po-earlier loads
        before any po-later loads;

        smp_wmb() forces the CPU to execute all po-earlier stores
        before any po-later stores;

        Acquire fences, such as smp_load_acquire(), force the CPU to
        execute the load associated with the fence (e.g., the load
        part of an smp_load_acquire()) before any po-later
        instructions;

        Release fences, such as smp_store_release(), force the CPU to
        execute all po-earlier instructions before the store
        associated with the fence (e.g., the store part of an
        smp_store_release()).

Second, some types of fence affect the way the memory subsystem
propagates stores.  When a fence instruction is executed on CPU C:

        For each other CPU C', smp_wmb() forces all po-earlier stores
        on C to propagate to C' before any po-later stores do.

        For each other CPU C', any store which propagates to C before
        a release fence is executed (including all po-earlier
        stores executed on C) is forced to propagate to C' before the
        store associated with the release fence does.

        Any store which propagates to C before a strong fence is
        executed (including all po-earlier stores on C) is forced to
        propagate to all other CPUs before any instructions po-after
        the strong fence are executed on C.

The propagation ordering enforced by release fences and strong fences
affects stores from other CPUs that propagate to CPU C before the
fence is executed, as well as stores that are executed on C before the
fence.  We describe this property by saying that release fences and
strong fences are A-cumulative.  By contrast, smp_wmb() fences are not
A-cumulative; they only affect the propagation of stores that are
executed on C before the fence (i.e., those which precede the fence in
program order).

rcu_read_lock(), rcu_read_unlock(), and synchronize_rcu() fences have
other properties which we discuss later.


PROPAGATION ORDER RELATION: cumul-fence
---------------------------------------

The fences which affect propagation order (i.e., strong, release, and
smp_wmb() fences) are collectively referred to as cumul-fences, even
though smp_wmb() isn't A-cumulative.  The cumul-fence relation is
defined to link memory access events E and F whenever:

        E and F are both stores on the same CPU and an smp_wmb() fence
        event occurs between them in program order; or

        F is a release fence and some X comes before F in program
        order, where either X = E or else E ->rf X; or

        A strong fence event occurs between some X and F in program
        order, where either X = E or else E ->rf X.

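(The second clause can be illustrated with a sketch not found in the
original text:

        int x = 0, y = 0;

        P0()
        {
                WRITE_ONCE(x, 1);
        }

        P1()
        {
                int r1;

                r1 = READ_ONCE(x);
                smp_store_release(&y, 1);
        }

If r1 = 1 then, taking E to be P0's store to x, X to be P1's load, and
F to be the store associated with the smp_store_release(), we have
E ->rf X and X comes before the release fence F in program order;
hence E ->cumul-fence F.  This shows A-cumulativity at work, since E
executes on a different CPU from the fence.)
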
The operational model requires that whenever W and W' are both stores
and W ->cumul-fence W', then W must propagate to any given CPU
before W' does.  However, for different CPUs C and C', it does not
require W to propagate to C before W' propagates to C'.


DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
-------------------------------------------------

The LKMM is derived from the restrictions imposed by the design
outlined above.  These restrictions involve the necessity of
maintaining cache coherence and the fact that a CPU can't operate on a
value before it knows what that value is, among other things.

The formal version of the LKMM is defined by six requirements, or
axioms:

        Sequential consistency per variable: This requires that the
        system obey the four coherency rules.

        Atomicity: This requires that atomic read-modify-write
        operations really are atomic, that is, no other stores can
        sneak into the middle of such an update.

        Happens-before: This requires that certain instructions are
        executed in a specific order.

        Propagation: This requires that certain stores propagate to
        CPUs and to RAM in a specific order.

        Rcu: This requires that RCU read-side critical sections and
        grace periods obey the rules of RCU, in particular, the
        Grace-Period Guarantee.

        Plain-coherence: This requires that plain memory accesses
        (those not using READ_ONCE(), WRITE_ONCE(), etc.) must obey
        the operational model's rules regarding cache coherence.

The first and second are quite common; they can be found in many
memory models (such as those for C11/C++11).  The "happens-before" and
"propagation" axioms have analogs in other memory models as well.  The
"rcu" and "plain-coherence" axioms are specific to the LKMM.

Each of these axioms is discussed below.


SEQUENTIAL CONSISTENCY PER VARIABLE
-----------------------------------

According to the principle of cache coherence, the stores to any fixed
shared location in memory form a global ordering.  We can imagine
inserting the loads from that location into this ordering, by placing
each load between the store that it reads from and the following
store.  This leaves the relative positions of loads that read from the
same store unspecified; let's say they are inserted in program order,
first for CPU 0, then CPU 1, etc.

You can check that the four coherency rules imply that the rf, co, fr,
and po-loc relations agree with this global ordering; in other words,
whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
X event comes before the Y event in the global ordering.  The LKMM's
"coherence" axiom expresses this by requiring the union of these
relations not to have any cycles.  This means it must not be possible
to find events

        X0 -> X1 -> X2 -> ... -> Xn -> X0,

where each of the links is either rf, co, fr, or po-loc.  This has to
hold if the accesses to the fixed memory location can be ordered as
cache coherence demands.

Although it is not obvious, it can be shown that the converse is also
true: This LKMM axiom implies that the four coherency rules are
obeyed.


ATOMIC UPDATES: rmw
-------------------

What does it mean to say that a read-modify-write (rmw) update, such
as atomic_inc(&x), is atomic?
It means that the memory location (x in
this case) does not get altered between the read and the write events
making up the atomic operation.  In particular, if two CPUs perform
atomic_inc(&x) concurrently, it must be guaranteed that the final
value of x will be the initial value plus two.  We should never have
the following sequence of events:

        CPU 0 loads x obtaining 13;
        CPU 1 loads x obtaining 13;
        CPU 0 stores 14 to x;
        CPU 1 stores 14 to x;

where the final value of x is wrong (14 rather than 15).

In this example, CPU 0's increment effectively gets lost because it
occurs in between CPU 1's load and store.  To put it another way, the
problem is that the position of CPU 0's store in x's coherence order
is between the store that CPU 1 reads from and the store that CPU 1
performs.

The same analysis applies to all atomic update operations.  Therefore,
to enforce atomicity the LKMM requires that atomic updates follow this
rule: Whenever R and W are the read and write events composing an
atomic read-modify-write and W' is the write event which R reads from,
there must not be any stores coming between W' and W in the coherence
order.  Equivalently,

        (R ->rmw W) implies (there is no X with R ->fr X and X ->co W),

where the rmw relation links the read and write events making up each
atomic update.  This is what the LKMM's "atomic" axiom says.

Atomic rmw updates play one more role in the LKMM: They can form "rmw
sequences".  An rmw sequence is simply a bunch of atomic updates where
each update reads from the previous one.  Written using events, it
looks like this:

        Z0 ->rf Y1 ->rmw Z1 ->rf ... ->rf Yn ->rmw Zn,

where Z0 is some store event and n can be any number (even 0, in the
degenerate case).  We write this relation as: Z0 ->rmw-sequence Zn.
Note that this implies Z0 and Zn are stores to the same variable.

Rmw sequences have a special property in the LKMM: They can extend the
cumul-fence relation.  That is, if we have:

        U ->cumul-fence X ->rmw-sequence Y

then also U ->cumul-fence Y.  Thinking about this in terms of the
operational model, U ->cumul-fence X says that the store U propagates
to each CPU before the store X does.  Then the fact that X and Y are
linked by an rmw sequence means that U also propagates to each CPU
before Y does.  In an analogous way, rmw sequences can also extend
the w-post-bounded relation defined below in the PLAIN ACCESSES AND
DATA RACES section.

(The notion of rmw sequences in the LKMM is similar to, but not quite
the same as, that of release sequences in the C11 memory model.  They
were added to the LKMM to fix an obscure bug; without them, atomic
updates with full-barrier semantics did not always guarantee ordering
at least as strong as atomic updates with release-barrier semantics.)


THE PRESERVED PROGRAM ORDER RELATION: ppo
-----------------------------------------

There are many situations where a CPU is obliged to execute two
instructions in program order.  We amalgamate them into the ppo (for
"preserved program order") relation, which links the po-earlier
instruction to the po-later instruction and is thus a sub-relation of
po.

The operational model already includes a description of one such
situation: Fences are a source of ppo links.
Suppose X and Y are
memory accesses with X ->po Y; then the CPU must execute X before Y if
any of the following hold:

        A strong (smp_mb() or synchronize_rcu()) fence occurs between
        X and Y;

        X and Y are both stores and an smp_wmb() fence occurs between
        them;

        X and Y are both loads and an smp_rmb() fence occurs between
        them;

        X is also an acquire fence, such as smp_load_acquire();

        Y is also a release fence, such as smp_store_release().

Another possibility, not mentioned earlier but discussed in the next
section, is:

        X and Y are both loads, X ->addr Y (i.e., there is an address
        dependency from X to Y), and X is a READ_ONCE() or an atomic
        access.

Dependencies can also cause instructions to be executed in program
order.  This is uncontroversial when the second instruction is a
store; either a data, address, or control dependency from a load R to
a store W will force the CPU to execute R before W.  This is very
simply because the CPU cannot tell the memory subsystem about W's
store before it knows what value should be stored (in the case of a
data dependency), what location it should be stored into (in the case
of an address dependency), or whether the store should actually take
place (in the case of a control dependency).

Dependencies to load instructions are more problematic.  To begin with,
there is no such thing as a data dependency to a load.  Next, a CPU
has no reason to respect a control dependency to a load, because it
can always satisfy the second load speculatively before the first, and
then ignore the result if it turns out that the second load shouldn't
be executed after all.  And lastly, the real difficulties begin when
we consider address dependencies to loads.

To be fair about it, all Linux-supported architectures do execute
loads in program order if there is an address dependency between them.
After all, a CPU cannot ask the memory subsystem to load a value from
a particular location before it knows what that location is.  However,
the split-cache design used by Alpha can cause it to behave in a way
that looks as if the loads were executed out of order (see the next
section for more details).  The kernel includes a workaround for this
problem when the loads come from READ_ONCE(), and therefore the LKMM
includes address dependencies to loads in the ppo relation.

On the other hand, dependencies can indirectly affect the ordering of
two loads.  This happens when there is a dependency from a load to a
store and a second, po-later load reads from that store:

        R ->dep W ->rfi R',

where the dep link can be either an address or a data dependency.  In
this situation we know it is possible for the CPU to execute R' before
W, because it can forward the value that W will store to R'.  But it
cannot execute R' before R, because it cannot forward the value before
it knows what that value is, or that W and R' do access the same
location.  However, if there is merely a control dependency between R
and W then the CPU can speculatively forward W to R' before executing
R; if the speculation turns out to be wrong then the CPU merely has to
restart or abandon R'.

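(In concrete terms, the R ->dep W ->rfi R' pattern might look like
this hypothetical sketch:

        int x, y;

        P0()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                WRITE_ONCE(y, r1);
                r2 = READ_ONCE(y);
        }

Here the load of x is R, the store to y is W -- linked to R by a data
dependency -- and the load of y is R', which can be satisfied by
forwarding from W.  The CPU may execute R' before W, but not before R.)
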
(In theory, a CPU might forward a store to a load when it runs across
an address dependency like this:

        r1 = READ_ONCE(ptr);
        WRITE_ONCE(*r1, 17);
        r2 = READ_ONCE(*r1);

because it could tell that the store and the second load access the
same location even before it knows what the location's address is.
However, none of the architectures supported by the Linux kernel do
this.)

Two memory accesses of the same location must always be executed in
program order if the second access is a store.  Thus, if we have

        R ->po-loc W

(the po-loc link says that R comes before W in program order and they
access the same location), the CPU is obliged to execute W after R.
If it executed W first then the memory subsystem would respond to R's
read request with the value stored by W (or an even later store), in
violation of the read-write coherence rule.  Similarly, if we had

        W ->po-loc W'

and the CPU executed W' before W, then the memory subsystem would put
W' before W in the coherence order.  It would effectively cause W to
overwrite W', in violation of the write-write coherence rule.
(Interestingly, an early ARMv8 memory model, now obsolete, proposed
allowing out-of-order writes like this to occur.  The model avoided
violating the write-write coherence rule by requiring the CPU not to
send the W write to the memory subsystem at all!)


AND THEN THERE WAS ALPHA
------------------------

As mentioned above, the Alpha architecture is unique in that it does
not appear to respect address dependencies to loads.  This means that
code such as the following:

        int x = 0;
        int y = -1;
        int *ptr = &y;

        P0()
        {
                WRITE_ONCE(x, 1);
                smp_wmb();
                WRITE_ONCE(ptr, &x);
        }

        P1()
        {
                int *r1;
                int r2;

                r1 = ptr;
                r2 = READ_ONCE(*r1);
        }

can malfunction on Alpha systems (notice that P1 uses an ordinary load
to read ptr instead of READ_ONCE()).  It is quite possible that r1 = &x
and r2 = 0 at the end, in spite of the address dependency.

At first glance this doesn't seem to make sense.  We know that the
smp_wmb() forces P0's store to x to propagate to P1 before the store
to ptr does.  And since P1 can't execute its second load
until it knows what location to load from, i.e., after executing its
first load, the value x = 1 must have propagated to P1 before the
second load executed.  So why doesn't r2 end up equal to 1?

The answer lies in the Alpha's split local caches.  Although the two
stores do reach P1's local cache in the proper order, it can happen
that the first store is processed by a busy part of the cache while
the second store is processed by an idle part.  As a result, the x = 1
value may not become available for P1's CPU to read until after the
ptr = &x value does, leading to the undesirable result above.  The
final effect is that even though the two loads really are executed in
program order, it appears that they aren't.

This could not have happened if the local cache had processed the
incoming stores in FIFO order.  By contrast, other architectures
maintain at least the appearance of FIFO order.

In practice, this difficulty is solved by inserting a special fence
between P1's two loads when the kernel is compiled for the Alpha
architecture.
In fact, as of version 4.15, the kernel automatically
adds this fence after every READ_ONCE() and atomic load on Alpha.  The
effect of the fence is to cause the CPU not to execute any po-later
instructions until after the local cache has finished processing all
the stores it has already received.  Thus, if the code was changed to:

        P1()
        {
                int *r1;
                int r2;

                r1 = READ_ONCE(ptr);
                r2 = READ_ONCE(*r1);
        }

then we would never get r1 = &x and r2 = 0.  By the time P1 executed
its second load, the x = 1 store would already be fully processed by
the local cache and available for satisfying the read request.  Thus
we have yet another reason why shared data should always be read with
READ_ONCE() or another synchronization primitive rather than accessed
directly.

The LKMM requires that smp_rmb(), acquire fences, and strong fences
share this property: They do not allow the CPU to execute any po-later
instructions (or po-later loads in the case of smp_rmb()) until all
outstanding stores have been processed by the local cache.  In the
case of a strong fence, the CPU first has to wait for all of its
po-earlier stores to propagate to every other CPU in the system; then
it has to wait for the local cache to process all the stores received
as of that time -- not just the stores received when the strong fence
began.

And of course, none of this matters for any architecture other than
Alpha.


THE HAPPENS-BEFORE RELATION: hb
-------------------------------

The happens-before relation (hb) links memory accesses that have to
execute in a certain order.  hb includes the ppo relation and two
others, one of which is rfe.

W ->rfe R implies that W and R are on different CPUs.  It also means
that W's store must have propagated to R's CPU before R executed;
otherwise R could not have read the value stored by W.  Therefore W
must have executed before R, and so we have W ->hb R.

The equivalent fact need not hold if W ->rfi R (i.e., W and R are on
the same CPU).  As we have already seen, the operational model allows
W's value to be forwarded to R in such cases, meaning that R may well
execute before W does.

It's important to understand that neither coe nor fre is included in
hb, despite their similarities to rfe.  For example, suppose we have
W ->coe W'.  This means that W and W' are stores to the same location,
they execute on different CPUs, and W comes before W' in the coherence
order (i.e., W' overwrites W).  Nevertheless, it is possible for W' to
execute before W, because the decision as to which store overwrites
the other is made later by the memory subsystem.  When the stores are
nearly simultaneous, either one can come out on top.  Similarly,
R ->fre W means that W overwrites the value which R reads, but it
doesn't mean that W has to execute after R.  All that's necessary is
for the memory subsystem not to propagate W to R's CPU until after R
has executed, which is possible if W executes shortly before R.

The third relation included in hb is like ppo, in that it only links
events that are on the same CPU.  However it is more difficult to
explain, because it arises only indirectly from the requirement of
cache coherence.
The relation is called prop, and it links two events
on CPU C in situations where a store from some other CPU comes after
the first event in the coherence order and propagates to C before the
second event executes.

This is best explained with some examples.  The simplest case looks
like this:

        int x;

        P0()
        {
                int r1;

                WRITE_ONCE(x, 1);
                r1 = READ_ONCE(x);
        }

        P1()
        {
                WRITE_ONCE(x, 8);
        }

If r1 = 8 at the end then P0's accesses must have executed in program
order.  We can deduce this from the operational model; if P0's load
had executed before its store then the value of the store would have
been forwarded to the load, so r1 would have ended up equal to 1, not
8.  In this case there is a prop link from P0's write event to its read
event, because P1's store came after P0's store in x's coherence
order, and P1's store propagated to P0 before P0's load executed.

An equally simple case involves two loads of the same location that
read from different stores:

        int x = 0;

        P0()
        {
                int r1, r2;

                r1 = READ_ONCE(x);
                r2 = READ_ONCE(x);
        }

        P1()
        {
                WRITE_ONCE(x, 9);
        }

If r1 = 0 and r2 = 9 at the end then P0's accesses must have executed
in program order.  If the second load had executed before the first
then the x = 9 store must have been propagated to P0 before the first
load executed, and so r1 would have been 9 rather than 0.  In this
case there is a prop link from P0's first read event to its second,
because P1's store overwrote the value read by P0's first load, and
P1's store propagated to P0 before P0's second load executed.

Less trivial examples of prop all involve fences.  Unlike the simple
examples above, they can require that some instructions are executed
out of program order.  This next one should look familiar:

        int buf = 0, flag = 0;

        P0()
        {
                WRITE_ONCE(buf, 1);
                smp_wmb();
                WRITE_ONCE(flag, 1);
        }

        P1()
        {
                int r1;
                int r2;

                r1 = READ_ONCE(flag);
                r2 = READ_ONCE(buf);
        }

This is the MP pattern again, with an smp_wmb() fence between the two
stores.  If r1 = 1 and r2 = 0 at the end then there is a prop link
from P1's second load to its first (backwards!).  The reason is
similar to the previous examples: The value P1 loads from buf gets
overwritten by P0's store to buf, the fence guarantees that the store
to buf will propagate to P1 before the store to flag does, and the
store to flag propagates to P1 before P1 reads flag.

The prop link says that in order to obtain the r1 = 1, r2 = 0 result,
P1 must execute its second load before the first.  Indeed, if the load
from flag were executed first, then the buf = 1 store would already
have propagated to P1 by the time P1's load from buf executed, so r2
would have been 1 at the end, not 0.  (The reasoning holds even for
Alpha, although the details are more complicated and we will not go
into them.)

But what if we put an smp_rmb() fence between P1's loads?  The fence
would force the two loads to be executed in program order, and it
would generate a cycle in the hb relation: The fence would create a ppo
link (hence an hb link) from the first load to the second, and the
prop relation would give an hb link from the second load to the first.

Since an instruction can't execute before itself, we are forced to
conclude that if an smp_rmb() fence is added, the r1 = 1, r2 = 0
outcome is impossible -- as it should be.

The formal definition of the prop relation involves a coe or fre
link, followed by an arbitrary number of cumul-fence links, ending
with an rfe link.  You can concoct more exotic examples, containing
more than one fence, although this quickly leads to diminishing
returns in terms of complexity.  For instance, here's an example
containing a coe link followed by two cumul-fences and an rfe link,
utilizing the fact that release fences are A-cumulative:

	int x, y, z;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(z);
	}

	P1()
	{
		WRITE_ONCE(x, 2);
		smp_wmb();
		WRITE_ONCE(y, 1);
	}

	P2()
	{
		int r2;

		r2 = READ_ONCE(y);
		smp_store_release(&z, 1);
	}

If x = 2, r0 = 1, and r2 = 1 after this code runs then there is a
prop link from P0's store to its load.  This is because P0's store
gets overwritten by P1's store since x = 2 at the end (a coe link),
the smp_wmb() ensures that P1's store to x propagates to P2 before
the store to y does (the first cumul-fence), the store to y
propagates to P2 before P2's load and store execute, P2's
smp_store_release() guarantees that the stores to x and y both
propagate to P0 before the store to z does (the second cumul-fence),
and P0's load executes after the store to z has propagated to P0 (an
rfe link).

In summary, the fact that the hb relation links memory access events
in the order they execute means that it must not have cycles.  This
requirement is the content of the LKMM's "happens-before" axiom.

The LKMM defines yet another relation connected to times of
instruction execution, but it is not included in hb.  It relies on
the particular properties of strong fences, which we cover in the
next section.


THE PROPAGATES-BEFORE RELATION: pb
----------------------------------

The propagates-before (pb) relation capitalizes on the special
features of strong fences.  It links two events E and F whenever some
store is coherence-later than E and propagates to every CPU and to
RAM before F executes.  The formal definition requires that E be
linked to F via a coe or fre link, an arbitrary number of
cumul-fences, an optional rfe link, a strong fence, and an arbitrary
number of hb links.  Let's see how this definition works out.

Consider first the case where E is a store (implying that the
sequence of links begins with coe).  Then there are events W, X, Y,
and Z such that:

	E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,

where the * suffix indicates an arbitrary number of links of the
specified type, and the ? suffix indicates the link is optional (Y
may be equal to X).  Because of the cumul-fence links, we know that W
will propagate to Y's CPU before X does, hence before Y executes and
hence before the strong fence executes.  Because this fence is
strong, we know that W will propagate to every CPU and to RAM before
Z executes.  And because of the hb links, we know that Z will execute
before F.  Thus W, which comes later than E in the coherence order,
will
propagate to every CPU and to RAM before F executes.

The case where E is a load is exactly the same, except that the first
link in the sequence is fre instead of coe.

The existence of a pb link from E to F implies that E must execute
before F.  To see why, suppose that F executed first.  Then W would
have propagated to E's CPU before E executed.  If E was a store, the
memory subsystem would then be forced to make E come after W in the
coherence order, contradicting the fact that E ->coe W.  If E was a
load, the memory subsystem would then be forced to satisfy E's read
request with the value stored by W or an even later store,
contradicting the fact that E ->fre W.

A good example illustrating how pb works is the SB pattern with
strong fences:

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		smp_mb();
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

If r0 = 0 at the end then there is a pb link from P0's load to P1's
load: an fre link from P0's load to P1's store (which overwrites the
value read by P0), and a strong fence between P1's store and its
load.  In this example, the sequences of cumul-fence and hb links are
empty.  Note that this pb link is not included in hb as an instance
of prop, because it does not start and end on the same CPU.

Similarly, if r1 = 0 at the end then there is a pb link from P1's
load to P0's.  This means that if both r0 and r1 were 0 there would
be a cycle in pb, which is not possible since an instruction cannot
execute before itself.  Thus, adding smp_mb() fences to the SB
pattern prevents the r0 = 0, r1 = 0 outcome.

In summary, the fact that the pb relation links events in the order
they execute means that it cannot have cycles.  This requirement is
the content of the LKMM's "propagation" axiom.


RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
------------------------------------------------------------------------

RCU (Read-Copy-Update) is a powerful synchronization mechanism.  It
rests on two concepts: grace periods and read-side critical sections.

A grace period is the span of time occupied by a call to
synchronize_rcu().  A read-side critical section (or just critical
section, for short) is a region of code delimited by rcu_read_lock()
at the start and rcu_read_unlock() at the end.  Critical sections can
be nested, although we won't make use of this fact.

As far as memory models are concerned, RCU's main feature is its
Grace-Period Guarantee, which states that a critical section can
never span a full grace period.  In more detail, the Guarantee says:

	For any critical section C and any grace period G, at least
	one of the following statements must hold:

(1)	C ends before G does, and in addition, every store that
	propagates to C's CPU before the end of C must propagate to
	every CPU before G ends.

(2)	G starts before C does, and in addition, every store that
	propagates to G's CPU before the start of G must propagate
	to every CPU before C starts.

In particular, it is not possible for a critical section to both
start
before and end after a grace period.

Here is a simple example of RCU in action:

	int x, y;

	P0()
	{
		rcu_read_lock();
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		rcu_read_unlock();
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		synchronize_rcu();
		r2 = READ_ONCE(y);
	}

The Grace Period Guarantee tells us that when this code runs, it will
never end with r1 = 1 and r2 = 0.  The reasoning is as follows.
r1 = 1 means that P0's store to x propagated to P1 before P1 called
synchronize_rcu(), so P0's critical section must have started before
P1's grace period, contrary to part (2) of the Guarantee.  On the
other hand, r2 = 0 means that P0's store to y, which occurs before
the end of the critical section, did not propagate to P1 before the
end of the grace period, contrary to part (1).  Together the results
violate the Guarantee.

In the kernel's implementations of RCU, the requirements for stores
to propagate to every CPU are fulfilled by placing strong fences at
suitable places in the RCU-related code.  Thus, if a critical section
starts before a grace period does then the critical section's CPU
will execute an smp_mb() fence after the end of the critical section
and some time before the grace period's synchronize_rcu() call
returns.  And if a critical section ends after a grace period does
then the synchronize_rcu() routine will execute an smp_mb() fence at
its start and some time before the critical section's opening
rcu_read_lock() executes.

What exactly do we mean by saying that a critical section "starts
before" or "ends after" a grace period?  Some aspects of the meaning
are pretty obvious, as in the example above, but the details aren't
entirely clear.  The LKMM formalizes this notion by means of the
rcu-link relation.  rcu-link encompasses a very general notion of
"before": If E and F are RCU fence events (i.e., rcu_read_lock(),
rcu_read_unlock(), or synchronize_rcu()) then among other things,
E ->rcu-link F includes cases where E is po-before some memory-access
event X, F is po-after some memory-access event Y, and we have any of
X ->rfe Y, X ->co Y, or X ->fr Y.

The formal definition of the rcu-link relation is more than a little
obscure, and we won't give it here.  It is closely related to the pb
relation, and the details don't matter unless you want to comb
through a somewhat lengthy formal proof.  Pretty much all you need to
know about rcu-link is the information in the preceding paragraph.

The LKMM also defines the rcu-gp and rcu-rscsi relations.  They bring
grace periods and read-side critical sections into the picture, in
the following way:

	E ->rcu-gp F means that E and F are in fact the same event,
	and that event is a synchronize_rcu() fence (i.e., a grace
	period).

	E ->rcu-rscsi F means that E and F are the rcu_read_unlock()
	and rcu_read_lock() fence events delimiting some read-side
	critical section.  (The 'i' at the end of the name emphasizes
	that this relation is "inverted": It links the end of the
	critical section to the start.)

If we think of the rcu-link relation as standing for an extended
"before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a
grace period which ends before Z begins.  (In fact it covers more
than this, because it also includes cases where some store propagates
to Z's CPU before Z begins but doesn't propagate to some other CPU
until after X ends.)  Similarly, X ->rcu-rscsi Y ->rcu-link Z says
that X is the end of a critical section which starts before Z begins.

The LKMM goes on to define the rcu-order relation as a sequence of
rcu-gp and rcu-rscsi links separated by rcu-link links, in which the
number of rcu-gp links is >= the number of rcu-rscsi links.  For
example:

	X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

would imply that X ->rcu-order V, because this sequence contains two
rcu-gp links and one rcu-rscsi link.  (It also implies that
X ->rcu-order T and Z ->rcu-order V.)  On the other hand:

	X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

does not imply X ->rcu-order V, because the sequence contains only
one rcu-gp link but two rcu-rscsi links.

The rcu-order relation is important because the Grace Period
Guarantee means that rcu-order links act kind of like strong fences.
In particular, E ->rcu-order F implies not only that E begins before
F ends, but also that any write po-before E will propagate to every
CPU before any instruction po-after F can execute.  (However, it does
not imply that E must execute before F; in fact, each
synchronize_rcu() fence event is linked to itself by rcu-order as a
degenerate case.)

To prove this in full generality requires some intellectual effort.
We'll consider just a very simple case:

	G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.

This formula means that G and W are the same event (a grace period),
and there are events X, Y and a read-side critical section C such
that:

1.	G = W is po-before or equal to X;

2.	X comes "before" Y in some sense (including rfe, co and fr);

3.	Y is po-before Z;

4.	Z is the rcu_read_unlock() event marking the end of C;

5.	F is the rcu_read_lock() event marking the start of C.

From 1 - 4 we deduce that the grace period G ends before the critical
section C.  Then part (2) of the Grace Period Guarantee says not only
that G starts before C does, but also that any write which executes
on G's CPU before G starts must propagate to every CPU before C
starts.  In particular, the write propagates to every CPU before F
finishes executing and hence before any instruction po-after F can
execute.  This sort of reasoning can be extended to handle all the
situations covered by rcu-order.

The rcu-fence relation is a simple extension of rcu-order.  While
rcu-order only links certain fence events (calls to
synchronize_rcu(), rcu_read_lock(), or rcu_read_unlock()), rcu-fence
links any events that are separated by an rcu-order link.  This is
analogous to the way the strong-fence relation links events that are
separated by an smp_mb() fence event (as mentioned above, rcu-order
links act kind of like strong fences).  Written symbolically,
X ->rcu-fence Y means there are fence events E and F such that:

	X ->po E ->rcu-order F ->po Y.

From the discussion above, we see this implies not only that X
executes before Y, but also (if X is a store) that X propagates to
every CPU before Y executes.  Thus rcu-fence is sort of a
"super-strong" fence: Unlike the original strong fences (smp_mb() and
synchronize_rcu()), rcu-fence is able to link events on different
CPUs.  (Perhaps this fact should lead us to say that rcu-fence isn't
really a fence at all!)

Finally, the LKMM defines the RCU-before (rb) relation in terms of
rcu-fence.  This is done in essentially the same way as the pb
relation was defined in terms of strong-fence.  We will omit the
details; the end result is that E ->rb F implies E must execute
before F, just as E ->pb F does (and for much the same reasons).

Putting this all together, the LKMM expresses the Grace Period
Guarantee by requiring that the rb relation does not contain a cycle.
Equivalently, this "rcu" axiom requires that there are no events E
and F with E ->rcu-link F ->rcu-order E.  Or to put it a third way,
the axiom requires that there are no cycles consisting of rcu-gp and
rcu-rscsi alternating with rcu-link, where the number of rcu-gp links
is >= the number of rcu-rscsi links.

Justifying the axiom isn't easy, but it is in fact a valid
formalization of the Grace Period Guarantee.  We won't attempt to go
through the detailed argument, but the following analysis gives a
taste of what is involved.  Suppose both parts of the Guarantee are
violated: A critical section starts before a grace period, and some
store propagates to the critical section's CPU before the end of the
critical section but doesn't propagate to some other CPU until after
the end of the grace period.

Putting symbols to these ideas, let L and U be the rcu_read_lock()
and rcu_read_unlock() fence events delimiting the critical section in
question, and let S be the synchronize_rcu() fence event for the
grace period.  Saying that the critical section starts before S means
there are events Q and R where Q is po-after L (which marks the start
of the critical section), Q is "before" R in the sense used by the
rcu-link relation, and R is po-before the grace period S.  Thus we
have:

	L ->rcu-link S.

Let W be the store mentioned above, let Y come before the end of the
critical section and witness that W propagates to the critical
section's CPU by reading from W, and let Z on some arbitrary CPU be a
witness that W has not propagated to that CPU, where Z happens after
some event X which is po-after S.  Symbolically, this amounts to:

	S ->po X ->hb* Z ->fr W ->rf Y ->po U.

The fr link from Z to W indicates that W has not propagated to Z's
CPU at the time that Z executes.  From this, it can be shown (see the
discussion of the rcu-link relation earlier) that S and U are related
by rcu-link:

	S ->rcu-link U.

Since S is a grace period we have S ->rcu-gp S, and since L and U are
the start and end of the critical section C we have U ->rcu-rscsi L.
From this we obtain:

	S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,

a forbidden cycle.  Thus the "rcu" axiom rules out this violation of
the Grace Period Guarantee.

For something a little more down-to-earth, let's see how the axiom
works out in practice.  Consider the RCU code example from above,
this time with statement labels added:

	int x, y;

	P0()
	{
		L: rcu_read_lock();
		X: WRITE_ONCE(x, 1);
		Y: WRITE_ONCE(y, 1);
		U: rcu_read_unlock();
	}

	P1()
	{
		int r1, r2;

		Z: r1 = READ_ONCE(x);
		S: synchronize_rcu();
		W: r2 = READ_ONCE(y);
	}

If r2 = 0 at the end then P0's store at Y overwrites the value that
P1's load at W reads from, so we have W ->fre Y.  Since S ->po W and
also Y ->po U, we get S ->rcu-link U.  In addition, S ->rcu-gp S
because S is a grace period.

If r1 = 1 at the end then P1's load at Z reads from P0's store at X,
so we have X ->rfe Z.  Together with L ->po X and Z ->po S, this
yields L ->rcu-link S.  And since L and U are the start and end of a
critical section, we have U ->rcu-rscsi L.

Then U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a
forbidden cycle, violating the "rcu" axiom.  Hence the outcome is not
allowed by the LKMM, as we would expect.

For contrast, let's see what can happen in a more complicated
example:

	int x, y, z;

	P0()
	{
		int r0;

		L0: rcu_read_lock();
		    r0 = READ_ONCE(x);
		    WRITE_ONCE(y, 1);
		U0: rcu_read_unlock();
	}

	P1()
	{
		int r1;

		    r1 = READ_ONCE(y);
		S1: synchronize_rcu();
		    WRITE_ONCE(z, 1);
	}

	P2()
	{
		int r2;

		L2: rcu_read_lock();
		    r2 = READ_ONCE(z);
		    WRITE_ONCE(x, 1);
		U2: rcu_read_unlock();
	}

If r0 = r1 = r2 = 1 at the end, then similar reasoning to before
shows that U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link
U2 ->rcu-rscsi L2 ->rcu-link U0.  However this cycle is not
forbidden, because the sequence of relations contains fewer instances
of rcu-gp (one) than of rcu-rscsi (two).  Consequently the outcome is
allowed by the LKMM.  The following instruction timing diagram shows
how it might actually occur:

P0                      P1                      P2
--------------------    --------------------    --------------------
rcu_read_lock()
WRITE_ONCE(y, 1)
                        r1 = READ_ONCE(y)
                        synchronize_rcu() starts
                        .                       rcu_read_lock()
                        .                       WRITE_ONCE(x, 1)
r0 = READ_ONCE(x)       .
rcu_read_unlock()       .
                        synchronize_rcu() ends
                        WRITE_ONCE(z, 1)
                                                r2 = READ_ONCE(z)
                                                rcu_read_unlock()

This requires P0 and P2 to execute their loads and stores out of
program order, but of course they are allowed to do so.  And as you
can see, the Grace Period Guarantee is not violated: The critical
section in P0 both starts before P1's grace period does and ends
before it does, and the critical section in P2 both starts after P1's
grace period does and ends after it does.

Addendum: The LKMM now supports SRCU (Sleepable Read-Copy-Update) in
addition to normal RCU.  The ideas involved are much the same as
above, with new relations srcu-gp and srcu-rscsi added to represent
SRCU grace periods and read-side critical sections.  There is a
restriction on the srcu-gp and srcu-rscsi links that can appear in an
rcu-order sequence (the srcu-rscsi links must be paired with srcu-gp
links having the same SRCU domain with proper nesting); the details
are relatively unimportant.
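
For readers who haven't encountered SRCU, the general shape of its
API looks like this (a sketch only; the srcu_struct variable ss is
ours and its initialization is omitted):

	struct srcu_struct ss;

	P0()
	{
		int idx;

		idx = srcu_read_lock(&ss);
		/* SRCU read-side critical section for domain ss */
		srcu_read_unlock(&ss, idx);
	}

	P1()
	{
		synchronize_srcu(&ss);	/* SRCU grace period for domain ss */
	}

The srcu_struct argument is what the restriction above refers to: an
srcu-rscsi link can pair with an srcu-gp link only when the two use
the same domain.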


LOCKING
-------

The LKMM includes locking.  In fact, there is special code for
locking in the formal model, added in order to make tools run faster.
However, this special code is intended to be more or less equivalent
to concepts we have already covered.  A spinlock_t variable is
treated the same as an int, and spin_lock(&s) is treated almost the
same as:

	while (cmpxchg_acquire(&s, 0, 1) != 0)
		cpu_relax();

This waits until s is equal to 0 and then atomically sets it to 1,
and the read part of the cmpxchg operation acts as an acquire fence.
An alternate way to express the same thing would be:

	r = xchg_acquire(&s, 1);

along with a requirement that at the end, r = 0.  Similarly,
spin_trylock(&s) is treated almost the same as:

	return !cmpxchg_acquire(&s, 0, 1);

which atomically sets s to 1 if it is currently equal to 0 and
returns true if it succeeds (the read part of the cmpxchg operation
acts as an acquire fence only if the operation is successful).
spin_unlock(&s) is treated almost the same as:

	smp_store_release(&s, 0);

The "almost" qualifiers above need some explanation.  In the LKMM,
the store-release in a spin_unlock() and the load-acquire which forms
the first half of the atomic rmw update in a spin_lock() or a
successful spin_trylock() -- we can call these things lock-releases
and lock-acquires -- have two properties beyond those of ordinary
releases and acquires.

First, when a lock-acquire reads from or is po-after a lock-release,
the LKMM requires that every instruction po-before the lock-release
must execute before any instruction po-after the lock-acquire.  This
would naturally hold if the release and acquire operations were on
different CPUs and accessed the same lock variable, but the LKMM says
it also holds when they are on the same CPU, even if they access
different lock variables.  For example:

	int x, y;
	spinlock_t s, t;

	P0()
	{
		int r1, r2;

		spin_lock(&s);
		r1 = READ_ONCE(x);
		spin_unlock(&s);
		spin_lock(&t);
		r2 = READ_ONCE(y);
		spin_unlock(&t);
	}

	P1()
	{
		WRITE_ONCE(y, 1);
		smp_wmb();
		WRITE_ONCE(x, 1);
	}

Here the second spin_lock() is po-after the first spin_unlock(), and
therefore the load of x must execute before the load of y, even
though the two locking operations use different locks.  Thus we
cannot have r1 = 1 and r2 = 0 at the end (this is an instance of the
MP pattern).

This requirement does not apply to ordinary release and acquire
fences, only to lock-related operations.  For instance, suppose P0()
in the example had been written as:

	P0()
	{
		int r1, r2, r3;

		r1 = READ_ONCE(x);
		smp_store_release(&s, 1);
		r3 = smp_load_acquire(&s);
		r2 = READ_ONCE(y);
	}

Then the CPU would be allowed to forward the s = 1 value from the
smp_store_release() to the smp_load_acquire(), executing the
instructions in the following order:

	r3 = smp_load_acquire(&s);		// Obtains r3 = 1
	r2 = READ_ONCE(y);
	r1 = READ_ONCE(x);
	smp_store_release(&s, 1);		// Value is forwarded

and thus it could load y before x, obtaining r2 = 0 and r1 = 1.

Second, when a lock-acquire reads from or is po-after a lock-release,
and some other stores W and W' occur po-before the lock-release and
po-after the lock-acquire respectively, the LKMM requires that W must
propagate to each CPU before W' does.  For example, consider:

	int x, y;
	spinlock_t s;

	P0()
	{
		spin_lock(&s);
		WRITE_ONCE(x, 1);
		spin_unlock(&s);
	}

	P1()
	{
		int r1;

		spin_lock(&s);
		r1 = READ_ONCE(x);
		WRITE_ONCE(y, 1);
		spin_unlock(&s);
	}

	P2()
	{
		int r2, r3;

		r2 = READ_ONCE(y);
		smp_rmb();
		r3 = READ_ONCE(x);
	}

If r1 = 1 at the end then the spin_lock() in P1 must have read from
the spin_unlock() in P0.  Hence the store to x must propagate to P2
before the store to y does, so we cannot have r2 = 1 and r3 = 0.  But
if P1 had used a lock variable different from s, the writes could
have propagated in either order.  (On the other hand, if the code in
P0 and P1 had all executed on a single CPU, as in the example before
this one, then the writes would have propagated in order even if the
two critical sections used different lock variables.)
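
To spell out the first possibility, here is how P1() would look in
that hypothetical variant (t is a second spinlock_t, unrelated to s):

	P1()
	{
		int r1;

		spin_lock(&t);
		r1 = READ_ONCE(x);
		WRITE_ONCE(y, 1);
		spin_unlock(&t);
	}

Now P1's lock-acquire neither reads from nor is po-after P0's
lock-release, so nothing forces the store to x to propagate to P2
before the store to y does, and the r2 = 1, r3 = 0 outcome becomes
possible.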

These two special requirements for lock-release and lock-acquire do
not arise from the operational model.  Nevertheless, kernel
developers have come to expect and rely on them because they do hold
on all architectures supported by the Linux kernel, albeit for
various differing reasons.


PLAIN ACCESSES AND DATA RACES
-----------------------------

In the LKMM, memory accesses such as READ_ONCE(x), atomic_inc(&y),
smp_load_acquire(&z), and so on are collectively referred to as
"marked" accesses, because they are all annotated with special
operations of one kind or another.  Ordinary C-language memory
accesses such as x or y = 0 are simply called "plain" accesses.

Early versions of the LKMM had nothing to say about plain accesses.
The C standard allows compilers to assume that the variables affected
by plain accesses are not concurrently read or written by any other
threads or CPUs.  This leaves compilers free to implement all manner
of transformations or optimizations of code containing plain
accesses, making such code very difficult for a memory model to
handle.

Here is just one example of a possible pitfall:

	int a = 6;
	int *x = &a;

	P0()
	{
		int *r1;
		int r2 = 0;

		r1 = x;
		if (r1 != NULL)
			r2 = READ_ONCE(*r1);
	}

	P1()
	{
		WRITE_ONCE(x, NULL);
	}

On the face of it, one would expect that when this code runs, the
only possible final values for r2 are 6 and 0, depending on whether
or not P1's store to x propagates to P0 before P0's load from x
executes.  But since P0's load from x is a plain access, the compiler
may decide to carry out the load twice (for the comparison against
NULL, then again for the READ_ONCE()) and eliminate the temporary
variable r1.  The object code generated for P0 could therefore end up
looking rather like this:

	P0()
	{
		int r2 = 0;

		if (x != NULL)
			r2 = READ_ONCE(*x);
	}

And now it is obvious that this code runs the risk of dereferencing a
NULL pointer, because P1's store to x might propagate to P0 after the
test against NULL has been made but before the READ_ONCE() executes.
If the original code had said "r1 = READ_ONCE(x)" instead of
"r1 = x", the compiler would not have performed this optimization and
there would be no possibility of a NULL-pointer dereference.
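
To make the contrast explicit, here is the safe version of P0() that
this sentence describes; only the load of x changes:

	P0()
	{
		int *r1;
		int r2 = 0;

		r1 = READ_ONCE(x);
		if (r1 != NULL)
			r2 = READ_ONCE(*r1);
	}

With the load marked, the compiler cannot carry out the load twice,
so r2 can only end up equal to 6 or 0.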

Given the possibility of transformations like this one, the LKMM
doesn't try to predict all possible outcomes of code containing plain
accesses.  It is instead content to determine whether the code
violates the compiler's assumptions, which would render the ultimate
outcome undefined.

In technical terms, the compiler is allowed to assume that when the
program executes, there will not be any data races.  A "data race"
occurs when there are two memory accesses such that:

1.	they access the same location,

2.	at least one of them is a store,

3.	at least one of them is plain,

4.	they occur on different CPUs (or in different threads on the
	same CPU), and

5.	they execute concurrently.

In the literature, two accesses are said to "conflict" if they
satisfy 1 and 2 above.  We'll go a little farther and say that two
accesses are "race candidates" if they satisfy 1 - 4.  Thus, whether
or not two race candidates actually do race in a given execution
depends on whether they are concurrent.
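
As a minimal illustration (a sketch of our own, not one of the
model's examples):

	int x;

	P0()
	{
		x = 1;			/* plain store */
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);	/* marked load */
	}

These two accesses satisfy conditions 1 - 4, so they are race
candidates; and since nothing prevents them from executing
concurrently, the LKMM would say that this program contains a data
race.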

The LKMM tries to determine whether a program contains race
candidates which may execute concurrently; if it does then the LKMM
says there is a potential data race and makes no predictions about
the program's outcome.

Determining whether two accesses are race candidates is easy; you can
see that all the concepts involved in the definition above are
already part of the memory model.  The hard part is telling whether
they may execute concurrently.  The LKMM takes a conservative
attitude, assuming that accesses may be concurrent unless it can
prove they are not.

If two memory accesses aren't concurrent then one must execute before
the other.  Therefore the LKMM decides two accesses aren't concurrent
if they can be connected by a sequence of hb, pb, and rb links
(together referred to as xb, for "executes before").  However, there
are two complicating factors.

If X is a load and X executes before a store Y, then indeed there is
no danger of X and Y being concurrent.  After all, Y can't have any
effect on the value obtained by X until the memory subsystem has
propagated Y from its own CPU to X's CPU, which won't happen until
some time after Y executes and thus after X executes.  But if X is a
store, then even if X executes before Y it is still possible that X
will propagate to Y's CPU just as Y is executing.  In such a case X
could very well interfere somehow with Y, and we would have to
consider X and Y to be concurrent.

Therefore when X is a store, for X and Y to be non-concurrent the
LKMM requires not only that X must execute before Y but also that X
must propagate to Y's CPU before Y executes.  (Or vice versa, of
course, if Y executes before X -- then Y must propagate to X's CPU
before X executes if Y is a store.)  This is expressed by the
visibility relation (vis), where X ->vis Y is defined to hold if
there is an intermediate event Z such that:

	X is connected to Z by a possibly empty sequence of
	cumul-fence links followed by an optional rfe link (if none
	of these links are present, X and Z are the same event),

and either:

	Z is connected to Y by a strong-fence link followed by a
	possibly empty sequence of xb links,

or:

	Z is on the same CPU as Y and is connected to Y by a possibly
	empty sequence of xb links (again, if the sequence is empty
	it means Z and Y are the same event).

The motivations behind this definition are straightforward:

	cumul-fence memory barriers force stores that are po-before
	the barrier to propagate to other CPUs before stores that are
	po-after the barrier.

	An rfe link from an event W to an event R says that R reads
	from W, which certainly means that W must have propagated to
	R's CPU before R executed.

	strong-fence memory barriers force stores that are po-before
	the barrier, or that propagate to the barrier's CPU before
	the barrier executes, to propagate to all CPUs before any
	events po-after the barrier can execute.

To see how this works out in practice, consider our old friend, the
MP pattern (with fences and statement labels, but without the
conditional test):

	int buf = 0, flag = 0;

	P0()
	{
		X: WRITE_ONCE(buf, 1);
		   smp_wmb();
		W: WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		Z: r1 = READ_ONCE(flag);
		   smp_rmb();
		Y: r2 = READ_ONCE(buf);
	}

The smp_wmb() memory barrier gives a cumul-fence link from X to W,
and assuming r1 = 1 at the end, there is an rfe link from W to Z.
This means that the store to buf must propagate from P0 to P1 before
Z executes.  Next, Z and Y are on the same CPU and the smp_rmb()
fence provides an xb link from Z to Y (i.e., it forces Z to execute
before Y).  Therefore we have X ->vis Y: X must propagate to Y's CPU
before Y executes.

The second complicating factor mentioned above arises from the fact
that when we are considering data races, some of the memory accesses
are plain.  Now, although we have not said so explicitly, up to this
point most of the relations defined by the LKMM (ppo, hb, prop,
cumul-fence, pb, and so on -- including vis) apply only to marked
accesses.

There are good reasons for this restriction.  The compiler is not
allowed to apply fancy transformations to marked accesses, and
consequently each such access in the source code corresponds more or
less directly to a single machine instruction in the object code.
But plain accesses are a different story; the compiler may combine
them, split them up, duplicate them, eliminate them, invent new ones,
and who knows what else.  Seeing a plain access in the source code
tells you almost nothing about what machine instructions will end up
in the object code.

Fortunately, the compiler isn't completely free; it is subject to
some limitations.  For one, it is not allowed to introduce a data
race into the object code if the source code does not already contain
a data race (if it could, memory models would be useless and no
multithreaded code would be safe!).
For another, it cannot move a
plain access past a compiler barrier.

A compiler barrier is a kind of fence, but as the name implies, it
only affects the compiler; it does not necessarily have any effect on
how instructions are executed by the CPU.  In Linux kernel source
code, the barrier() function is a compiler barrier.  It doesn't give
rise directly to any machine instructions in the object code; rather,
it affects how the compiler generates the rest of the object code.
Given source code like this:

	... some memory accesses ...
	barrier();
	... some other memory accesses ...

the barrier() function ensures that the machine instructions
corresponding to the first group of accesses will all end po-before
any machine instructions corresponding to the second group of
accesses -- even if some of the accesses are plain.  (Of course, the
CPU may then execute some of those accesses out of program order, but
we already know how to deal with such issues.)  Without the barrier()
there would be no such guarantee; the two groups of accesses could be
intermingled or even reversed in the object code.

The LKMM doesn't say much about the barrier() function, but it does
require that all fences are also compiler barriers.  In addition, it
requires that the ordering properties of memory barriers such as
smp_rmb() or smp_store_release() apply to plain accesses as well as
to marked accesses.

This is the key to analyzing data races.  Consider the MP pattern
again, now using plain accesses for buf:

	int buf = 0, flag = 0;

	P0()
	{
		U: buf = 1;
		   smp_wmb();
		X: WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		Y: r1 = READ_ONCE(flag);
		   if (r1) {
			smp_rmb();
		V:	r2 = buf;
		   }
	}

This program does not contain a data race.  Although the U and V
accesses are race candidates, the LKMM can prove they are not
concurrent as follows:

	The smp_wmb() fence in P0 is both a compiler barrier and a
	cumul-fence.  It guarantees that no matter what hash of
	machine instructions the compiler generates for the plain
	access U, all those instructions will be po-before the fence.
	Consequently U's store to buf, no matter how it is carried
	out at the machine level, must propagate to P1 before X's
	store to flag does.

	X and Y are both marked accesses.  Hence an rfe link from X
	to Y is a valid indicator that X propagated to P1 before Y
	executed, i.e., X ->vis Y.  (And if there is no rfe link then
	r1 will be 0, so V will not be executed and ipso facto won't
	race with U.)

	The smp_rmb() fence in P1 is a compiler barrier as well as a
	fence.  It guarantees that all the machine-level instructions
	corresponding to the access V will be po-after the fence, and
	therefore any loads among those instructions will execute
	after the fence does and hence after Y does.

Thus U's store to buf is forced to propagate to P1 before V's load
executes (assuming V does execute), ruling out the possibility of a
data race between them.

This analysis illustrates how the LKMM deals with plain accesses in
general.  Suppose R is a plain load and we want to show that R
executes before some marked access E.  We can do this by finding a
marked access X such that R and X are ordered by a suitable fence and
X ->xb* E.  If E was also a plain access, we would also look for a
marked access Y such that X ->xb* Y, and Y and E are ordered by a
fence.  We describe this arrangement by saying that R is
"post-bounded" by X and E is "pre-bounded" by Y.

In fact, we go one step further: Since R is a read, we say that R is
"r-post-bounded" by X.  Similarly, E would be "r-pre-bounded" or
"w-pre-bounded" by Y, depending on whether E was a store or a load.
This distinction is needed because some fences affect only loads
(i.e., smp_rmb()) and some affect only stores (smp_wmb()); otherwise
the two types of bounds are the same.  And as a degenerate case, we
say that a marked access pre-bounds and post-bounds itself (e.g., if
R above were a marked load then X could simply be taken to be R
itself.)

The need to distinguish between r- and w-bounding raises yet another
issue.  When the source code contains a plain store, the compiler is
allowed to put plain loads of the same location into the object code.
For example, given the source code:

	x = 1;

the compiler is theoretically allowed to generate object code that
looks like:

	if (x != 1)
		x = 1;

thereby adding a load (and possibly replacing the store entirely).
For this reason, whenever the LKMM requires a plain store to be
w-pre-bounded or w-post-bounded by a marked access, it also requires
the store to be r-pre-bounded or r-post-bounded, so as to handle
cases where the compiler adds a load.

(This may be overly cautious.  We don't know of any examples where a
compiler has augmented a store with a load in this fashion, and the
Linux kernel developers would probably fight pretty hard to change a
compiler if it ever did this.  Still, better safe than sorry.)

Incidentally, the other transformation -- augmenting a plain load by
adding in a store to the same location -- is not allowed.  This is
because the compiler cannot know whether any other CPUs might perform
a concurrent load from that location.  Two concurrent loads don't
constitute a race (they can't interfere with each other), but a store
does race with a concurrent load.  Thus adding a store might create a
data race where one was not already present in the source code,
something the compiler is forbidden to do.  Augmenting a store with a
load, on the other hand, is acceptable because doing so won't create
a data race unless one already existed.

The LKMM includes a second way to pre-bound plain accesses, in
addition to fences: an address dependency from a marked load.  That
is, in the sequence:

	p = READ_ONCE(ptr);
	r = *p;

the LKMM says that the marked load of ptr pre-bounds the plain load
of *p; the marked load must execute before any of the machine
instructions corresponding to the plain load.  This is a reasonable
stipulation, since after all, the CPU can't perform the load of *p
until it knows what value p will hold.  Furthermore, without some
assumption like this one, some usages typical of RCU would count as
data races.  For example:

	int a = 1, b;
	int *ptr = &a;

	P0()
	{
		b = 2;
		rcu_assign_pointer(ptr, &b);
	}

	P1()
	{
		int *p;
		int r;

		rcu_read_lock();
		p = rcu_dereference(ptr);
		r = *p;
		rcu_read_unlock();
	}

(In this example the rcu_read_lock() and rcu_read_unlock() calls
don't really do anything, because there aren't any grace periods.
They are included merely for the sake of good form; typically P0
would call synchronize_rcu() somewhere after the
rcu_assign_pointer().)

rcu_assign_pointer() performs a store-release, so the plain store to
b is definitely w-post-bounded before the store to ptr, and the two
stores will propagate to P1 in that order.  However,
rcu_dereference() is only equivalent to READ_ONCE().  While it is a
marked access, it is not a fence or compiler barrier.  Hence the only
guarantee we have that the load of ptr in P1 is r-pre-bounded before
the load of *p (thus avoiding a race) is the assumption about address
dependencies.

This is a situation where the compiler can undermine the memory
model, and a certain amount of care is required when programming
constructs like this one.  In particular, comparisons between the
pointer and other known addresses can cause trouble.  If you have
something like:

	p = rcu_dereference(ptr);
	if (p == &x)
		r = *p;

then the compiler just might generate object code resembling:

	p = rcu_dereference(ptr);
	if (p == &x)
		r = x;

or even:

	rtemp = x;
	p = rcu_dereference(ptr);
	if (p == &x)
		r = rtemp;

which would invalidate the memory model's assumption, since the CPU
could now perform the load of x before the load of ptr (there might
be a control dependency but no address dependency at the machine
level).

Finally, it turns out there is a situation in which a plain write
does not need to be w-post-bounded: when it is separated from the
other race-candidate access by a fence.  At first glance this may
seem impossible.  After all, to be race candidates the two accesses
must be on different CPUs, and fences don't link events on different
CPUs.  Well, normal fences don't -- but rcu-fence can!  Here's an
example:

	int x, y;

	P0()
	{
		WRITE_ONCE(x, 1);
		synchronize_rcu();
		y = 3;
	}

	P1()
	{
		rcu_read_lock();
		if (READ_ONCE(x) == 0)
			y = 2;
		rcu_read_unlock();
	}

Do the plain stores to y race?  Clearly not if P1 reads a non-zero
value for x, so let's assume the READ_ONCE(x) does obtain 0.  This
means that the read-side critical section in P1 must finish executing
before the grace period in P0 does, because RCU's Grace-Period
Guarantee says that otherwise P0's store to x would have propagated
to P1 before the critical section started and so would have been
visible to the READ_ONCE().  (Another way of putting it is that the
fre link from the READ_ONCE() to the WRITE_ONCE() gives rise to an
rcu-link between those two events.)

This means there is an rcu-fence link from P1's "y = 2" store to
P0's "y = 3" store, and consequently the first must propagate from P1
to P0 before the second can execute.  Therefore the two stores cannot
be concurrent and there is no race, even though P1's plain store to y
isn't w-post-bounded by any marked accesses.

Putting all this material together yields the following picture.  For
race-candidate stores W and W', where W ->co W', the LKMM says the
stores don't race if W can be linked to W' by a

	w-post-bounded ; vis ; w-pre-bounded

sequence.  If W is plain then they also have to be linked by an

	r-post-bounded ; xb* ; w-pre-bounded

sequence, and if W' is plain then they also have to be linked by a

	w-post-bounded ; vis ; r-pre-bounded

sequence.  For race-candidate load R and store W, the LKMM says the
two accesses don't race if R can be linked to W by an

	r-post-bounded ; xb* ; w-pre-bounded

sequence or if W can be linked to R by a

	w-post-bounded ; vis ; r-pre-bounded

sequence.  For the cases involving a vis link, the LKMM also accepts
sequences in which W is linked to W' or R by a

	strong-fence ; xb* ; {w and/or r}-pre-bounded

sequence with no post-bounding, and in every case the LKMM also
allows the link simply to be a fence with no bounding at all.  If no
sequence of the appropriate sort exists, the LKMM says that the
accesses race.

There is one more part of the LKMM related to plain accesses
(although not to data races) we should discuss.  Recall that many
relations such as hb are limited to marked accesses only.  As a
result, the happens-before, propagates-before, and rcu axioms (which
state that various relations must not contain a cycle) don't apply to
plain accesses.  Nevertheless, we do want to rule out such cycles,
because they don't make sense even for plain accesses.

To this end, the LKMM imposes three extra restrictions, together
called the "plain-coherence" axiom because of their resemblance to
the rules used by the operational model to ensure cache coherence
(that is, the rules governing the memory subsystem's choice of a
store to satisfy a load request and its determination of where a
store will fall in the coherence order):

	If R and W are race candidates and it is possible to link R
	to W by one of the xb* sequences listed above, then W ->rfe R
	is not allowed (i.e., a load cannot read from a store that it
	executes before, even if one or both is plain).

	If W and R are race candidates and it is possible to link W
	to R by one of the vis sequences listed above, then R ->fre W
	is not allowed (i.e., if a store is visible to a load then
	the load must read from that store or one coherence-after
	it).

	If W and W' are race candidates and it is possible to link W
	to W' by one of the vis sequences listed above, then W' ->co
	W is not allowed (i.e., if one store is visible to a second
	then the second must come after the first in the coherence
	order).

This is the extent to which the LKMM deals with plain accesses.
Perhaps it could say more (for example, plain accesses might
contribute to the ppo relation), but at the moment it seems that this
minimal, conservative approach is good enough.


ODDS AND ENDS
-------------

This section covers material that didn't quite fit anywhere in the
earlier sections.

The descriptions in this document don't always match the formal
version of the LKMM exactly.  For example, the actual formal
definition of the prop relation makes the initial coe or fre part
optional, and it doesn't require the events linked by the relation to
be on the same CPU.  These differences are very unimportant; indeed,
instances where the coe/fre part of prop is missing are of no
interest because all the other parts (fences and rfe) are already
included in hb anyway, and where the formal model adds prop into hb,
it includes an explicit requirement that the events being linked are
on the same CPU.

Another minor difference has to do with events that are both memory
accesses and fences, such as those corresponding to
smp_load_acquire() calls.  In the formal model, these events aren't
actually both reads and fences; rather, they are read events with an
annotation marking them as acquires.  (Or write events annotated as
releases, in the case of smp_store_release().)  The final effect is
the same.

Although we didn't mention it above, the instruction execution
ordering provided by the smp_rmb() fence doesn't apply to read events
that are part of a non-value-returning atomic update.  For instance,
given:

	atomic_inc(&x);
	smp_rmb();
	r1 = READ_ONCE(y);

it is not guaranteed that the load from y will execute after the
update to x.  This is because the ARMv8 architecture allows
non-value-returning atomic operations effectively to be executed off
the CPU.  Basically, the CPU tells the memory subsystem to increment
x, and then the increment is carried out by the memory hardware with
no further involvement from the CPU.  Since the CPU doesn't ever read
the value of x, there is nothing for the smp_rmb() fence to act on.

The LKMM defines a few extra synchronization operations in terms of
things we have already covered.  In particular, rcu_dereference() is
treated as READ_ONCE() and rcu_assign_pointer() is treated as
smp_store_release() -- which is basically how the Linux kernel treats
them.

Although we said that plain accesses are not linked by the ppo
relation, they do contribute to it indirectly.  Firstly, when there
is an address dependency from a marked load R to a plain store W,
followed by smp_wmb() and then a marked store W', the LKMM creates a
ppo link from R to W'.  The reasoning behind this is perhaps a little
shaky, but essentially it says there is no way to generate object
code for this source code in which W' could execute before R.  Just
as with pre-bounding by address dependencies, it is possible for the
compiler to undermine this relation if sufficient care is not taken.

Secondly, plain accesses can carry dependencies: If a data dependency
links a marked load R to a store W, and the store is read by a load
R' from the same thread, then the data loaded by R' depends on the
data loaded originally by R.  Thus, if R' is linked to any access X
by a dependency, R is also linked to access X by the same dependency,
even if W or R' (or both!) are plain.
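
Here is a sketch of the idea (the variables and accesses are our own,
not taken from the model):

	int x, y, a;

	P0()
	{
		int r1, r2;

		r1 = READ_ONCE(x);	/* marked load R */
		a = r1;			/* plain store W, data-dependent on R */
		r2 = a;			/* plain load R' reads from W */
		WRITE_ONCE(y, r2);	/* data-dependent on R', hence on R */
	}

The LKMM treats the final store as carrying a data dependency from R,
even though the intervening accesses W and R' are plain.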

There are a few oddball fences which need special treatment:
smp_mb__before_atomic(), smp_mb__after_atomic(), and
smp_mb__after_spinlock().  The LKMM uses fence events with special
annotations for them; they act as strong fences just like smp_mb()
except for the sets of events that they order.  Instead of ordering
all po-earlier events against all po-later events, as smp_mb() does,
they behave as follows:

	smp_mb__before_atomic() orders all po-earlier events against
	po-later atomic updates and the events following them;

	smp_mb__after_atomic() orders po-earlier atomic updates and
	the events preceding them against all po-later events;

	smp_mb__after_spinlock() orders po-earlier lock acquisition
	events and the events preceding them against all po-later
	events.
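
For example, here is a sketch of how smp_mb__after_atomic() might be
used (the variables are ours):

	WRITE_ONCE(x, 1);	/* precedes the atomic update */
	atomic_inc(&v);		/* non-value-returning atomic update */
	smp_mb__after_atomic();
	r1 = READ_ONCE(y);	/* po-later event */

Here both the atomic_inc() and the store to x are ordered against the
load from y, just as they would be by a full smp_mb().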

Interestingly, RCU and locking each introduce the possibility of
deadlock.  When faced with code sequences such as:

	spin_lock(&s);
	spin_lock(&s);
	spin_unlock(&s);
	spin_unlock(&s);

or:

	rcu_read_lock();
	synchronize_rcu();
	rcu_read_unlock();

what does the LKMM have to say?  Answer: It says there are no allowed
executions at all, which makes sense.  But this can also lead to
misleading results, because if a piece of code has multiple possible
executions, some of which deadlock, the model will report only on the
non-deadlocking executions.  For example:

	int x, y;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(y);
	}

	P1()
	{
		rcu_read_lock();
		if (READ_ONCE(x) > 0) {
			WRITE_ONCE(y, 36);
			synchronize_rcu();
		}
		rcu_read_unlock();
	}

Is it possible to end up with r0 = 36 at the end?  The LKMM will tell
you it is not, but the model won't mention that this is because P1
will self-deadlock in the executions where it stores 36 in y.