Explanation of the Linux-Kernel Memory Consistency Model
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

:Author: Alan Stern <stern@rowland.harvard.edu>
:Created: October 2017

.. Contents

  1. INTRODUCTION
  2. BACKGROUND
  3. A SIMPLE EXAMPLE
  4. A SELECTION OF MEMORY MODELS
  5. ORDERING AND CYCLES
  6. EVENTS
  7. THE PROGRAM ORDER RELATION: po AND po-loc
  8. A WARNING
  9. DEPENDENCY RELATIONS: data, addr, and ctrl
  10. THE READS-FROM RELATION: rf, rfi, and rfe
  11. CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
  12. THE FROM-READS RELATION: fr, fri, and fre
  13. AN OPERATIONAL MODEL
  14. PROPAGATION ORDER RELATION: cumul-fence
  15. DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
  16. SEQUENTIAL CONSISTENCY PER VARIABLE
  17. ATOMIC UPDATES: rmw
  18. THE PRESERVED PROGRAM ORDER RELATION: ppo
  19. AND THEN THERE WAS ALPHA
  20. THE HAPPENS-BEFORE RELATION: hb
  21. THE PROPAGATES-BEFORE RELATION: pb
  22. RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
  23. SRCU READ-SIDE CRITICAL SECTIONS
  24. LOCKING
  25. PLAIN ACCESSES AND DATA RACES
  26. ODDS AND ENDS



INTRODUCTION
------------

The Linux-kernel memory consistency model (LKMM) is rather complex and
obscure.  This is particularly evident if you read through the
linux-kernel.bell and linux-kernel.cat files that make up the formal
version of the model; they are extremely terse and their meanings are
far from clear.

This document describes the ideas underlying the LKMM.  It is meant
for people who want to understand how the model was designed.  It does
not go into the details of the code in the .bell and .cat files;
rather, it explains in English what the code expresses symbolically.

Sections 2 (BACKGROUND) through 5 (ORDERING AND CYCLES) are aimed
toward beginners; they explain what memory consistency models are and
the basic notions shared by all such models.  People already familiar
with these concepts can skim or skip over them.  Sections 6 (EVENTS)
through 12 (THE FROM-READS RELATION) describe the fundamental
relations used in many models.  Starting in Section 13 (AN OPERATIONAL
MODEL), the workings of the LKMM itself are covered.

Warning: The code examples in this document are not written in the
proper format for litmus tests.  They don't include a header line, the
initializations are not enclosed in braces, the global variables are
not passed by pointers, and they don't have an "exists" clause at the
end.  Converting them to the right format is left as an exercise for
the reader.


BACKGROUND
----------

A memory consistency model (or just memory model, for short) is
something which predicts, given a piece of computer code running on a
particular kind of system, what values may be obtained by the code's
load instructions.  The LKMM makes these predictions for code running
as part of the Linux kernel.

In practice, people tend to use memory models the other way around.
That is, given a piece of code and a collection of values specified
for the loads, the model will predict whether it is possible for the
code to run in such a way that the loads will indeed obtain the
specified values.  Of course, this is just another way of expressing
the same idea.

For code running on a uniprocessor system, the predictions are easy:
Each load instruction must obtain the value written by the most recent
store instruction accessing the same location (we ignore complicating
factors such as DMA and mixed-size accesses.)  But on multiprocessor
systems, with multiple CPUs making concurrent accesses to shared
memory locations, things aren't so simple.

Different architectures have differing memory models, and the Linux
kernel supports a variety of architectures.  The LKMM has to be fairly
permissive, in the sense that any behavior allowed by one of these
architectures also has to be allowed by the LKMM.


A SIMPLE EXAMPLE
----------------

Here is a simple example to illustrate the basic concepts.  Consider
some code running as part of a device driver for an input device.  The
driver might contain an interrupt handler which collects data from the
device, stores it in a buffer, and sets a flag to indicate the buffer
is full.  Running concurrently on a different CPU might be a part of
the driver code being executed by a process in the midst of a read(2)
system call.  This code tests the flag to see whether the buffer is
ready, and if it is, copies the data back to userspace.  The buffer
and the flag are memory locations shared between the two CPUs.

We can abstract out the important pieces of the driver code as follows
(the reason for using WRITE_ONCE() and READ_ONCE() instead of simple
assignment statements is discussed later):

	int buf = 0, flag = 0;

	P0()
	{
		WRITE_ONCE(buf, 1);
		WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2 = 0;

		r1 = READ_ONCE(flag);
		if (r1)
			r2 = READ_ONCE(buf);
	}

Here the P0() function represents the interrupt handler running on one
CPU and P1() represents the read() routine running on another.  The
value 1 stored in buf represents input data collected from the device.
Thus, P0 stores the data in buf and then sets flag.  Meanwhile, P1
reads flag into the private variable r1, and if it is set, reads the
data from buf into a second private variable r2 for copying to
userspace.  (Presumably if flag is not set then the driver will wait a
while and try again.)

This pattern of memory accesses, where one CPU stores values to two
shared memory locations and another CPU loads from those locations in
the opposite order, is widely known as the "Message Passing" or MP
pattern.  It is typical of memory access patterns in the kernel.

Please note that this example code is a simplified abstraction.  Real
buffers are usually larger than a single integer, real device drivers
usually use sleep and wakeup mechanisms rather than polling for I/O
completion, and real code generally doesn't bother to copy values into
private variables before using them.  All that is beside the point;
the idea here is simply to illustrate the overall pattern of memory
accesses by the CPUs.

A memory model will predict what values P1 might obtain for its loads
from flag and buf, or equivalently, what values r1 and r2 might end up
with after the code has finished running.

Some predictions are trivial.  For instance, no sane memory model would
predict that r1 = 42 or r2 = -7, because neither of those values ever
gets stored in flag or buf.

Some nontrivial predictions are nonetheless quite simple.  For
instance, P1 might run entirely before P0 begins, in which case r1 and
r2 will both be 0 at the end.  Or P0 might run entirely before P1
begins, in which case r1 and r2 will both be 1.

The interesting predictions concern what might happen when the two
routines run concurrently.  One possibility is that P1 runs after P0's
store to buf but before the store to flag.  In this case, r1 and r2
will again both be 0.  (If P1 had been designed to read buf
unconditionally then we would instead have r1 = 0 and r2 = 1.)

However, the most interesting possibility is where r1 = 1 and r2 = 0.
If this were to occur it would mean the driver contains a bug, because
incorrect data would get sent to the user: 0 instead of 1.  As it
happens, the LKMM does predict this outcome can occur, and the example
driver code shown above is indeed buggy.
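
Here is a sketch of how this test might look when converted to the
litmus-test format described in the warning above (the name on the
header line is arbitrary, and P1's conditional load has been
simplified to an unconditional one):

	C MP+sketch

	{}

	P0(int *buf, int *flag)
	{
		WRITE_ONCE(*buf, 1);
		WRITE_ONCE(*flag, 1);
	}

	P1(int *buf, int *flag)
	{
		int r1;
		int r2;

		r1 = READ_ONCE(*flag);
		r2 = READ_ONCE(*buf);
	}

	exists (1:r1=1 /\ 1:r2=0)

In this form the test has a header line, braces enclosing the (empty)
initializations, global variables passed by pointers, and an "exists"
clause naming the buggy outcome, so a tool such as herd7 can check
whether the LKMM allows that outcome.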


A SELECTION OF MEMORY MODELS
----------------------------

The first widely cited memory model, and the simplest to understand,
is Sequential Consistency.  According to this model, systems behave as
if each CPU executed its instructions in order but with unspecified
timing.  In other words, the instructions from the various CPUs get
interleaved in a nondeterministic way, always according to some single
global order that agrees with the order of the instructions in the
program source for each CPU.  The model says that the value obtained
by each load is simply the value written by the most recently executed
store to the same memory location, from any CPU.

For the MP example code shown above, Sequential Consistency predicts
that the undesired result r1 = 1, r2 = 0 cannot occur.  The reasoning
goes like this:

	Since r1 = 1, P0 must store 1 to flag before P1 loads 1 from
	it, as loads can obtain values only from earlier stores.

	P1 loads from flag before loading from buf, since CPUs execute
	their instructions in order.

	P1 must load 0 from buf before P0 stores 1 to it; otherwise r2
	would be 1 since a load obtains its value from the most recent
	store to the same address.

	P0 stores 1 to buf before storing 1 to flag, since it executes
	its instructions in order.

	Since an instruction (in this case, P0's store to flag) cannot
	execute before itself, the specified outcome is impossible.

However, real computer hardware almost never follows the Sequential
Consistency memory model; doing so would rule out too many valuable
performance optimizations.  On ARM and PowerPC architectures, for
instance, the MP example code really does sometimes yield r1 = 1 and
r2 = 0.

x86 and SPARC follow yet a different memory model: TSO (Total Store
Ordering).  This model predicts that the undesired outcome for the MP
pattern cannot occur, but in other respects it differs from Sequential
Consistency.  One example is the Store Buffer (SB) pattern, in which
each CPU stores to its own shared location and then loads from the
other CPU's location:

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		r1 = READ_ONCE(x);
	}

Sequential Consistency predicts that the outcome r0 = 0, r1 = 0 is
impossible.  (Exercise: Figure out the reasoning.)  But TSO allows
this outcome to occur, and in fact it does sometimes occur on x86 and
SPARC systems.

The LKMM was inspired by the memory models followed by PowerPC, ARM,
x86, Alpha, and other architectures.  However, it is different in
detail from each of them.


ORDERING AND CYCLES
-------------------

Memory models are all about ordering.  Often this is temporal ordering
(i.e., the order in which certain events occur) but it doesn't have to
be; consider for example the order of instructions in a program's
source code.  We saw above that Sequential Consistency makes an
important assumption that CPUs execute instructions in the same order
as those instructions occur in the code, and there are many other
instances of ordering playing central roles in memory models.

The counterpart to ordering is a cycle.  Ordering rules out cycles:
It's not possible to have X ordered before Y, Y ordered before Z, and
Z ordered before X, because this would mean that X is ordered before
itself.  The analysis of the MP example under Sequential Consistency
involved just such an impossible cycle:

	W: P0 stores 1 to flag   executes before
	X: P1 loads 1 from flag  executes before
	Y: P1 loads 0 from buf   executes before
	Z: P0 stores 1 to buf    executes before
	W: P0 stores 1 to flag.

In short, if a memory model requires certain accesses to be ordered,
and a certain outcome for the loads in a piece of code can happen only
if those accesses would form a cycle, then the memory model predicts
that outcome cannot occur.

The LKMM is defined largely in terms of cycles, as we will see.


EVENTS
------

The LKMM does not work directly with the C statements that make up
kernel source code.  Instead it considers the effects of those
statements in a more abstract form, namely, events.  The model
includes three types of events:

	Read events correspond to loads from shared memory, such as
	calls to READ_ONCE(), smp_load_acquire(), or
	rcu_dereference().

	Write events correspond to stores to shared memory, such as
	calls to WRITE_ONCE(), smp_store_release(), or atomic_set().

	Fence events correspond to memory barriers (also known as
	fences), such as calls to smp_rmb() or rcu_read_lock().

These categories are not exclusive; a read or write event can also be
a fence.  This happens with functions like smp_load_acquire() or
spin_lock().  However, no single event can be both a read and a write.
Atomic read-modify-write accesses, such as atomic_inc() or xchg(),
correspond to a pair of events: a read followed by a write.  (The
write event is omitted for executions where it doesn't occur, such as
a cmpxchg() where the comparison fails.)
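
For instance (a sketch with made-up variables; recall that a failed
cmpxchg() generates no write event):

	int x = 3;
	atomic_t v = ATOMIC_INIT(0);

	atomic_inc(&v);		/* one read event plus one write event */
	r1 = xchg(&x, 7);	/* likewise: a read of x and a write to x */
	r2 = cmpxchg(&x, 0, 9);	/* x is now 7, so the comparison fails:
				   a read event but no write event */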

Other parts of the code, those which do not involve interaction with
shared memory, do not give rise to events.  Thus, arithmetic and
logical computations, control-flow instructions, or accesses to
private memory or CPU registers are not of central interest to the
memory model.  They only affect the model's predictions indirectly.
For example, an arithmetic computation might determine the value that
gets stored to a shared memory location (or in the case of an array
index, the address where the value gets stored), but the memory model
is concerned only with the store itself -- its value and its address
-- not the computation leading up to it.

Events in the LKMM can be linked by various relations, which we will
describe in the following sections.  The memory model requires certain
of these relations to be orderings, that is, it requires them not to
have any cycles.


THE PROGRAM ORDER RELATION: po AND po-loc
-----------------------------------------

The most important relation between events is program order (po).  You
can think of it as the order in which statements occur in the source
code after branches are taken into account and loops have been
unrolled.  A better description might be the order in which
instructions are presented to a CPU's execution unit.  Thus, we say
that X is po-before Y (written as "X ->po Y" in formulas) if X occurs
before Y in the instruction stream.

This is inherently a single-CPU relation; two instructions executing
on different CPUs are never linked by po.  Also, it is by definition
an ordering so it cannot have any cycles.

po-loc is a sub-relation of po.  It links two memory accesses when the
first comes before the second in program order and they access the
same memory location (the "-loc" suffix).
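
For example (a sketch, where x and y are distinct shared variables):

	WRITE_ONCE(x, 1);	/* A */
	r1 = READ_ONCE(y);	/* B: A ->po B but not A ->po-loc B */
	WRITE_ONCE(y, 2);	/* C: B ->po-loc C, since both access y */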

Although this may seem straightforward, there is one subtle aspect to
program order we need to explain.  The LKMM was inspired by low-level
architectural memory models which describe the behavior of machine
code, and it retains their outlook to a considerable extent.  The
read, write, and fence events used by the model are close in spirit to
individual machine instructions.  Nevertheless, the LKMM describes
kernel code written in C, and the mapping from C to machine code can
be extremely complex.

Optimizing compilers have great freedom in the way they translate
source code to object code.  They are allowed to apply transformations
that add memory accesses, eliminate accesses, combine them, split them
into pieces, or move them around.  The use of READ_ONCE(), WRITE_ONCE(),
or one of the other atomic or synchronization primitives prevents a
large number of compiler optimizations.  In particular, it is guaranteed
that the compiler will not remove such accesses from the generated code
(unless it can prove the accesses will never be executed), it will not
change the order in which they occur in the code (within limits imposed
by the C standard), and it will not introduce extraneous accesses.

The MP and SB examples above used READ_ONCE() and WRITE_ONCE() rather
than ordinary memory accesses.  Thanks to this usage, we can be certain
that in the MP example, the compiler won't reorder P0's write event to
buf and P0's write event to flag, and similarly for the other shared
memory accesses in the examples.

Since private variables are not shared between CPUs, they can be
accessed normally without READ_ONCE() or WRITE_ONCE().  In fact, they
need not even be stored in normal memory at all -- in principle a
private variable could be stored in a CPU register (hence the convention
that these variables have names starting with the letter 'r').


A WARNING
---------

The protections provided by READ_ONCE(), WRITE_ONCE(), and others are
not perfect, and under some circumstances it is possible for the
compiler to undermine the memory model.  Here is an example.  Suppose
both branches of an "if" statement store the same value to the same
location:

	r1 = READ_ONCE(x);
	if (r1) {
		WRITE_ONCE(y, 2);
		...  /* do something */
	} else {
		WRITE_ONCE(y, 2);
		...  /* do something else */
	}

For this code, the LKMM predicts that the load from x will always be
executed before either of the stores to y.  However, a compiler could
lift the stores out of the conditional, transforming the code into
something resembling:

	r1 = READ_ONCE(x);
	WRITE_ONCE(y, 2);
	if (r1) {
		...  /* do something */
	} else {
		...  /* do something else */
	}

Given this version of the code, the LKMM would predict that the load
from x could be executed after the store to y.  Thus, the memory
model's original prediction could be invalidated by the compiler.

Another issue arises from the fact that in C, arguments to many
operators and function calls can be evaluated in any order.  For
example:

	r1 = f(5) + g(6);

The object code might call f(5) either before or after g(6); the
memory model cannot assume there is a fixed program order relation
between them.  (In fact, if the function calls are inlined then the
compiler might even interleave their object code.)


DEPENDENCY RELATIONS: data, addr, and ctrl
------------------------------------------

We say that two events are linked by a dependency relation when the
execution of the second event depends in some way on a value obtained
from memory by the first.  The first event must be a read, and the
value it obtains must somehow affect what the second event does.
There are three kinds of dependencies: data, address (addr), and
control (ctrl).

A read and a write event are linked by a data dependency if the value
obtained by the read affects the value stored by the write.  As a very
simple example:

	int x, y;

	r1 = READ_ONCE(x);
	WRITE_ONCE(y, r1 + 5);

The value stored by the WRITE_ONCE obviously depends on the value
loaded by the READ_ONCE.  Such dependencies can wind through
arbitrarily complicated computations, and a write can depend on the
values of multiple reads.
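
For instance, in the following sketch the store to z carries a data
dependency from both of the loads:

	int x, y, z;

	r1 = READ_ONCE(x);
	r2 = READ_ONCE(y);
	WRITE_ONCE(z, r1 * r2 + 1);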

A read event and another memory access event are linked by an address
dependency if the value obtained by the read affects the location
accessed by the other event.  The second event can be either a read or
a write.  Here's another simple example:

	int a[20];
	int i;

	r1 = READ_ONCE(i);
	r2 = READ_ONCE(a[r1]);

Here the location accessed by the second READ_ONCE() depends on the
index value loaded by the first.  Pointer indirection also gives rise
to address dependencies, since the address of a location accessed
through a pointer will depend on the value read earlier from that
pointer.
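
For example (a sketch, where ptr is assumed to be a shared pointer
to int):

	int x = 0;
	int *ptr = &x;
	int *q;

	q = READ_ONCE(ptr);	/* load the pointer's current value */
	r1 = READ_ONCE(*q);	/* the address accessed here depends on q */

The load of *q has an address dependency on the load of ptr.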

Finally, a read event X and a write event Y are linked by a control
dependency if Y syntactically lies within an arm of an if statement and
X affects the evaluation of the if condition via a data or address
dependency (or similarly for a switch statement).  Simple example:

	int x, y;

	r1 = READ_ONCE(x);
	if (r1)
		WRITE_ONCE(y, 1984);

Execution of the WRITE_ONCE() is controlled by a conditional expression
which depends on the value obtained by the READ_ONCE(); hence there is
a control dependency from the load to the store.

It should be pretty obvious that events can only depend on reads that
come earlier in program order.  Symbolically, if we have R ->data X,
R ->addr X, or R ->ctrl X (where R is a read event), then we must also
have R ->po X.  It wouldn't make sense for a computation to depend
somehow on a value that doesn't get loaded from shared memory until
later in the code!

Here's a trick question: When is a dependency not a dependency?  Answer:
When it is purely syntactic rather than semantic.  We say a dependency
between two accesses is purely syntactic if the second access doesn't
actually depend on the result of the first.  Here is a trivial example:

	r1 = READ_ONCE(x);
	WRITE_ONCE(y, r1 * 0);

There appears to be a data dependency from the load of x to the store
of y, since the value to be stored is computed from the value that was
loaded.  But in fact, the value stored does not really depend on
anything since it will always be 0.  Thus the data dependency is only
syntactic (it appears to exist in the code) but not semantic (the
second access will always be the same, regardless of the value of the
first access).  Given code like this, a compiler could simply discard
the value returned by the load from x, which would certainly destroy
any dependency.  (The compiler is not permitted to eliminate entirely
the load generated for a READ_ONCE() -- that's one of the nice
properties of READ_ONCE() -- but it is allowed to ignore the load's
value.)

It's natural to object that no one in their right mind would write
code like the above.  However, macro expansions can easily give rise
to this sort of thing, in ways that often are not apparent to the
programmer.

Another mechanism that can lead to purely syntactic dependencies is
related to the notion of "undefined behavior".  Certain program
behaviors are called "undefined" in the C language specification,
which means that when they occur there are no guarantees at all about
the outcome.  Consider the following example:

	int a[1];
	int i;

	r1 = READ_ONCE(i);
	r2 = READ_ONCE(a[r1]);

Access beyond the end or before the beginning of an array is one kind
of undefined behavior.  Therefore the compiler doesn't have to worry
about what will happen if r1 is nonzero, and it can assume that r1
will always be zero regardless of the value actually loaded from i.
(If the assumption turns out to be wrong the resulting behavior will
be undefined anyway, so the compiler doesn't care!)  Thus the value
from the load can be discarded, breaking the address dependency.

The LKMM is unaware that purely syntactic dependencies are different
from semantic dependencies and therefore mistakenly predicts that the
accesses in the two examples above will be ordered.  This is another
example of how the compiler can undermine the memory model.  Be warned.


THE READS-FROM RELATION: rf, rfi, and rfe
-----------------------------------------

The reads-from relation (rf) links a write event to a read event when
the value loaded by the read is the value that was stored by the
write.  In colloquial terms, the load "reads from" the store.  We
write W ->rf R to indicate that the load R reads from the store W.  We
further distinguish the cases where the load and the store occur on
the same CPU (internal reads-from, or rfi) and where they occur on
different CPUs (external reads-from, or rfe).

For our purposes, a memory location's initial value is treated as
though it had been written there by an imaginary initial store that
executes on a separate CPU before the main program runs.

Usage of the rf relation implicitly assumes that loads will always
read from a single store.  It doesn't apply properly in the presence
of load-tearing, where a load obtains some of its bits from one store
and some of them from another store.  Fortunately, use of READ_ONCE()
and WRITE_ONCE() will prevent load-tearing; it's not possible to have:

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 0x1234);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);
	}

and end up with r1 = 0x1200 (partly from x's initial value and partly
from the value stored by P0).

On the other hand, load-tearing is unavoidable when mixed-size
accesses are used.  Consider this example:

	union {
		u32	w;
		u16	h[2];
	} x;

	P0()
	{
		WRITE_ONCE(x.h[0], 0x1234);
		WRITE_ONCE(x.h[1], 0x5678);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x.w);
	}

If r1 = 0x56781234 (little-endian!) at the end, then P1 must have read
from both of P0's stores.  It is possible to handle mixed-size and
unaligned accesses in a memory model, but the LKMM currently does not
attempt to do so.  It requires all accesses to be properly aligned and
of the location's actual size.


CACHE COHERENCE AND THE COHERENCE ORDER RELATION: co, coi, and coe
------------------------------------------------------------------

Cache coherence is a general principle requiring that in a
multi-processor system, the CPUs must share a consistent view of the
memory contents.  Specifically, it requires that for each location in
shared memory, the stores to that location must form a single global
ordering which all the CPUs agree on (the coherence order), and this
ordering must be consistent with the program order for accesses to
that location.

To put it another way, for any variable x, the coherence order (co) of
the stores to x is simply the order in which the stores overwrite one
another.  The imaginary store which establishes x's initial value
comes first in the coherence order; the store which directly
overwrites the initial value comes second; the store which overwrites
that value comes third, and so on.

You can think of the coherence order as being the order in which the
stores reach x's location in memory (or if you prefer a more
hardware-centric view, the order in which the stores get written to
x's cache line).  We write W ->co W' if W comes before W' in the
coherence order, that is, if the value stored by W gets overwritten,
directly or indirectly, by the value stored by W'.

Coherence order is required to be consistent with program order.  This
requirement takes the form of four coherency rules:

	Write-write coherence: If W ->po-loc W' (i.e., W comes before
	W' in program order and they access the same location), where W
	and W' are two stores, then W ->co W'.

	Write-read coherence: If W ->po-loc R, where W is a store and R
	is a load, then R must read from W or from some other store
	which comes after W in the coherence order.

	Read-write coherence: If R ->po-loc W, where R is a load and W
	is a store, then the store which R reads from must come before
	W in the coherence order.

	Read-read coherence: If R ->po-loc R', where R and R' are two
	loads, then either they read from the same store or else the
	store read by R comes before the store read by R' in the
	coherence order.

This is sometimes referred to as sequential consistency per variable,
because it means that the accesses to any single memory location obey
the rules of the Sequential Consistency memory model.  (According to
Wikipedia, sequential consistency per variable and cache coherence
mean the same thing except that cache coherence includes an extra
requirement that every store eventually becomes visible to every CPU.)

Any reasonable memory model will include cache coherence.  Indeed, our
expectation of cache coherence is so deeply ingrained that violations
of its requirements look more like hardware bugs than programming
errors:

	int x;

	P0()
	{
		WRITE_ONCE(x, 17);
		WRITE_ONCE(x, 23);
	}

If the final value stored in x after this code ran was 17, you would
think your computer was broken.  It would be a violation of the
write-write coherence rule: Since the store of 23 comes later in
program order, it must also come later in x's coherence order and
thus must overwrite the store of 17.
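
A violation of the write-read coherence rule would be just as
startling.  For example (a sketch):

	int x = 0;

	P0()
	{
		int r1;

		WRITE_ONCE(x, 9);
		r1 = READ_ONCE(x);
	}

If r1 = 0 at the end, the rule would be violated: The load comes
after the store in program order, so it must read either from that
store or from some store coming after it in x's coherence order, not
from x's initial value.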

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
		WRITE_ONCE(x, 666);
	}

If r1 = 666 at the end, this would violate the read-write coherence
rule: The READ_ONCE() load comes before the WRITE_ONCE() store in
program order, so it must not read from that store but rather from one
coming earlier in the coherence order (in this case, x's initial
value).

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 5);
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		r2 = READ_ONCE(x);
	}

If r1 = 5 (reading from P0's store) and r2 = 0 (reading from the
imaginary store which establishes x's initial value) at the end, this
would violate the read-read coherence rule: The r1 load comes before
the r2 load in program order, so it must not read from a store that
comes later in the coherence order.

(As a minor curiosity, if this code had used normal loads instead of
READ_ONCE() in P1, on Itanium it sometimes could end up with r1 = 5
and r2 = 0!  This results from parallel execution of the operations
encoded in Itanium's Very-Long-Instruction-Word format, and it is yet
another motivation for using READ_ONCE() when accessing shared memory
locations.)

Just like the po relation, co is inherently an ordering -- it is not
possible for a store to directly or indirectly overwrite itself!  And
just like with the rf relation, we distinguish between stores that
occur on the same CPU (internal coherence order, or coi) and stores
that occur on different CPUs (external coherence order, or coe).

On the other hand, stores to different memory locations are never
related by co, just as instructions on different CPUs are never
related by po.  Coherence order is strictly per-location, or if you
prefer, each location has its own independent coherence order.


THE FROM-READS RELATION: fr, fri, and fre
-----------------------------------------

The from-reads relation (fr) can be a little difficult for people to
grok.  It describes the situation where a load reads a value that gets
overwritten by a store.  In other words, we have R ->fr W when the
value that R reads is overwritten (directly or indirectly) by W, or
equivalently, when R reads from a store which comes earlier than W in
the coherence order.

For example:

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
		WRITE_ONCE(x, 2);
	}

The value loaded from x will be 0 (assuming cache coherence!), and it
gets overwritten by the value 2.  Thus there is an fr link from the
READ_ONCE() to the WRITE_ONCE().  If the code contained any later
stores to x, there would also be fr links from the READ_ONCE() to
them.

As with rf, rfi, and rfe, we subdivide the fr relation into fri (when
the load and the store are on the same CPU) and fre (when they are on
different CPUs).
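
For instance, splitting the example above across two CPUs gives an
fre link (a sketch):

	int x = 0;

	P0()
	{
		int r1;

		r1 = READ_ONCE(x);
	}

	P1()
	{
		WRITE_ONCE(x, 2);
	}

If r1 = 0 at the end, then P0's load read from the imaginary initial
store, which is co-before P1's store; hence the load is linked to
P1's WRITE_ONCE() by fre.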

Note that the fr relation is determined entirely by the rf and co
relations; it is not independent.  Given a read event R and a write
event W for the same location, we will have R ->fr W if and only if
the write which R reads from is co-before W.  In symbols,

	(R ->fr W) := (there exists W' with W' ->rf R and W' ->co W).


AN OPERATIONAL MODEL
--------------------

The LKMM is based on various operational memory models, meaning that
the models arise from an abstract view of how a computer system
operates.  Here are the main ideas, as incorporated into the LKMM.

The system as a whole is divided into the CPUs and a memory subsystem.
The CPUs are responsible for executing instructions (not necessarily
in program order), and they communicate with the memory subsystem.
For the most part, executing an instruction requires a CPU to perform
only internal operations.  However, loads, stores, and fences involve
more.

When CPU C executes a store instruction, it tells the memory subsystem
to store a certain value at a certain location.  The memory subsystem
propagates the store to all the other CPUs as well as to RAM.  (As a
special case, we say that the store propagates to its own CPU at the
time it is executed.)  The memory subsystem also determines where the
store falls in the location's coherence order.  In particular, it must
arrange for the store to be co-later than (i.e., to overwrite) any
other store to the same location which has already propagated to CPU C.

When a CPU executes a load instruction R, it first checks to see
whether there are any as-yet unexecuted store instructions, for the
same location, that come before R in program order.  If there are, it
uses the value of the po-latest such store as the value obtained by R,
and we say that the store's value is forwarded to R.  Otherwise, the
CPU asks the memory subsystem for the value to load and we say that R
is satisfied from memory.  The memory subsystem hands back the value
of the co-latest store to the location in question which has already
propagated to that CPU.
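
For example (a sketch): suppose a CPU executes a store that has not
yet propagated anywhere:

	int x = 0;

	P0()
	{
		int r1;

		WRITE_ONCE(x, 1);
		r1 = READ_ONCE(x);
	}

The value 1 can be forwarded from the store to the load, so r1 = 1
even if the store has not yet reached the memory subsystem, let
alone propagated to any other CPU.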
8071c27b644SPaul E. McKenney
(In fact, the picture needs to be a little more complicated than this.
CPUs have local caches, and propagating a store to a CPU really means
propagating it to the CPU's local cache.  A local cache can take some
time to process the stores that it receives, and a store can't be used
to satisfy one of the CPU's loads until it has been processed.  On
most architectures, the local caches process stores in
First-In-First-Out order, and consequently the processing delay
doesn't matter for the memory model.  But on Alpha, the local caches
have a partitioned design that results in non-FIFO behavior.  We will
discuss this in more detail later.)

Note that load instructions may be executed speculatively and may be
restarted under certain circumstances.  The memory model ignores these
premature executions; we simply say that the load executes at the
final time it is forwarded or satisfied.

Executing a fence (or memory barrier) instruction doesn't require a
CPU to do anything special other than informing the memory subsystem
about the fence.  However, fences do constrain the way CPUs and the
memory subsystem handle other instructions, in two respects.

First, a fence forces the CPU to execute various instructions in
program order.  Exactly which instructions are ordered depends on the
type of fence:

	Strong fences, including smp_mb() and synchronize_rcu(), force
	the CPU to execute all po-earlier instructions before any
	po-later instructions;

	smp_rmb() forces the CPU to execute all po-earlier loads
	before any po-later loads;

	smp_wmb() forces the CPU to execute all po-earlier stores
	before any po-later stores;

	Acquire fences, such as smp_load_acquire(), force the CPU to
	execute the load associated with the fence (e.g., the load
	part of an smp_load_acquire()) before any po-later
	instructions;

	Release fences, such as smp_store_release(), force the CPU to
	execute all po-earlier instructions before the store
	associated with the fence (e.g., the store part of an
	smp_store_release()).

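For example, here is a sketch (not part of the original text) of the
execution ordering enforced by release and acquire fences:

	int x = 0, y = 0;

	P0()
	{
		WRITE_ONCE(x, 1);
		smp_store_release(&y, 1);
	}

	P1()
	{
		int r1, r2;

		r1 = smp_load_acquire(&y);
		r2 = READ_ONCE(x);
	}

The release fence forces P0 to execute its store to x before its
store to y, and the acquire fence forces P1 to execute its load of y
before its load of x.  Together with the propagation properties
described next, this forbids the r1 = 1, r2 = 0 outcome.
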
Second, some types of fence affect the way the memory subsystem
propagates stores.  When a fence instruction is executed on CPU C:

	For each other CPU C', smp_wmb() forces all po-earlier stores
	on C to propagate to C' before any po-later stores do.

	For each other CPU C', any store which propagates to C before
	a release fence is executed (including all po-earlier
	stores executed on C) is forced to propagate to C' before the
	store associated with the release fence does.

	Any store which propagates to C before a strong fence is
	executed (including all po-earlier stores on C) is forced to
	propagate to all other CPUs before any instructions po-after
	the strong fence are executed on C.

The propagation ordering enforced by release fences and strong fences
affects stores from other CPUs that propagate to CPU C before the
fence is executed, as well as stores that are executed on C before the
fence.  We describe this property by saying that release fences and
strong fences are A-cumulative.  By contrast, smp_wmb() fences are not
A-cumulative; they only affect the propagation of stores that are
executed on C before the fence (i.e., those which precede the fence in
program order).

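A-cumulativity is easiest to see with three CPUs.  The following
sketch (not in the original text) is the classic write-to-read
causality pattern:

	int x = 0, y = 0;

	P0()
	{
		WRITE_ONCE(x, 1);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);
		smp_store_release(&y, 1);
	}

	P2()
	{
		int r2, r3;

		r2 = READ_ONCE(y);
		smp_rmb();
		r3 = READ_ONCE(x);
	}

If r1 = 1 then P0's store to x propagated to P1 before the release
fence executed, so A-cumulativity forces it to propagate to P2 before
the store to y does.  Hence if r2 = 1 as well then r3 must also be 1
(the smp_rmb() makes P2 execute its loads in program order).  An
smp_wmb() in place of the release fence would not provide this
guarantee, because the store to x was not executed on P1.
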
rcu_read_lock(), rcu_read_unlock(), and synchronize_rcu() fences have
other properties which we discuss later.


PROPAGATION ORDER RELATION: cumul-fence
---------------------------------------

The fences which affect propagation order (i.e., strong, release, and
smp_wmb() fences) are collectively referred to as cumul-fences, even
though smp_wmb() isn't A-cumulative.  The cumul-fence relation is
defined to link memory access events E and F whenever:

	E and F are both stores on the same CPU and an smp_wmb() fence
	event occurs between them in program order; or

	F is a release fence and some X comes before F in program order,
	where either X = E or else E ->rf X; or

	A strong fence event occurs between some X and F in program
	order, where either X = E or else E ->rf X.

The operational model requires that whenever W and W' are both stores
and W ->cumul-fence W', then W must propagate to any given CPU
before W' does.  However, for different CPUs C and C', it does not
require W to propagate to C before W' propagates to C'.

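As a minimal sketch of the first clause (this annotated fragment is
not in the original text):

	int x, y;

	P0()
	{
		WRITE_ONCE(x, 1);	/* E */
		smp_wmb();
		WRITE_ONCE(y, 1);	/* F: E ->cumul-fence F */
	}

Here the store to x must propagate to each CPU before the store to y
does, although different CPUs may receive the pair at very different
times.
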

DERIVATION OF THE LKMM FROM THE OPERATIONAL MODEL
-------------------------------------------------

The LKMM is derived from the restrictions imposed by the design
outlined above.  These restrictions involve the necessity of
maintaining cache coherence and the fact that a CPU can't operate on a
value before it knows what that value is, among other things.

The formal version of the LKMM is defined by six requirements, or
axioms:

	Sequential consistency per variable: This requires that the
	system obey the four coherency rules.

	Atomicity: This requires that atomic read-modify-write
	operations really are atomic, that is, no other stores can
	sneak into the middle of such an update.

	Happens-before: This requires that certain instructions are
	executed in a specific order.

	Propagation: This requires that certain stores propagate to
	CPUs and to RAM in a specific order.

	Rcu: This requires that RCU read-side critical sections and
	grace periods obey the rules of RCU, in particular, the
	Grace-Period Guarantee.

	Plain-coherence: This requires that plain memory accesses
	(those not using READ_ONCE(), WRITE_ONCE(), etc.) must obey
	the operational model's rules regarding cache coherence.

The first and second are quite common; they can be found in many
memory models (such as those for C11/C++11).  The "happens-before" and
"propagation" axioms have analogs in other memory models as well.  The
"rcu" and "plain-coherence" axioms are specific to the LKMM.

Each of these axioms is discussed below.


SEQUENTIAL CONSISTENCY PER VARIABLE
-----------------------------------

According to the principle of cache coherence, the stores to any fixed
shared location in memory form a global ordering.  We can imagine
inserting the loads from that location into this ordering, by placing
each load between the store that it reads from and the following
store.  This leaves the relative positions of loads that read from the
same store unspecified; let's say they are inserted in program order,
first for CPU 0, then CPU 1, etc.

You can check that the four coherency rules imply that the rf, co, fr,
and po-loc relations agree with this global ordering; in other words,
whenever we have X ->rf Y or X ->co Y or X ->fr Y or X ->po-loc Y, the
X event comes before the Y event in the global ordering.  The LKMM's
"coherence" axiom expresses this by requiring the union of these
relations not to have any cycles.  This means it must not be possible
to find events

	X0 -> X1 -> X2 -> ... -> Xn -> X0,

where each of the links is either rf, co, fr, or po-loc.  This has to
hold if the accesses to the fixed memory location can be ordered as
cache coherence demands.

Although it is not obvious, it can be shown that the converse is also
true: This LKMM axiom implies that the four coherency rules are
obeyed.

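For instance, here is a sketch (not in the original text) of an
outcome that would require a forbidden cycle made of rf, po-loc, and
fr links:

	int x = 0;

	P0()
	{
		WRITE_ONCE(x, 1);
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		r2 = READ_ONCE(x);
	}

The outcome r1 = 1, r2 = 0 would require W (P0's store) ->rf R1,
R1 ->po-loc R2, and R2 ->fr W, since R2 would read from the initial
value, which comes before W in the coherence order.  That is a cycle,
so the axiom forbids it: once P1 has seen x = 1, it cannot later see
x = 0.
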

ATOMIC UPDATES: rmw
-------------------

What does it mean to say that a read-modify-write (rmw) update, such
as atomic_inc(&x), is atomic?  It means that the memory location (x in
this case) does not get altered between the read and the write events
making up the atomic operation.  In particular, if two CPUs perform
atomic_inc(&x) concurrently, it must be guaranteed that the final
value of x will be the initial value plus two.  We should never have
the following sequence of events:

	CPU 0 loads x obtaining 13;
					CPU 1 loads x obtaining 13;
	CPU 0 stores 14 to x;
					CPU 1 stores 14 to x;

where the final value of x is wrong (14 rather than 15).

In this example, CPU 0's increment effectively gets lost because it
occurs in between CPU 1's load and store.  To put it another way, the
problem is that the position of CPU 0's store in x's coherence order
is between the store that CPU 1 reads from and the store that CPU 1
performs.

The same analysis applies to all atomic update operations.  Therefore,
to enforce atomicity the LKMM requires that atomic updates follow this
rule: Whenever R and W are the read and write events composing an
atomic read-modify-write and W' is the write event which R reads from,
there must not be any stores coming between W' and W in the coherence
order.  Equivalently,

	(R ->rmw W) implies (there is no X with R ->fr X and X ->co W),

where the rmw relation links the read and write events making up each
atomic update.  This is what the LKMM's "atomic" axiom says.

Atomic rmw updates play one more role in the LKMM: They can form "rmw
sequences".  An rmw sequence is simply a bunch of atomic updates where
each update reads from the previous one.  Written using events, it
looks like this:

	Z0 ->rf Y1 ->rmw Z1 ->rf ... ->rf Yn ->rmw Zn,

where Z0 is some store event and n can be any number (even 0, in the
degenerate case).  We write this relation as: Z0 ->rmw-sequence Zn.
Note that this implies Z0 and Zn are stores to the same variable.

Rmw sequences have a special property in the LKMM: They can extend the
cumul-fence relation.  That is, if we have:

	U ->cumul-fence X ->rmw-sequence Y

then also U ->cumul-fence Y.  Thinking about this in terms of the
operational model, U ->cumul-fence X says that the store U propagates
to each CPU before the store X does.  Then the fact that X and Y are
linked by an rmw sequence means that U also propagates to each CPU
before Y does.  In an analogous way, rmw sequences can also extend
the w-post-bounded relation defined below in the PLAIN ACCESSES AND
DATA RACES section.

(The notion of rmw sequences in the LKMM is similar to, but not quite
the same as, that of release sequences in the C11 memory model.  They
were added to the LKMM to fix an obscure bug; without them, atomic
updates with full-barrier semantics did not always guarantee ordering
at least as strong as atomic updates with release-barrier semantics.)

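Here is a sketch (not in the original text) of how an rmw sequence
extends cumul-fence; the annotations assume the xchg() reads from
P0's store to y:

	int x = 0, y = 0;

	P0()
	{
		WRITE_ONCE(x, 1);	/* U */
		smp_wmb();
		WRITE_ONCE(y, 1);	/* X: U ->cumul-fence X */
	}

	P1()
	{
		int r1;

		r1 = xchg(&y, 2);	/* if r1 = 1: X ->rmw-sequence (store of 2) */
	}

When the xchg() obtains 1, its store of 2 extends the rmw sequence
beginning at X, and so U ->cumul-fence also links U to that store:
x = 1 must propagate to each CPU before y = 2 does.
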

THE PRESERVED PROGRAM ORDER RELATION: ppo
-----------------------------------------

There are many situations where a CPU is obliged to execute two
instructions in program order.  We amalgamate them into the ppo (for
"preserved program order") relation, which links the po-earlier
instruction to the po-later instruction and is thus a sub-relation of
po.

The operational model already includes a description of one such
situation: Fences are a source of ppo links.  Suppose X and Y are
memory accesses with X ->po Y; then the CPU must execute X before Y if
any of the following hold:

	A strong (smp_mb() or synchronize_rcu()) fence occurs between
	X and Y;

	X and Y are both stores and an smp_wmb() fence occurs between
	them;

	X and Y are both loads and an smp_rmb() fence occurs between
	them;

	X is also an acquire fence, such as smp_load_acquire();

	Y is also a release fence, such as smp_store_release().

Another possibility, not mentioned earlier but discussed in the next
section, is:

	X and Y are both loads, X ->addr Y (i.e., there is an address
	dependency from X to Y), and X is a READ_ONCE() or an atomic
	access.

Dependencies can also cause instructions to be executed in program
order.  This is uncontroversial when the second instruction is a
store; either a data, address, or control dependency from a load R to
a store W will force the CPU to execute R before W.  This is very
simply because the CPU cannot tell the memory subsystem about W's
store before it knows what value should be stored (in the case of a
data dependency), what location it should be stored into (in the case
of an address dependency), or whether the store should actually take
place (in the case of a control dependency).

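For example, here is a sketch (not in the original text) of a control
dependency from a load to a store:

	int x, y;

	P0()
	{
		int r0;

		r0 = READ_ONCE(x);
		if (r0)
			WRITE_ONCE(y, 1);	/* R ->ctrl W */
	}

The CPU cannot tell the memory subsystem about the store to y until
it knows whether that store should happen at all, so it must execute
the load of x first.
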
Dependencies to load instructions are more problematic.  To begin with,
there is no such thing as a data dependency to a load.  Next, a CPU
has no reason to respect a control dependency to a load, because it
can always satisfy the second load speculatively before the first, and
then ignore the result if it turns out that the second load shouldn't
be executed after all.  And lastly, the real difficulties begin when
we consider address dependencies to loads.

To be fair about it, all Linux-supported architectures do execute
loads in program order if there is an address dependency between them.
After all, a CPU cannot ask the memory subsystem to load a value from
a particular location before it knows what that location is.  However,
the split-cache design used by Alpha can cause it to behave in a way
that looks as if the loads were executed out of order (see the next
section for more details).  The kernel includes a workaround for this
problem when the loads come from READ_ONCE(), and therefore the LKMM
includes address dependencies to loads in the ppo relation.

On the other hand, dependencies can indirectly affect the ordering of
two loads.  This happens when there is a dependency from a load to a
store and a second, po-later load reads from that store:

	R ->dep W ->rfi R',

where the dep link can be either an address or a data dependency.  In
this situation we know it is possible for the CPU to execute R' before
W, because it can forward the value that W will store to R'.  But it
cannot execute R' before R, because it cannot forward the value before
it knows what that value is, or that W and R' do access the same
location.  However, if there is merely a control dependency between R
and W then the CPU can speculatively forward W to R' before executing
R; if the speculation turns out to be wrong then the CPU merely has to
restart or abandon R'.

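In concrete terms, the R ->dep W ->rfi R' pattern looks like this
(a sketch, not in the original text):

	int x, y;

	P0()
	{
		int r1, r2;

		r1 = READ_ONCE(x);	/* R */
		WRITE_ONCE(y, r1);	/* W: R ->data W */
		r2 = READ_ONCE(y);	/* R': W ->rfi R' */
	}

The CPU may forward the value of W to R' before W itself executes,
but it cannot do so before R executes, since until then it does not
know what value W will store.
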
(In theory, a CPU might forward a store to a load when it runs across
an address dependency like this:

	r1 = READ_ONCE(ptr);
	WRITE_ONCE(*r1, 17);
	r2 = READ_ONCE(*r1);

because it could tell that the store and the second load access the
same location even before it knows what the location's address is.
However, none of the architectures supported by the Linux kernel do
this.)

Two memory accesses of the same location must always be executed in
program order if the second access is a store.  Thus, if we have

	R ->po-loc W

(the po-loc link says that R comes before W in program order and they
access the same location), the CPU is obliged to execute W after R.
If it executed W first then the memory subsystem would respond to R's
read request with the value stored by W (or an even later store), in
violation of the read-write coherence rule.  Similarly, if we had

	W ->po-loc W'

and the CPU executed W' before W, then the memory subsystem would put
W' before W in the coherence order.  It would effectively cause W to
overwrite W', in violation of the write-write coherence rule.
(Interestingly, an early ARMv8 memory model, now obsolete, proposed
allowing out-of-order writes like this to occur.  The model avoided
violating the write-write coherence rule by requiring the CPU not to
send the W write to the memory subsystem at all!)


AND THEN THERE WAS ALPHA
------------------------

As mentioned above, the Alpha architecture is unique in that it does
not appear to respect address dependencies to loads.  This means that
code such as the following:

	int x = 0;
	int y = -1;
	int *ptr = &y;

	P0()
	{
		WRITE_ONCE(x, 1);
		smp_wmb();
		WRITE_ONCE(ptr, &x);
	}

	P1()
	{
		int *r1;
		int r2;

		r1 = ptr;
		r2 = READ_ONCE(*r1);
	}

can malfunction on Alpha systems (notice that P1 uses an ordinary load
to read ptr instead of READ_ONCE()).  It is quite possible that r1 = &x
and r2 = 0 at the end, in spite of the address dependency.

At first glance this doesn't seem to make sense.  We know that the
smp_wmb() forces P0's store to x to propagate to P1 before the store
to ptr does.  And since P1 can't execute its second load
until it knows what location to load from, i.e., after executing its
first load, the value x = 1 must have propagated to P1 before the
second load executed.  So why doesn't r2 end up equal to 1?

The answer lies in the Alpha's split local caches.  Although the two
stores do reach P1's local cache in the proper order, it can happen
that the first store is processed by a busy part of the cache while
the second store is processed by an idle part.  As a result, the x = 1
value may not become available for P1's CPU to read until after the
ptr = &x value does, leading to the undesirable result above.  The
final effect is that even though the two loads really are executed in
program order, it appears that they aren't.

This could not have happened if the local cache had processed the
incoming stores in FIFO order.  By contrast, other architectures
maintain at least the appearance of FIFO order.

In practice, this difficulty is solved by inserting a special fence
between P1's two loads when the kernel is compiled for the Alpha
architecture.  In fact, as of version 4.15, the kernel automatically
adds this fence after every READ_ONCE() and atomic load on Alpha.  The
effect of the fence is to cause the CPU not to execute any po-later
instructions until after the local cache has finished processing all
the stores it has already received.  Thus, if the code were changed to:

	P1()
	{
		int *r1;
		int r2;

		r1 = READ_ONCE(ptr);
		r2 = READ_ONCE(*r1);
	}

then we would never get r1 = &x and r2 = 0.  By the time P1 executed
its second load, the x = 1 store would already be fully processed by
the local cache and available for satisfying the read request.  Thus
we have yet another reason why shared data should always be read with
READ_ONCE() or another synchronization primitive rather than accessed
directly.

The LKMM requires that smp_rmb(), acquire fences, and strong fences
share this property: They do not allow the CPU to execute any po-later
instructions (or po-later loads in the case of smp_rmb()) until all
outstanding stores have been processed by the local cache.  In the
case of a strong fence, the CPU first has to wait for all of its
po-earlier stores to propagate to every other CPU in the system; then
it has to wait for the local cache to process all the stores received
as of that time -- not just the stores received when the strong fence
began.

And of course, none of this matters for any architecture other than
Alpha.


THE HAPPENS-BEFORE RELATION: hb
-------------------------------

The happens-before relation (hb) links memory accesses that have to
execute in a certain order.  hb includes the ppo relation and two
others, one of which is rfe.

W ->rfe R implies that W and R are on different CPUs.  It also means
that W's store must have propagated to R's CPU before R executed;
otherwise R could not have read the value stored by W.  Therefore W
must have executed before R, and so we have W ->hb R.

The equivalent fact need not hold if W ->rfi R (i.e., W and R are on
the same CPU).  As we have already seen, the operational model allows
W's value to be forwarded to R in such cases, meaning that R may well
execute before W does.

It's important to understand that neither coe nor fre is included in
hb, despite their similarities to rfe.  For example, suppose we have
W ->coe W'.  This means that W and W' are stores to the same location,
they execute on different CPUs, and W comes before W' in the coherence
order (i.e., W' overwrites W).  Nevertheless, it is possible for W' to
execute before W, because the decision as to which store overwrites
the other is made later by the memory subsystem.  When the stores are
nearly simultaneous, either one can come out on top.  Similarly,
R ->fre W means that W overwrites the value which R reads, but it
doesn't mean that W has to execute after R.  All that's necessary is
for the memory subsystem not to propagate W to R's CPU until after R
has executed, which is possible if W executes shortly before R.

The third relation included in hb is like ppo, in that it only links
events that are on the same CPU.  However it is more difficult to
explain, because it arises only indirectly from the requirement of
cache coherence.  The relation is called prop, and it links two events
on CPU C in situations where a store from some other CPU comes after
the first event in the coherence order and propagates to C before the
second event executes.

This is best explained with some examples.  The simplest case looks
like this:

	int x;

	P0()
	{
		int r1;

		WRITE_ONCE(x, 1);
		r1 = READ_ONCE(x);
	}

	P1()
	{
		WRITE_ONCE(x, 8);
	}

If r1 = 8 at the end then P0's accesses must have executed in program
order.  We can deduce this from the operational model; if P0's load
had executed before its store then the value of the store would have
been forwarded to the load, so r1 would have ended up equal to 1, not
8.  In this case there is a prop link from P0's write event to its read
event, because P1's store came after P0's store in x's coherence
order, and P1's store propagated to P0 before P0's load executed.

An equally simple case involves two loads of the same location that
read from different stores:

	int x = 0;

	P0()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		r2 = READ_ONCE(x);
	}

	P1()
	{
		WRITE_ONCE(x, 9);
	}

If r1 = 0 and r2 = 9 at the end then P0's accesses must have executed
in program order.  If the second load had executed before the first
then the x = 9 store must have been propagated to P0 before the first
load executed, and so r1 would have been 9 rather than 0.  In this
case there is a prop link from P0's first read event to its second,
because P1's store overwrote the value read by P0's first load, and
P1's store propagated to P0 before P0's second load executed.

Less trivial examples of prop all involve fences.  Unlike the simple
examples above, they can require that some instructions are executed
out of program order.  This next one should look familiar:

	int buf = 0, flag = 0;

	P0()
	{
		WRITE_ONCE(buf, 1);
		smp_wmb();
		WRITE_ONCE(flag, 1);
	}

	P1()
	{
		int r1;
		int r2;

		r1 = READ_ONCE(flag);
		r2 = READ_ONCE(buf);
	}

This is the MP pattern again, with an smp_wmb() fence between the two
stores.  If r1 = 1 and r2 = 0 at the end then there is a prop link
from P1's second load to its first (backwards!).  The reason is
similar to the previous examples: The value P1 loads from buf gets
overwritten by P0's store to buf, the fence guarantees that the store
to buf will propagate to P1 before the store to flag does, and the
store to flag propagates to P1 before P1 reads flag.

The prop link says that in order to obtain the r1 = 1, r2 = 0 result,
P1 must execute its second load before the first.  Indeed, if the load
from flag were executed first, then the buf = 1 store would already
have propagated to P1 by the time P1's load from buf executed, so r2
would have been 1 at the end, not 0.  (The reasoning holds even for
Alpha, although the details are more complicated and we will not go
into them.)

But what if we put an smp_rmb() fence between P1's loads?  The fence
would force the two loads to be executed in program order, and it
would generate a cycle in the hb relation: The fence would create a ppo
link (hence an hb link) from the first load to the second, and the
prop relation would give an hb link from the second load to the first.
Since an instruction can't execute before itself, we are forced to
conclude that if an smp_rmb() fence is added, the r1 = 1, r2 = 0
outcome is impossible -- as it should be.

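For concreteness, P1 with the added fence would look like this (a
spelled-out version of the change just described):

	P1()
	{
		int r1;
		int r2;

		r1 = READ_ONCE(flag);
		smp_rmb();
		r2 = READ_ONCE(buf);
	}
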
The formal definition of the prop relation involves a coe or fre link,
followed by an arbitrary number of cumul-fence links, ending with an
rfe link.  You can concoct more exotic examples, containing more than
one fence, although this quickly leads to diminishing returns in terms
of complexity.  For instance, here's an example containing a coe link
followed by two cumul-fences and an rfe link, utilizing the fact that
release fences are A-cumulative:

	int x, y, z;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(z);
	}

	P1()
	{
		WRITE_ONCE(x, 2);
		smp_wmb();
		WRITE_ONCE(y, 1);
	}

	P2()
	{
		int r2;

		r2 = READ_ONCE(y);
		smp_store_release(&z, 1);
	}

If x = 2, r0 = 1, and r2 = 1 after this code runs then there is a prop
link from P0's store to its load.  This is because P0's store gets
overwritten by P1's store since x = 2 at the end (a coe link), the
smp_wmb() ensures that P1's store to x propagates to P2 before the
store to y does (the first cumul-fence), the store to y propagates to P2
before P2's load and store execute, P2's smp_store_release()
guarantees that the stores to x and y both propagate to P0 before the
store to z does (the second cumul-fence), and P0's load executes after the
store to z has propagated to P0 (an rfe link).

In summary, the fact that the hb relation links memory access events
in the order they execute means that it must not have cycles.  This
requirement is the content of the LKMM's "happens-before" axiom.

The LKMM defines yet another relation connected to times of
instruction execution, but it is not included in hb.  It relies on the
particular properties of strong fences, which we cover in the next
section.


THE PROPAGATES-BEFORE RELATION: pb
----------------------------------

The propagates-before (pb) relation capitalizes on the special
features of strong fences.  It links two events E and F whenever some
store is coherence-later than E and propagates to every CPU and to RAM
before F executes.  The formal definition requires that E be linked to
F via a coe or fre link, an arbitrary number of cumul-fences, an
optional rfe link, a strong fence, and an arbitrary number of hb
links.  Let's see how this definition works out.

Consider first the case where E is a store (implying that the sequence
of links begins with coe).  Then there are events W, X, Y, and Z such
that:

	E ->coe W ->cumul-fence* X ->rfe? Y ->strong-fence Z ->hb* F,

where the * suffix indicates an arbitrary number of links of the
specified type, and the ? suffix indicates the link is optional (Y may
be equal to X).  Because of the cumul-fence links, we know that W will
propagate to Y's CPU before X does, hence before Y executes and hence
before the strong fence executes.  Because this fence is strong, we
know that W will propagate to every CPU and to RAM before Z executes.
And because of the hb links, we know that Z will execute before F.
Thus W, which comes later than E in the coherence order, will
propagate to every CPU and to RAM before F executes.

The case where E is a load is exactly the same, except that the first
link in the sequence is fre instead of coe.

The existence of a pb link from E to F implies that E must execute
before F.  To see why, suppose that F executed first.  Then W would
have propagated to E's CPU before E executed.  If E was a store, the
memory subsystem would then be forced to make E come after W in the
coherence order, contradicting the fact that E ->coe W.  If E was a
load, the memory subsystem would then be forced to satisfy E's read
request with the value stored by W or an even later store,
contradicting the fact that E ->fre W.

A good example illustrating how pb works is the SB pattern with strong
fences:

	int x = 0, y = 0;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		smp_mb();
		r0 = READ_ONCE(y);
	}

	P1()
	{
		int r1;

		WRITE_ONCE(y, 1);
		smp_mb();
		r1 = READ_ONCE(x);
	}

If r0 = 0 at the end then there is a pb link from P0's load to P1's
load: an fre link from P0's load to P1's store (which overwrites the
value read by P0), and a strong fence between P1's store and its load.
In this example, the sequences of cumul-fence and hb links are empty.
Note that this pb link is not included in hb as an instance of prop,
because it does not start and end on the same CPU.

Similarly, if r1 = 0 at the end then there is a pb link from P1's load
to P0's.  This means that if both r0 and r1 were 0 there would be a
cycle in pb, which is not possible since an instruction cannot execute
before itself.  Thus, adding smp_mb() fences to the SB pattern
prevents the r0 = 0, r1 = 0 outcome.

In summary, the fact that the pb relation links events in the order
they execute means that it cannot have cycles.  This requirement is
the content of the LKMM's "propagation" axiom.


RCU RELATIONS: rcu-link, rcu-gp, rcu-rscsi, rcu-order, rcu-fence, and rb
------------------------------------------------------------------------

RCU (Read-Copy-Update) is a powerful synchronization mechanism.  It
rests on two concepts: grace periods and read-side critical sections.

A grace period is the span of time occupied by a call to
synchronize_rcu().  A read-side critical section (or just critical
section, for short) is a region of code delimited by rcu_read_lock()
at the start and rcu_read_unlock() at the end.  Critical sections can
be nested, although we won't make use of this fact.

As far as memory models are concerned, RCU's main feature is its
Grace-Period Guarantee, which states that a critical section can never
span a full grace period.  In more detail, the Guarantee says:

	For any critical section C and any grace period G, at least
	one of the following statements must hold:

(1)	C ends before G does, and in addition, every store that
	propagates to C's CPU before the end of C must propagate to
	every CPU before G ends.

(2)	G starts before C does, and in addition, every store that
	propagates to G's CPU before the start of G must propagate
	to every CPU before C starts.

In particular, it is not possible for a critical section to both start
before and end after a grace period.

Here is a simple example of RCU in action:

	int x, y;

	P0()
	{
		rcu_read_lock();
		WRITE_ONCE(x, 1);
		WRITE_ONCE(y, 1);
		rcu_read_unlock();
	}

	P1()
	{
		int r1, r2;

		r1 = READ_ONCE(x);
		synchronize_rcu();
		r2 = READ_ONCE(y);
	}

The Grace Period Guarantee tells us that when this code runs, it will
never end with r1 = 1 and r2 = 0.  The reasoning is as follows.  r1 = 1
means that P0's store to x propagated to P1 before P1 called
synchronize_rcu(), so P0's critical section must have started before
P1's grace period, contrary to part (2) of the Guarantee.  On the
other hand, r2 = 0 means that P0's store to y, which occurs before the
end of the critical section, did not propagate to P1 before the end of
the grace period, contrary to part (1).  Together the results violate
the Guarantee.

In the kernel's implementations of RCU, the requirements for stores
to propagate to every CPU are fulfilled by placing strong fences at
suitable places in the RCU-related code.  Thus, if a critical section
starts before a grace period does then the critical section's CPU will
execute an smp_mb() fence after the end of the critical section and
some time before the grace period's synchronize_rcu() call returns.
And if a critical section ends after a grace period does then the
synchronize_rcu() routine will execute an smp_mb() fence at its start
and some time before the critical section's opening rcu_read_lock()
executes.

What exactly do we mean by saying that a critical section "starts
before" or "ends after" a grace period?  Some aspects of the meaning
are pretty obvious, as in the example above, but the details aren't
entirely clear.  The LKMM formalizes this notion by means of the
rcu-link relation.  rcu-link encompasses a very general notion of
"before": If E and F are RCU fence events (i.e., rcu_read_lock(),
rcu_read_unlock(), or synchronize_rcu()) then among other things,
E ->rcu-link F includes cases where E is po-before some memory-access
event X, F is po-after some memory-access event Y, and we have any of
X ->rfe Y, X ->co Y, or X ->fr Y.

The formal definition of the rcu-link relation is more than a little
obscure, and we won't give it here.  It is closely related to the pb
relation, and the details don't matter unless you want to comb through
a somewhat lengthy formal proof.  Pretty much all you need to know
about rcu-link is the information in the preceding paragraph.

The LKMM also defines the rcu-gp and rcu-rscsi relations.  They bring
grace periods and read-side critical sections into the picture, in the
following way:

	E ->rcu-gp F means that E and F are in fact the same event,
	and that event is a synchronize_rcu() fence (i.e., a grace
	period).

	E ->rcu-rscsi F means that E and F are the rcu_read_unlock()
	and rcu_read_lock() fence events delimiting some read-side
	critical section.  (The 'i' at the end of the name emphasizes
	that this relation is "inverted": It links the end of the
	critical section to the start.)

If we think of the rcu-link relation as standing for an extended
"before", then X ->rcu-gp Y ->rcu-link Z roughly says that X is a
grace period which ends before Z begins.  (In fact it covers more than
this, because it also includes cases where some store propagates to
Z's CPU before Z begins but doesn't propagate to some other CPU until
after X ends.)  Similarly, X ->rcu-rscsi Y ->rcu-link Z says that X is
the end of a critical section which starts before Z begins.

The LKMM goes on to define the rcu-order relation as a sequence of
rcu-gp and rcu-rscsi links separated by rcu-link links, in which the
number of rcu-gp links is >= the number of rcu-rscsi links.  For
example:

	X ->rcu-gp Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

would imply that X ->rcu-order V, because this sequence contains two
rcu-gp links and one rcu-rscsi link.  (It also implies that
X ->rcu-order T and Z ->rcu-order V.)  On the other hand:

	X ->rcu-rscsi Y ->rcu-link Z ->rcu-rscsi T ->rcu-link U ->rcu-gp V

does not imply X ->rcu-order V, because the sequence contains only
one rcu-gp link but two rcu-rscsi links.

The rcu-order relation is important because the Grace Period Guarantee
means that rcu-order links act kind of like strong fences.  In
particular, E ->rcu-order F implies not only that E begins before F
ends, but also that any write po-before E will propagate to every CPU
before any instruction po-after F can execute.  (However, it does not
imply that E must execute before F; in fact, each synchronize_rcu()
fence event is linked to itself by rcu-order as a degenerate case.)

To prove this in full generality requires some intellectual effort.
We'll consider just a very simple case:

	G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F.

This formula means that G and W are the same event (a grace period),
and there are events X, Y and a read-side critical section C such that:

	1. G = W is po-before or equal to X;

	2. X comes "before" Y in some sense (including rfe, co and fr);

	3. Y is po-before Z;

	4. Z is the rcu_read_unlock() event marking the end of C;

	5. F is the rcu_read_lock() event marking the start of C.

From 1 - 4 we deduce that the grace period G ends before the critical
section C.  Then part (2) of the Grace Period Guarantee says not only
that G starts before C does, but also that any write which executes on
G's CPU before G starts must propagate to every CPU before C starts.
In particular, the write propagates to every CPU before F finishes
executing and hence before any instruction po-after F can execute.
This sort of reasoning can be extended to handle all the situations
covered by rcu-order.

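A concrete instance of this simple case might look as follows (a
sketch, not in the original text, using the same conventions as the
earlier RCU example):

	int x, y;

	P0()
	{
		WRITE_ONCE(x, 1);
		synchronize_rcu();	/* G = W */
		WRITE_ONCE(y, 1);	/* X */
	}

	P1()
	{
		int r1, r2;

		rcu_read_lock();	/* F */
		r1 = READ_ONCE(y);	/* Y: X ->rfe Y when r1 = 1 */
		r2 = READ_ONCE(x);
		rcu_read_unlock();	/* Z */
	}

If r1 = 1 then G ->rcu-gp W ->rcu-link Z ->rcu-rscsi F holds, so the
store to x must propagate to every CPU before the critical section
starts, and therefore r2 must also be 1.
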
1675ddc82999SAlan SternThe rcu-fence relation is a simple extension of rcu-order.  While
1676ddc82999SAlan Sternrcu-order only links certain fence events (calls to synchronize_rcu(),
1677ddc82999SAlan Sternrcu_read_lock(), or rcu_read_unlock()), rcu-fence links any events
1678ddc82999SAlan Sternthat are separated by an rcu-order link.  This is analogous to the way
1679ddc82999SAlan Sternthe strong-fence relation links events that are separated by an
1680ddc82999SAlan Sternsmp_mb() fence event (as mentioned above, rcu-order links act kind of
1681ddc82999SAlan Sternlike strong fences).  Written symbolically, X ->rcu-fence Y means
1682ddc82999SAlan Sternthere are fence events E and F such that:
1683ddc82999SAlan Stern
1684ddc82999SAlan Stern	X ->po E ->rcu-order F ->po Y.
1685ddc82999SAlan Stern
1686ddc82999SAlan SternFrom the discussion above, we see this implies not only that X
1687ddc82999SAlan Sternexecutes before Y, but also (if X is a store) that X propagates to
1688ddc82999SAlan Sternevery CPU before Y executes.  Thus rcu-fence is sort of a
1689ddc82999SAlan Stern"super-strong" fence: Unlike the original strong fences (smp_mb() and
1690ddc82999SAlan Sternsynchronize_rcu()), rcu-fence is able to link events on different
1691ddc82999SAlan SternCPUs.  (Perhaps this fact should lead us to say that rcu-fence isn't
1692ddc82999SAlan Sternreally a fence at all!)
16939d036883SAlan Stern
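For instance, since every synchronize_rcu() fence event is linked to
itself by rcu-order (the degenerate case mentioned above), in a code
fragment like:

	X: WRITE_ONCE(a, 1);
	E: synchronize_rcu();
	Y: r1 = READ_ONCE(b);

we have X ->po E ->rcu-order E ->po Y and hence X ->rcu-fence Y: the
store to a must propagate to every CPU before the load of b executes,
exactly as we would expect from a strong fence.
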
16949d036883SAlan SternFinally, the LKMM defines the RCU-before (rb) relation in terms of
16959d036883SAlan Sternrcu-fence.  This is done in essentially the same way as the pb
16969d036883SAlan Sternrelation was defined in terms of strong-fence.  We will omit the
1697648e7175SAlan Sterndetails; the end result is that E ->rb F implies E must execute
1698648e7175SAlan Sternbefore F, just as E ->pb F does (and for much the same reasons).
16991c27b644SPaul E. McKenney
17001c27b644SPaul E. McKenneyPutting this all together, the LKMM expresses the Grace Period
17019d036883SAlan SternGuarantee by requiring that the rb relation does not contain a cycle.
1702648e7175SAlan SternEquivalently, this "rcu" axiom requires that there are no events E
1703ddc82999SAlan Sternand F with E ->rcu-link F ->rcu-order E.  Or to put it a third way,
1704648e7175SAlan Sternthe axiom requires that there are no cycles consisting of rcu-gp and
1705648e7175SAlan Sternrcu-rscsi alternating with rcu-link, where the number of rcu-gp links
1706648e7175SAlan Sternis >= the number of rcu-rscsi links.
17071c27b644SPaul E. McKenney
17089d036883SAlan SternJustifying the axiom isn't easy, but it is in fact a valid
17099d036883SAlan Sternformalization of the Grace Period Guarantee.  We won't attempt to go
17109d036883SAlan Sternthrough the detailed argument, but the following analysis gives a
1711648e7175SAlan Sterntaste of what is involved.  Suppose both parts of the Guarantee are
1712648e7175SAlan Sternviolated: A critical section starts before a grace period, and some
1713648e7175SAlan Sternstore propagates to the critical section's CPU before the end of the
1714648e7175SAlan Sterncritical section but doesn't propagate to some other CPU until after
1715648e7175SAlan Sternthe end of the grace period.
17161c27b644SPaul E. McKenney
17171c27b644SPaul E. McKenneyPutting symbols to these ideas, let L and U be the rcu_read_lock() and
17181c27b644SPaul E. McKenneyrcu_read_unlock() fence events delimiting the critical section in
17191c27b644SPaul E. McKenneyquestion, and let S be the synchronize_rcu() fence event for the grace
17201c27b644SPaul E. McKenneyperiod.  Saying that the critical section starts before S means there
1721648e7175SAlan Sternare events Q and R where Q is po-after L (which marks the start of the
1722648e7175SAlan Sterncritical section), Q is "before" R in the sense used by the rcu-link
1723648e7175SAlan Sternrelation, and R is po-before the grace period S.  Thus we have:
17241c27b644SPaul E. McKenney
1725648e7175SAlan Stern	L ->rcu-link S.
17261c27b644SPaul E. McKenney
1727648e7175SAlan SternLet W be the store mentioned above, let Y come before the end of the
17281c27b644SPaul E. McKenneycritical section and witness that W propagates to the critical
1729648e7175SAlan Sternsection's CPU by reading from W, and let Z on some arbitrary CPU be a
1730648e7175SAlan Sternwitness that W has not propagated to that CPU, where Z happens after
17311c27b644SPaul E. McKenneysome event X which is po-after S.  Symbolically, this amounts to:
17321c27b644SPaul E. McKenney
1733648e7175SAlan Stern	S ->po X ->hb* Z ->fr W ->rf Y ->po U.
17341c27b644SPaul E. McKenney
1735648e7175SAlan SternThe fr link from Z to W indicates that W has not propagated to Z's CPU
1736648e7175SAlan Sternat the time that Z executes.  From this, it can be shown (see the
1737648e7175SAlan Sterndiscussion of the rcu-link relation earlier) that S and U are related
1738648e7175SAlan Sternby rcu-link:
17391c27b644SPaul E. McKenney
1740648e7175SAlan Stern	S ->rcu-link U.
17411c27b644SPaul E. McKenney
1742648e7175SAlan SternSince S is a grace period we have S ->rcu-gp S, and since L and U are
1743648e7175SAlan Sternthe start and end of the critical section C we have U ->rcu-rscsi L.
1744648e7175SAlan SternFrom this we obtain:
17459d036883SAlan Stern
1746648e7175SAlan Stern	S ->rcu-gp S ->rcu-link U ->rcu-rscsi L ->rcu-link S,
17479d036883SAlan Stern
17489d036883SAlan Sterna forbidden cycle.  Thus the "rcu" axiom rules out this violation of
17499d036883SAlan Sternthe Grace Period Guarantee.
17501c27b644SPaul E. McKenney
17511c27b644SPaul E. McKenneyFor something a little more down-to-earth, let's see how the axiom
17521c27b644SPaul E. McKenneyworks out in practice.  Consider the RCU code example from above, this
1753648e7175SAlan Sterntime with statement labels added:
17541c27b644SPaul E. McKenney
17551c27b644SPaul E. McKenney	int x, y;
17561c27b644SPaul E. McKenney
17571c27b644SPaul E. McKenney	P0()
17581c27b644SPaul E. McKenney	{
1759648e7175SAlan Stern		L: rcu_read_lock();
1760648e7175SAlan Stern		X: WRITE_ONCE(x, 1);
1761648e7175SAlan Stern		Y: WRITE_ONCE(y, 1);
1762648e7175SAlan Stern		U: rcu_read_unlock();
17631c27b644SPaul E. McKenney	}
17641c27b644SPaul E. McKenney
17651c27b644SPaul E. McKenney	P1()
17661c27b644SPaul E. McKenney	{
17671c27b644SPaul E. McKenney		int r1, r2;
17681c27b644SPaul E. McKenney
1769648e7175SAlan Stern		Z: r1 = READ_ONCE(x);
1770648e7175SAlan Stern		S: synchronize_rcu();
1771648e7175SAlan Stern		W: r2 = READ_ONCE(y);
17721c27b644SPaul E. McKenney	}
17731c27b644SPaul E. McKenney
17741c27b644SPaul E. McKenney
1775648e7175SAlan SternIf r2 = 0 at the end then P0's store at Y overwrites the value that
1776648e7175SAlan SternP1's load at W reads from, so we have W ->fre Y.  Since S ->po W and
1777648e7175SAlan Sternalso Y ->po U, we get S ->rcu-link U.  In addition, S ->rcu-gp S
1778648e7175SAlan Sternbecause S is a grace period.
17791c27b644SPaul E. McKenney
1780648e7175SAlan SternIf r1 = 1 at the end then P1's load at Z reads from P0's store at X,
1781648e7175SAlan Sternso we have X ->rfe Z.  Together with L ->po X and Z ->po S, this
1782648e7175SAlan Sternyields L ->rcu-link S.  And since L and U are the start and end of a
1783648e7175SAlan Sterncritical section, we have U ->rcu-rscsi L.
17841c27b644SPaul E. McKenney
1785648e7175SAlan SternThen U ->rcu-rscsi L ->rcu-link S ->rcu-gp S ->rcu-link U is a
1786648e7175SAlan Sternforbidden cycle, violating the "rcu" axiom.  Hence the outcome is not
1787648e7175SAlan Sternallowed by the LKMM, as we would expect.
17881c27b644SPaul E. McKenney
17891c27b644SPaul E. McKenneyFor contrast, let's see what can happen in a more complicated example:
17901c27b644SPaul E. McKenney
17911c27b644SPaul E. McKenney	int x, y, z;
17921c27b644SPaul E. McKenney
17931c27b644SPaul E. McKenney	P0()
17941c27b644SPaul E. McKenney	{
17951c27b644SPaul E. McKenney		int r0;
17961c27b644SPaul E. McKenney
1797648e7175SAlan Stern		L0: rcu_read_lock();
1798648e7175SAlan Stern		    r0 = READ_ONCE(x);
1799648e7175SAlan Stern		    WRITE_ONCE(y, 1);
1800648e7175SAlan Stern		U0: rcu_read_unlock();
18011c27b644SPaul E. McKenney	}
18021c27b644SPaul E. McKenney
18031c27b644SPaul E. McKenney	P1()
18041c27b644SPaul E. McKenney	{
18051c27b644SPaul E. McKenney		int r1;
18061c27b644SPaul E. McKenney
1807648e7175SAlan Stern		    r1 = READ_ONCE(y);
1808648e7175SAlan Stern		S1: synchronize_rcu();
1809648e7175SAlan Stern		    WRITE_ONCE(z, 1);
18101c27b644SPaul E. McKenney	}
18111c27b644SPaul E. McKenney
18121c27b644SPaul E. McKenney	P2()
18131c27b644SPaul E. McKenney	{
18141c27b644SPaul E. McKenney		int r2;
18151c27b644SPaul E. McKenney
1816648e7175SAlan Stern		L2: rcu_read_lock();
1817648e7175SAlan Stern		    r2 = READ_ONCE(z);
1818648e7175SAlan Stern		    WRITE_ONCE(x, 1);
1819648e7175SAlan Stern		U2: rcu_read_unlock();
18201c27b644SPaul E. McKenney	}
18211c27b644SPaul E. McKenney
18221c27b644SPaul E. McKenneyIf r0 = r1 = r2 = 1 at the end, then similar reasoning to before shows
1823648e7175SAlan Sternthat U0 ->rcu-rscsi L0 ->rcu-link S1 ->rcu-gp S1 ->rcu-link U2 ->rcu-rscsi
1824648e7175SAlan SternL2 ->rcu-link U0.  However this cycle is not forbidden, because the
1825648e7175SAlan Sternsequence of relations contains fewer instances of rcu-gp (one) than of
1826648e7175SAlan Sternrcu-rscsi (two).  Consequently the outcome is allowed by the LKMM.
1827648e7175SAlan SternThe following instruction timing diagram shows how it might actually
1828648e7175SAlan Sternoccur:
18291c27b644SPaul E. McKenney
18301c27b644SPaul E. McKenneyP0			P1			P2
18311c27b644SPaul E. McKenney--------------------	--------------------	--------------------
18321c27b644SPaul E. McKenneyrcu_read_lock()
1833648e7175SAlan SternWRITE_ONCE(y, 1)
1834648e7175SAlan Stern			r1 = READ_ONCE(y)
18351c27b644SPaul E. McKenney			synchronize_rcu() starts
18361c27b644SPaul E. McKenney			.			rcu_read_lock()
1837648e7175SAlan Stern			.			WRITE_ONCE(x, 1)
1838648e7175SAlan Sternr0 = READ_ONCE(x)	.
18391c27b644SPaul E. McKenneyrcu_read_unlock()	.
18401c27b644SPaul E. McKenney			synchronize_rcu() ends
1841648e7175SAlan Stern			WRITE_ONCE(z, 1)
1842648e7175SAlan Stern						r2 = READ_ONCE(z)
18431c27b644SPaul E. McKenney						rcu_read_unlock()
18441c27b644SPaul E. McKenney
18451c27b644SPaul E. McKenneyThis requires P0 and P2 to execute their loads and stores out of
18461c27b644SPaul E. McKenneyprogram order, but of course they are allowed to do so.  And as you
18471c27b644SPaul E. McKenneycan see, the Grace Period Guarantee is not violated: The critical
18481c27b644SPaul E. McKenneysection in P0 both starts before P1's grace period does and ends
18491c27b644SPaul E. McKenneybefore it does, and the critical section in P2 both starts after P1's
18501c27b644SPaul E. McKenneygrace period does and ends after it does.
18511c27b644SPaul E. McKenney
1852*de041805SAlan SternThe LKMM supports SRCU (Sleepable Read-Copy-Update) in addition to
1853*de041805SAlan Sternnormal RCU.  The ideas involved are much the same as above, with new
1854*de041805SAlan Sternrelations srcu-gp and srcu-rscsi added to represent SRCU grace periods
1855*de041805SAlan Sternand read-side critical sections.  However, there are some significant
1856*de041805SAlan Sterndifferences between RCU read-side critical sections and their SRCU
1857*de041805SAlan Sterncounterparts, as described in the next section.
1858*de041805SAlan Stern
1859*de041805SAlan Stern
1860*de041805SAlan SternSRCU READ-SIDE CRITICAL SECTIONS
1861*de041805SAlan Stern--------------------------------
1862*de041805SAlan Stern
1863*de041805SAlan SternThe LKMM uses the srcu-rscsi relation to model SRCU read-side critical
1864*de041805SAlan Sternsections.  They differ from RCU read-side critical sections in the
1865*de041805SAlan Sternfollowing respects:
1866*de041805SAlan Stern
1867*de041805SAlan Stern1.	Unlike the analogous RCU primitives, synchronize_srcu(),
1868*de041805SAlan Stern	srcu_read_lock(), and srcu_read_unlock() take a pointer to a
1869*de041805SAlan Stern	struct srcu_struct as an argument.  This structure is called
1870*de041805SAlan Stern	an SRCU domain, and calls linked by srcu-rscsi must have the
1871*de041805SAlan Stern	same domain.  Read-side critical sections and grace periods
1872*de041805SAlan Stern	associated with different domains are independent of one
	another; the SRCU version of the RCU Guarantee applies only
	to pairs of critical sections and grace periods having the
	same domain (see the sketch following this list).
1876*de041805SAlan Stern
1877*de041805SAlan Stern2.	srcu_read_lock() returns a value, called the index, which must
1878*de041805SAlan Stern	be passed to the matching srcu_read_unlock() call.  Unlike
1879*de041805SAlan Stern	rcu_read_lock() and rcu_read_unlock(), an srcu_read_lock()
1880*de041805SAlan Stern	call does not always have to match the next unpaired
1881*de041805SAlan Stern	srcu_read_unlock().  In fact, it is possible for two SRCU
1882*de041805SAlan Stern	read-side critical sections to overlap partially, as in the
1883*de041805SAlan Stern	following example (where s is an srcu_struct and idx1 and idx2
1884*de041805SAlan Stern	are integer variables):
1885*de041805SAlan Stern
1886*de041805SAlan Stern		idx1 = srcu_read_lock(&s);	// Start of first RSCS
1887*de041805SAlan Stern		idx2 = srcu_read_lock(&s);	// Start of second RSCS
1888*de041805SAlan Stern		srcu_read_unlock(&s, idx1);	// End of first RSCS
1889*de041805SAlan Stern		srcu_read_unlock(&s, idx2);	// End of second RSCS
1890*de041805SAlan Stern
1891*de041805SAlan Stern	The matching is determined entirely by the domain pointer and
1892*de041805SAlan Stern	index value.  By contrast, if the calls had been
1893*de041805SAlan Stern	rcu_read_lock() and rcu_read_unlock() then they would have
1894*de041805SAlan Stern	created two nested (fully overlapping) read-side critical
1895*de041805SAlan Stern	sections: an inner one and an outer one.
1896*de041805SAlan Stern
1897*de041805SAlan Stern3.	The srcu_down_read() and srcu_up_read() primitives work
1898*de041805SAlan Stern	exactly like srcu_read_lock() and srcu_read_unlock(), except
1899*de041805SAlan Stern	that matching calls don't have to execute on the same CPU.
1900*de041805SAlan Stern	(The names are meant to be suggestive of operations on
1901*de041805SAlan Stern	semaphores.)  Since the matching is determined by the domain
1902*de041805SAlan Stern	pointer and index value, these primitives make it possible for
1903*de041805SAlan Stern	an SRCU read-side critical section to start on one CPU and end
1904*de041805SAlan Stern	on another, so to speak.
1905*de041805SAlan Stern
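As a sketch of the domain independence described in item 1, here is
an SRCU analog of the earlier two-process RCU test, with the critical
section and the grace period using two different domains:

	int x, y;
	struct srcu_struct s1, s2;

	P0()
	{
		int r0, idx;

		idx = srcu_read_lock(&s1);	// RSCS in domain s1
		r0 = READ_ONCE(x);
		WRITE_ONCE(y, 1);
		srcu_read_unlock(&s1, idx);
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(y);
		synchronize_srcu(&s2);		// Grace period in domain s2
		WRITE_ONCE(x, 1);
	}

If both calls had used the same domain, the outcome r0 = 1 && r1 = 1
would be forbidden by the same sort of cycle as in the RCU example.
With the two distinct domains, the grace period is not required to
wait for the critical section, and the outcome is allowed.
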
1906*de041805SAlan SternIn order to account for these properties of SRCU, the LKMM models
1907*de041805SAlan Sternsrcu_read_lock() as a special type of load event (which is
1908*de041805SAlan Sternappropriate, since it takes a memory location as argument and returns
1909*de041805SAlan Sterna value, just as a load does) and srcu_read_unlock() as a special type
1910*de041805SAlan Sternof store event (again appropriate, since it takes as arguments a
1911*de041805SAlan Sternmemory location and a value).  These loads and stores are annotated as
1912*de041805SAlan Sternbelonging to the "srcu-lock" and "srcu-unlock" event classes
1913*de041805SAlan Sternrespectively.
1914*de041805SAlan Stern
1915*de041805SAlan SternThis approach allows the LKMM to tell whether two events are
1916*de041805SAlan Sternassociated with the same SRCU domain, simply by checking whether they
1917*de041805SAlan Sternaccess the same memory location (i.e., they are linked by the loc
1918*de041805SAlan Sternrelation).  It also gives a way to tell which unlock matches a
1919*de041805SAlan Sternparticular lock, by checking for the presence of a data dependency
1920*de041805SAlan Sternfrom the load (srcu-lock) to the store (srcu-unlock).  For example,
1921*de041805SAlan Sterngiven the situation outlined earlier (with statement labels added):
1922*de041805SAlan Stern
1923*de041805SAlan Stern	A: idx1 = srcu_read_lock(&s);
1924*de041805SAlan Stern	B: idx2 = srcu_read_lock(&s);
1925*de041805SAlan Stern	C: srcu_read_unlock(&s, idx1);
1926*de041805SAlan Stern	D: srcu_read_unlock(&s, idx2);
1927*de041805SAlan Stern
1928*de041805SAlan Sternthe LKMM will treat A and B as loads from s yielding values saved in
1929*de041805SAlan Sternidx1 and idx2 respectively.  Similarly, it will treat C and D as
1930*de041805SAlan Sternthough they stored the values from idx1 and idx2 in s.  The end result
1931*de041805SAlan Sternis much as if we had written:
1932*de041805SAlan Stern
1933*de041805SAlan Stern	A: idx1 = READ_ONCE(s);
1934*de041805SAlan Stern	B: idx2 = READ_ONCE(s);
1935*de041805SAlan Stern	C: WRITE_ONCE(s, idx1);
1936*de041805SAlan Stern	D: WRITE_ONCE(s, idx2);
1937*de041805SAlan Stern
1938*de041805SAlan Sternexcept for the presence of the special srcu-lock and srcu-unlock
1939*de041805SAlan Sternannotations.  You can see at once that we have A ->data C and
1940*de041805SAlan SternB ->data D.  These dependencies tell the LKMM that C is the
1941*de041805SAlan Sternsrcu-unlock event matching srcu-lock event A, and D is the
1942*de041805SAlan Sternsrcu-unlock event matching srcu-lock event B.
1943*de041805SAlan Stern
1944*de041805SAlan SternThis approach is admittedly a hack, and it has the potential to lead
1945*de041805SAlan Sternto problems.  For example, in:
1946*de041805SAlan Stern
1947*de041805SAlan Stern	idx1 = srcu_read_lock(&s);
1948*de041805SAlan Stern	srcu_read_unlock(&s, idx1);
1949*de041805SAlan Stern	idx2 = srcu_read_lock(&s);
1950*de041805SAlan Stern	srcu_read_unlock(&s, idx2);
1951*de041805SAlan Stern
1952*de041805SAlan Sternthe LKMM will believe that idx2 must have the same value as idx1,
1953*de041805SAlan Sternsince it reads from the immediately preceding store of idx1 in s.
1954*de041805SAlan SternFortunately this won't matter, assuming that litmus tests never do
1955*de041805SAlan Sternanything with SRCU index values other than pass them to
1956*de041805SAlan Sternsrcu_read_unlock() or srcu_up_read() calls.
1957*de041805SAlan Stern
1958*de041805SAlan SternHowever, sometimes it is necessary to store an index value in a
1959*de041805SAlan Sternshared variable temporarily.  In fact, this is the only way for
1960*de041805SAlan Sternsrcu_down_read() to pass the index it gets to an srcu_up_read() call
on a different CPU.  In more detail, we might have something like:
1962*de041805SAlan Stern
1963*de041805SAlan Stern	struct srcu_struct s;
1964*de041805SAlan Stern	int x;
1965*de041805SAlan Stern
1966*de041805SAlan Stern	P0()
1967*de041805SAlan Stern	{
1968*de041805SAlan Stern		int r0;
1969*de041805SAlan Stern
1970*de041805SAlan Stern		A: r0 = srcu_down_read(&s);
1971*de041805SAlan Stern		B: WRITE_ONCE(x, r0);
1972*de041805SAlan Stern	}
1973*de041805SAlan Stern
1974*de041805SAlan Stern	P1()
1975*de041805SAlan Stern	{
1976*de041805SAlan Stern		int r1;
1977*de041805SAlan Stern
1978*de041805SAlan Stern		C: r1 = READ_ONCE(x);
1979*de041805SAlan Stern		D: srcu_up_read(&s, r1);
1980*de041805SAlan Stern	}
1981*de041805SAlan Stern
1982*de041805SAlan SternAssuming that P1 executes after P0 and does read the index value
1983*de041805SAlan Sternstored in x, we can write this (using brackets to represent event
1984*de041805SAlan Sternannotations) as:
1985*de041805SAlan Stern
1986*de041805SAlan Stern	A[srcu-lock] ->data B[once] ->rf C[once] ->data D[srcu-unlock].
1987*de041805SAlan Stern
1988*de041805SAlan SternThe LKMM defines a carry-srcu-data relation to express this pattern;
1989*de041805SAlan Sternit permits an arbitrarily long sequence of
1990*de041805SAlan Stern
1991*de041805SAlan Stern	data ; rf
1992*de041805SAlan Stern
1993*de041805SAlan Sternpairs (that is, a data link followed by an rf link) to occur between
1994*de041805SAlan Sternan srcu-lock event and the final data dependency leading to the
1995*de041805SAlan Sternmatching srcu-unlock event.  carry-srcu-data is complicated by the
1996*de041805SAlan Sternneed to ensure that none of the intermediate store events in this
1997*de041805SAlan Sternsequence are instances of srcu-unlock.  This is necessary because in a
1998*de041805SAlan Sternpattern like the one above:
1999*de041805SAlan Stern
2000*de041805SAlan Stern	A: idx1 = srcu_read_lock(&s);
2001*de041805SAlan Stern	B: srcu_read_unlock(&s, idx1);
2002*de041805SAlan Stern	C: idx2 = srcu_read_lock(&s);
2003*de041805SAlan Stern	D: srcu_read_unlock(&s, idx2);
2004*de041805SAlan Stern
2005*de041805SAlan Sternthe LKMM treats B as a store to the variable s and C as a load from
2006*de041805SAlan Sternthat variable, creating an undesirable rf link from B to C:
2007*de041805SAlan Stern
2008*de041805SAlan Stern	A ->data B ->rf C ->data D.
2009*de041805SAlan Stern
2010*de041805SAlan SternThis would cause carry-srcu-data to mistakenly extend a data
2011*de041805SAlan Sterndependency from A to D, giving the impression that D was the
2012*de041805SAlan Sternsrcu-unlock event matching A's srcu-lock.  To avoid such problems,
carry-srcu-data does not accept sequences in which the end of any of
the intermediate ->data links (B above) is an srcu-unlock event.
2015648e7175SAlan Stern
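In the relational notation used elsewhere in this document, the
definition amounts to roughly (a sketch of the actual .cat code):

	carry-srcu-data = (data ; [not srcu-unlock] ; rf)*

where the [not srcu-unlock] term expresses the restriction that the
intermediate store events must not belong to the srcu-unlock class.
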
20161c27b644SPaul E. McKenney
20176e89e831SAlan SternLOCKING
20186e89e831SAlan Stern-------
20196e89e831SAlan Stern
20206e89e831SAlan SternThe LKMM includes locking.  In fact, there is special code for locking
20216e89e831SAlan Sternin the formal model, added in order to make tools run faster.
20226e89e831SAlan SternHowever, this special code is intended to be more or less equivalent
20236e89e831SAlan Sternto concepts we have already covered.  A spinlock_t variable is treated
20246e89e831SAlan Sternthe same as an int, and spin_lock(&s) is treated almost the same as:
20256e89e831SAlan Stern
20266e89e831SAlan Stern	while (cmpxchg_acquire(&s, 0, 1) != 0)
20276e89e831SAlan Stern		cpu_relax();
20286e89e831SAlan Stern
20296e89e831SAlan SternThis waits until s is equal to 0 and then atomically sets it to 1,
20306e89e831SAlan Sternand the read part of the cmpxchg operation acts as an acquire fence.
20316e89e831SAlan SternAn alternate way to express the same thing would be:
20326e89e831SAlan Stern
20336e89e831SAlan Stern	r = xchg_acquire(&s, 1);
20346e89e831SAlan Stern
20356e89e831SAlan Sternalong with a requirement that at the end, r = 0.  Similarly,
20366e89e831SAlan Sternspin_trylock(&s) is treated almost the same as:
20376e89e831SAlan Stern
20386e89e831SAlan Stern	return !cmpxchg_acquire(&s, 0, 1);
20396e89e831SAlan Stern
20406e89e831SAlan Sternwhich atomically sets s to 1 if it is currently equal to 0 and returns
20416e89e831SAlan Sterntrue if it succeeds (the read part of the cmpxchg operation acts as an
20426e89e831SAlan Sternacquire fence only if the operation is successful).  spin_unlock(&s)
20436e89e831SAlan Sternis treated almost the same as:
20446e89e831SAlan Stern
20456e89e831SAlan Stern	smp_store_release(&s, 0);
20466e89e831SAlan Stern
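Putting these pieces together, a simple critical section such as:

	spin_lock(&s);
	WRITE_ONCE(x, 1);
	spin_unlock(&s);

is treated almost the same as the following sketch in terms of the
operations above:

	while (cmpxchg_acquire(&s, 0, 1) != 0)	// lock-acquire
		cpu_relax();
	WRITE_ONCE(x, 1);
	smp_store_release(&s, 0);		// lock-release
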
20476e89e831SAlan SternThe "almost" qualifiers above need some explanation.  In the LKMM, the
20486e89e831SAlan Sternstore-release in a spin_unlock() and the load-acquire which forms the
20496e89e831SAlan Sternfirst half of the atomic rmw update in a spin_lock() or a successful
20506e89e831SAlan Sternspin_trylock() -- we can call these things lock-releases and
20516e89e831SAlan Sternlock-acquires -- have two properties beyond those of ordinary releases
20526e89e831SAlan Sternand acquires.
20536e89e831SAlan Stern
2054ddfe1294SBoqun FengFirst, when a lock-acquire reads from or is po-after a lock-release,
2055ddfe1294SBoqun Fengthe LKMM requires that every instruction po-before the lock-release
2056ddfe1294SBoqun Fengmust execute before any instruction po-after the lock-acquire.  This
2057ddfe1294SBoqun Fengwould naturally hold if the release and acquire operations were on
2058ddfe1294SBoqun Fengdifferent CPUs and accessed the same lock variable, but the LKMM says
2059ddfe1294SBoqun Fengit also holds when they are on the same CPU, even if they access
2060ddfe1294SBoqun Fengdifferent lock variables.  For example:
20616e89e831SAlan Stern
20626e89e831SAlan Stern	int x, y;
2063ddfe1294SBoqun Feng	spinlock_t s, t;
20646e89e831SAlan Stern
20656e89e831SAlan Stern	P0()
20666e89e831SAlan Stern	{
20676e89e831SAlan Stern		int r1, r2;
20686e89e831SAlan Stern
20696e89e831SAlan Stern		spin_lock(&s);
20706e89e831SAlan Stern		r1 = READ_ONCE(x);
20716e89e831SAlan Stern		spin_unlock(&s);
2072ddfe1294SBoqun Feng		spin_lock(&t);
20736e89e831SAlan Stern		r2 = READ_ONCE(y);
2074ddfe1294SBoqun Feng		spin_unlock(&t);
20756e89e831SAlan Stern	}
20766e89e831SAlan Stern
20776e89e831SAlan Stern	P1()
20786e89e831SAlan Stern	{
20796e89e831SAlan Stern		WRITE_ONCE(y, 1);
20806e89e831SAlan Stern		smp_wmb();
20816e89e831SAlan Stern		WRITE_ONCE(x, 1);
20826e89e831SAlan Stern	}
20836e89e831SAlan Stern
2084ddfe1294SBoqun FengHere the second spin_lock() is po-after the first spin_unlock(), and
2085ddfe1294SBoqun Fengtherefore the load of x must execute before the load of y, even though
2086ddfe1294SBoqun Fengthe two locking operations use different locks.  Thus we cannot have
2087ddfe1294SBoqun Fengr1 = 1 and r2 = 0 at the end (this is an instance of the MP pattern).
20886e89e831SAlan Stern
20896e89e831SAlan SternThis requirement does not apply to ordinary release and acquire
20906e89e831SAlan Sternfences, only to lock-related operations.  For instance, suppose P0()
20916e89e831SAlan Sternin the example had been written as:
20926e89e831SAlan Stern
20936e89e831SAlan Stern	P0()
20946e89e831SAlan Stern	{
20956e89e831SAlan Stern		int r1, r2, r3;
20966e89e831SAlan Stern
20976e89e831SAlan Stern		r1 = READ_ONCE(x);
20986e89e831SAlan Stern		smp_store_release(&s, 1);
20996e89e831SAlan Stern		r3 = smp_load_acquire(&s);
21006e89e831SAlan Stern		r2 = READ_ONCE(y);
21016e89e831SAlan Stern	}
21026e89e831SAlan Stern
21036e89e831SAlan SternThen the CPU would be allowed to forward the s = 1 value from the
21046e89e831SAlan Sternsmp_store_release() to the smp_load_acquire(), executing the
21056e89e831SAlan Sterninstructions in the following order:
21066e89e831SAlan Stern
21076e89e831SAlan Stern		r3 = smp_load_acquire(&s);	// Obtains r3 = 1
21086e89e831SAlan Stern		r2 = READ_ONCE(y);
21096e89e831SAlan Stern		r1 = READ_ONCE(x);
21106e89e831SAlan Stern		smp_store_release(&s, 1);	// Value is forwarded
21116e89e831SAlan Stern
21126e89e831SAlan Sternand thus it could load y before x, obtaining r2 = 0 and r1 = 1.
21136e89e831SAlan Stern
2114ddfe1294SBoqun FengSecond, when a lock-acquire reads from or is po-after a lock-release,
2115ddfe1294SBoqun Fengand some other stores W and W' occur po-before the lock-release and
2116ddfe1294SBoqun Fengpo-after the lock-acquire respectively, the LKMM requires that W must
2117ddfe1294SBoqun Fengpropagate to each CPU before W' does.  For example, consider:
21186e89e831SAlan Stern
21196e89e831SAlan Stern	int x, y;
2120ddfe1294SBoqun Feng	spinlock_t s;
21216e89e831SAlan Stern
21226e89e831SAlan Stern	P0()
21236e89e831SAlan Stern	{
21246e89e831SAlan Stern		spin_lock(&s);
21256e89e831SAlan Stern		WRITE_ONCE(x, 1);
21266e89e831SAlan Stern		spin_unlock(&s);
21276e89e831SAlan Stern	}
21286e89e831SAlan Stern
21296e89e831SAlan Stern	P1()
21306e89e831SAlan Stern	{
21316e89e831SAlan Stern		int r1;
21326e89e831SAlan Stern
21336e89e831SAlan Stern		spin_lock(&s);
21346e89e831SAlan Stern		r1 = READ_ONCE(x);
21356e89e831SAlan Stern		WRITE_ONCE(y, 1);
21366e89e831SAlan Stern		spin_unlock(&s);
21376e89e831SAlan Stern	}
21386e89e831SAlan Stern
21396e89e831SAlan Stern	P2()
21406e89e831SAlan Stern	{
21416e89e831SAlan Stern		int r2, r3;
21426e89e831SAlan Stern
21436e89e831SAlan Stern		r2 = READ_ONCE(y);
21446e89e831SAlan Stern		smp_rmb();
21456e89e831SAlan Stern		r3 = READ_ONCE(x);
21466e89e831SAlan Stern	}
21476e89e831SAlan Stern
21486e89e831SAlan SternIf r1 = 1 at the end then the spin_lock() in P1 must have read from
21496e89e831SAlan Sternthe spin_unlock() in P0.  Hence the store to x must propagate to P2
2150ddfe1294SBoqun Fengbefore the store to y does, so we cannot have r2 = 1 and r3 = 0.  But
2151ddfe1294SBoqun Fengif P1 had used a lock variable different from s, the writes could have
2152ddfe1294SBoqun Fengpropagated in either order.  (On the other hand, if the code in P0 and
2153ddfe1294SBoqun FengP1 had all executed on a single CPU, as in the example before this
2154ddfe1294SBoqun Fengone, then the writes would have propagated in order even if the two
2155ddfe1294SBoqun Fengcritical sections used different lock variables.)
21566e89e831SAlan Stern
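To see the role played by the lock variable, consider a variant (a
sketch, with t declared alongside s) in which P1 uses a second lock:

	P1()
	{
		int r1;

		spin_lock(&t);
		r1 = READ_ONCE(x);
		WRITE_ONCE(y, 1);
		spin_unlock(&t);
	}

Here P1's spin_lock() never reads from P0's spin_unlock(), so the
special propagation requirement does not apply.  Even if r1 = 1 at
the end, the outcome r2 = 1 && r3 = 0 would be allowed.
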
21576e89e831SAlan SternThese two special requirements for lock-release and lock-acquire do
21586e89e831SAlan Sternnot arise from the operational model.  Nevertheless, kernel developers
21596e89e831SAlan Sternhave come to expect and rely on them because they do hold on all
21606e89e831SAlan Sternarchitectures supported by the Linux kernel, albeit for various
21616e89e831SAlan Sterndiffering reasons.
21626e89e831SAlan Stern
21636e89e831SAlan Stern
2164c58a8017SAlan SternPLAIN ACCESSES AND DATA RACES
2165c58a8017SAlan Stern-----------------------------
2166c58a8017SAlan Stern
2167c58a8017SAlan SternIn the LKMM, memory accesses such as READ_ONCE(x), atomic_inc(&y),
2168c58a8017SAlan Sternsmp_load_acquire(&z), and so on are collectively referred to as
2169c58a8017SAlan Stern"marked" accesses, because they are all annotated with special
2170c58a8017SAlan Sternoperations of one kind or another.  Ordinary C-language memory
2171c58a8017SAlan Sternaccesses such as x or y = 0 are simply called "plain" accesses.
2172c58a8017SAlan Stern
2173c58a8017SAlan SternEarly versions of the LKMM had nothing to say about plain accesses.
2174c58a8017SAlan SternThe C standard allows compilers to assume that the variables affected
2175c58a8017SAlan Sternby plain accesses are not concurrently read or written by any other
2176c58a8017SAlan Sternthreads or CPUs.  This leaves compilers free to implement all manner
2177c58a8017SAlan Sternof transformations or optimizations of code containing plain accesses,
2178c58a8017SAlan Sternmaking such code very difficult for a memory model to handle.
2179c58a8017SAlan Stern
2180c58a8017SAlan SternHere is just one example of a possible pitfall:
2181c58a8017SAlan Stern
2182c58a8017SAlan Stern	int a = 6;
2183c58a8017SAlan Stern	int *x = &a;
2184c58a8017SAlan Stern
2185c58a8017SAlan Stern	P0()
2186c58a8017SAlan Stern	{
2187c58a8017SAlan Stern		int *r1;
2188c58a8017SAlan Stern		int r2 = 0;
2189c58a8017SAlan Stern
2190c58a8017SAlan Stern		r1 = x;
2191c58a8017SAlan Stern		if (r1 != NULL)
2192c58a8017SAlan Stern			r2 = READ_ONCE(*r1);
2193c58a8017SAlan Stern	}
2194c58a8017SAlan Stern
2195c58a8017SAlan Stern	P1()
2196c58a8017SAlan Stern	{
2197c58a8017SAlan Stern		WRITE_ONCE(x, NULL);
2198c58a8017SAlan Stern	}
2199c58a8017SAlan Stern
2200c58a8017SAlan SternOn the face of it, one would expect that when this code runs, the only
2201c58a8017SAlan Sternpossible final values for r2 are 6 and 0, depending on whether or not
2202c58a8017SAlan SternP1's store to x propagates to P0 before P0's load from x executes.
2203c58a8017SAlan SternBut since P0's load from x is a plain access, the compiler may decide
2204c58a8017SAlan Sternto carry out the load twice (for the comparison against NULL, then again
2205c58a8017SAlan Sternfor the READ_ONCE()) and eliminate the temporary variable r1.  The
2206c58a8017SAlan Sternobject code generated for P0 could therefore end up looking rather
2207c58a8017SAlan Sternlike this:
2208c58a8017SAlan Stern
2209c58a8017SAlan Stern	P0()
2210c58a8017SAlan Stern	{
2211c58a8017SAlan Stern		int r2 = 0;
2212c58a8017SAlan Stern
2213c58a8017SAlan Stern		if (x != NULL)
2214c58a8017SAlan Stern			r2 = READ_ONCE(*x);
2215c58a8017SAlan Stern	}
2216c58a8017SAlan Stern
2217c58a8017SAlan SternAnd now it is obvious that this code runs the risk of dereferencing a
2218c58a8017SAlan SternNULL pointer, because P1's store to x might propagate to P0 after the
2219c58a8017SAlan Sterntest against NULL has been made but before the READ_ONCE() executes.
2220c58a8017SAlan SternIf the original code had said "r1 = READ_ONCE(x)" instead of "r1 = x",
2221c58a8017SAlan Sternthe compiler would not have performed this optimization and there
2222c58a8017SAlan Sternwould be no possibility of a NULL-pointer dereference.
2223c58a8017SAlan Stern
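For comparison, here is the safe version of P0 described above, with
the load of x marked:

	P0()
	{
		int *r1;
		int r2 = 0;

		r1 = READ_ONCE(x);	// Performed exactly once
		if (r1 != NULL)
			r2 = READ_ONCE(*r1);
	}
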
2224c58a8017SAlan SternGiven the possibility of transformations like this one, the LKMM
2225c58a8017SAlan Sterndoesn't try to predict all possible outcomes of code containing plain
2226c58a8017SAlan Sternaccesses.  It is instead content to determine whether the code
2227c58a8017SAlan Sternviolates the compiler's assumptions, which would render the ultimate
2228c58a8017SAlan Sternoutcome undefined.
2229c58a8017SAlan Stern
2230c58a8017SAlan SternIn technical terms, the compiler is allowed to assume that when the
2231c58a8017SAlan Sternprogram executes, there will not be any data races.  A "data race"
2232c1b14609SMarco Elveroccurs when there are two memory accesses such that:
2233c58a8017SAlan Stern
2234c1b14609SMarco Elver1.	they access the same location,
2235c58a8017SAlan Stern
2236c1b14609SMarco Elver2.	at least one of them is a store,
2237c58a8017SAlan Stern
2238c1b14609SMarco Elver3.	at least one of them is plain,
2239c58a8017SAlan Stern
2240c1b14609SMarco Elver4.	they occur on different CPUs (or in different threads on the
2241c1b14609SMarco Elver	same CPU), and
2242c58a8017SAlan Stern
2243c1b14609SMarco Elver5.	they execute concurrently.
2244c58a8017SAlan Stern
2245c1b14609SMarco ElverIn the literature, two accesses are said to "conflict" if they satisfy
2246c1b14609SMarco Elver1 and 2 above.  We'll go a little farther and say that two accesses
2247c1b14609SMarco Elverare "race candidates" if they satisfy 1 - 4.  Thus, whether or not two
2248c1b14609SMarco Elverrace candidates actually do race in a given execution depends on
2249c1b14609SMarco Elverwhether they are concurrent.
2250c1b14609SMarco Elver
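As a minimal example satisfying all five conditions:

	int x;

	P0()
	{
		x = 1;			// Plain store
	}

	P1()
	{
		int r1;

		r1 = READ_ONCE(x);	// Marked load
	}

The two accesses are to the same location, one of them is a store,
one of them is plain, they occur on different CPUs, and nothing
prevents them from executing concurrently; hence this little test
contains a data race.
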
2251c1b14609SMarco ElverThe LKMM tries to determine whether a program contains race candidates
2252c1b14609SMarco Elverwhich may execute concurrently; if it does then the LKMM says there is
2253c1b14609SMarco Elvera potential data race and makes no predictions about the program's
2254c1b14609SMarco Elveroutcome.
2255c1b14609SMarco Elver
2256c1b14609SMarco ElverDetermining whether two accesses are race candidates is easy; you can
2257c1b14609SMarco Elversee that all the concepts involved in the definition above are already
2258c1b14609SMarco Elverpart of the memory model.  The hard part is telling whether they may
2259c1b14609SMarco Elverexecute concurrently.  The LKMM takes a conservative attitude,
2260c1b14609SMarco Elverassuming that accesses may be concurrent unless it can prove they
2261c1b14609SMarco Elverare not.
2262c58a8017SAlan Stern
2263c58a8017SAlan SternIf two memory accesses aren't concurrent then one must execute before
2264c58a8017SAlan Sternthe other.  Therefore the LKMM decides two accesses aren't concurrent
2265c58a8017SAlan Sternif they can be connected by a sequence of hb, pb, and rb links
2266c58a8017SAlan Stern(together referred to as xb, for "executes before").  However, there
2267c58a8017SAlan Sternare two complicating factors.
2268c58a8017SAlan Stern
2269c58a8017SAlan SternIf X is a load and X executes before a store Y, then indeed there is
2270c58a8017SAlan Sternno danger of X and Y being concurrent.  After all, Y can't have any
2271c58a8017SAlan Sterneffect on the value obtained by X until the memory subsystem has
2272c58a8017SAlan Sternpropagated Y from its own CPU to X's CPU, which won't happen until
2273c58a8017SAlan Sternsome time after Y executes and thus after X executes.  But if X is a
2274c58a8017SAlan Sternstore, then even if X executes before Y it is still possible that X
2275c58a8017SAlan Sternwill propagate to Y's CPU just as Y is executing.  In such a case X
2276c58a8017SAlan Sterncould very well interfere somehow with Y, and we would have to
2277c58a8017SAlan Sternconsider X and Y to be concurrent.
2278c58a8017SAlan Stern
2279c58a8017SAlan SternTherefore when X is a store, for X and Y to be non-concurrent the LKMM
2280c58a8017SAlan Sternrequires not only that X must execute before Y but also that X must
2281c58a8017SAlan Sternpropagate to Y's CPU before Y executes.  (Or vice versa, of course, if
2282c58a8017SAlan SternY executes before X -- then Y must propagate to X's CPU before X
2283c58a8017SAlan Sternexecutes if Y is a store.)  This is expressed by the visibility
2284c58a8017SAlan Sternrelation (vis), where X ->vis Y is defined to hold if there is an
2285c58a8017SAlan Sternintermediate event Z such that:
2286c58a8017SAlan Stern
2287c58a8017SAlan Stern	X is connected to Z by a possibly empty sequence of
2288c58a8017SAlan Stern	cumul-fence links followed by an optional rfe link (if none of
2289c58a8017SAlan Stern	these links are present, X and Z are the same event),
2290c58a8017SAlan Stern
2291c58a8017SAlan Sternand either:
2292c58a8017SAlan Stern
2293c58a8017SAlan Stern	Z is connected to Y by a strong-fence link followed by a
2294c58a8017SAlan Stern	possibly empty sequence of xb links,
2295c58a8017SAlan Stern
2296c58a8017SAlan Sternor:
2297c58a8017SAlan Stern
2298c58a8017SAlan Stern	Z is on the same CPU as Y and is connected to Y by a possibly
2299c58a8017SAlan Stern	empty sequence of xb links (again, if the sequence is empty it
2300c58a8017SAlan Stern	means Z and Y are the same event).
2301c58a8017SAlan Stern
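In relational notation this definition reads roughly as follows (a
sketch; the actual .cat code also restricts some of the intermediate
events to marked accesses):

	vis = cumul-fence* ; rfe? ; ((strong-fence ; xb*) | (xb* & int))

where int relates events on the same CPU and & denotes relational
intersection.
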
2302c58a8017SAlan SternThe motivations behind this definition are straightforward:
2303c58a8017SAlan Stern
2304c58a8017SAlan Stern	cumul-fence memory barriers force stores that are po-before
2305c58a8017SAlan Stern	the barrier to propagate to other CPUs before stores that are
2306c58a8017SAlan Stern	po-after the barrier.
2307c58a8017SAlan Stern
2308c58a8017SAlan Stern	An rfe link from an event W to an event R says that R reads
2309c58a8017SAlan Stern	from W, which certainly means that W must have propagated to
2310c58a8017SAlan Stern	R's CPU before R executed.
2311c58a8017SAlan Stern
2312c58a8017SAlan Stern	strong-fence memory barriers force stores that are po-before
2313c58a8017SAlan Stern	the barrier, or that propagate to the barrier's CPU before the
2314c58a8017SAlan Stern	barrier executes, to propagate to all CPUs before any events
2315c58a8017SAlan Stern	po-after the barrier can execute.
2316c58a8017SAlan Stern
2317c58a8017SAlan SternTo see how this works out in practice, consider our old friend, the MP
2318c58a8017SAlan Sternpattern (with fences and statement labels, but without the conditional
2319c58a8017SAlan Sterntest):
2320c58a8017SAlan Stern
2321c58a8017SAlan Stern	int buf = 0, flag = 0;
2322c58a8017SAlan Stern
2323c58a8017SAlan Stern	P0()
2324c58a8017SAlan Stern	{
2325c58a8017SAlan Stern		X: WRITE_ONCE(buf, 1);
2326c58a8017SAlan Stern		   smp_wmb();
2327c58a8017SAlan Stern		W: WRITE_ONCE(flag, 1);
2328c58a8017SAlan Stern	}
2329c58a8017SAlan Stern
2330c58a8017SAlan Stern	P1()
2331c58a8017SAlan Stern	{
2332c58a8017SAlan Stern		int r1;
2333c58a8017SAlan Stern		int r2 = 0;
2334c58a8017SAlan Stern
2335c58a8017SAlan Stern		Z: r1 = READ_ONCE(flag);
2336c58a8017SAlan Stern		   smp_rmb();
2337c58a8017SAlan Stern		Y: r2 = READ_ONCE(buf);
2338c58a8017SAlan Stern	}
2339c58a8017SAlan Stern
2340c58a8017SAlan SternThe smp_wmb() memory barrier gives a cumul-fence link from X to W, and
2341c58a8017SAlan Sternassuming r1 = 1 at the end, there is an rfe link from W to Z.  This
2342c58a8017SAlan Sternmeans that the store to buf must propagate from P0 to P1 before Z
2343c58a8017SAlan Sternexecutes.  Next, Z and Y are on the same CPU and the smp_rmb() fence
2344c58a8017SAlan Sternprovides an xb link from Z to Y (i.e., it forces Z to execute before
2345c58a8017SAlan SternY).  Therefore we have X ->vis Y: X must propagate to Y's CPU before Y
2346c58a8017SAlan Sternexecutes.
2347c58a8017SAlan Stern
2348c58a8017SAlan SternThe second complicating factor mentioned above arises from the fact
2349c58a8017SAlan Sternthat when we are considering data races, some of the memory accesses
2350c58a8017SAlan Sternare plain.  Now, although we have not said so explicitly, up to this
2351c58a8017SAlan Sternpoint most of the relations defined by the LKMM (ppo, hb, prop,
2352c58a8017SAlan Sterncumul-fence, pb, and so on -- including vis) apply only to marked
2353c58a8017SAlan Sternaccesses.
2354c58a8017SAlan Stern
2355c58a8017SAlan SternThere are good reasons for this restriction.  The compiler is not
2356c58a8017SAlan Sternallowed to apply fancy transformations to marked accesses, and
2357c58a8017SAlan Sternconsequently each such access in the source code corresponds more or
2358c58a8017SAlan Sternless directly to a single machine instruction in the object code.  But
2359c58a8017SAlan Sternplain accesses are a different story; the compiler may combine them,
2360c58a8017SAlan Sternsplit them up, duplicate them, eliminate them, invent new ones, and
2361c58a8017SAlan Sternwho knows what else.  Seeing a plain access in the source code tells
2362c58a8017SAlan Sternyou almost nothing about what machine instructions will end up in the
2363c58a8017SAlan Sternobject code.
2364c58a8017SAlan Stern
2365c58a8017SAlan SternFortunately, the compiler isn't completely free; it is subject to some
2366c58a8017SAlan Sternlimitations.  For one, it is not allowed to introduce a data race into
2367c58a8017SAlan Sternthe object code if the source code does not already contain a data
2368c58a8017SAlan Sternrace (if it could, memory models would be useless and no multithreaded
2369c58a8017SAlan Sterncode would be safe!).  For another, it cannot move a plain access past
2370c58a8017SAlan Sterna compiler barrier.
2371c58a8017SAlan Stern
2372c58a8017SAlan SternA compiler barrier is a kind of fence, but as the name implies, it
2373c58a8017SAlan Sternonly affects the compiler; it does not necessarily have any effect on
2374c58a8017SAlan Sternhow instructions are executed by the CPU.  In Linux kernel source
2375c58a8017SAlan Sterncode, the barrier() function is a compiler barrier.  It doesn't give
2376c58a8017SAlan Sternrise directly to any machine instructions in the object code; rather,
2377c58a8017SAlan Sternit affects how the compiler generates the rest of the object code.
2378c58a8017SAlan SternGiven source code like this:
2379c58a8017SAlan Stern
2380c58a8017SAlan Stern	... some memory accesses ...
2381c58a8017SAlan Stern	barrier();
2382c58a8017SAlan Stern	... some other memory accesses ...
2383c58a8017SAlan Stern
2384c58a8017SAlan Sternthe barrier() function ensures that the machine instructions
2385c58a8017SAlan Sterncorresponding to the first group of accesses will all end po-before
2386c58a8017SAlan Sternany machine instructions corresponding to the second group of accesses
2387c58a8017SAlan Stern-- even if some of the accesses are plain.  (Of course, the CPU may
2388c58a8017SAlan Sternthen execute some of those accesses out of program order, but we
2389c58a8017SAlan Sternalready know how to deal with such issues.)  Without the barrier()
2390c58a8017SAlan Sternthere would be no such guarantee; the two groups of accesses could be
2391c58a8017SAlan Sternintermingled or even reversed in the object code.
2392c58a8017SAlan Stern
2393c58a8017SAlan SternThe LKMM doesn't say much about the barrier() function, but it does
2394c58a8017SAlan Sternrequire that all fences are also compiler barriers.  In addition, it
2395c58a8017SAlan Sternrequires that the ordering properties of memory barriers such as
2396c58a8017SAlan Sternsmp_rmb() or smp_store_release() apply to plain accesses as well as to
2397c58a8017SAlan Sternmarked accesses.
2398c58a8017SAlan Stern
2399c58a8017SAlan SternThis is the key to analyzing data races.  Consider the MP pattern
2400c58a8017SAlan Sternagain, now using plain accesses for buf:
2401c58a8017SAlan Stern
2402c58a8017SAlan Stern	int buf = 0, flag = 0;
2403c58a8017SAlan Stern
2404c58a8017SAlan Stern	P0()
2405c58a8017SAlan Stern	{
2406c58a8017SAlan Stern		U: buf = 1;
2407c58a8017SAlan Stern		   smp_wmb();
2408c58a8017SAlan Stern		X: WRITE_ONCE(flag, 1);
2409c58a8017SAlan Stern	}
2410c58a8017SAlan Stern
2411c58a8017SAlan Stern	P1()
2412c58a8017SAlan Stern	{
2413c58a8017SAlan Stern		int r1;
2414c58a8017SAlan Stern		int r2 = 0;
2415c58a8017SAlan Stern
2416c58a8017SAlan Stern		Y: r1 = READ_ONCE(flag);
2417c58a8017SAlan Stern		   if (r1) {
2418c58a8017SAlan Stern			   smp_rmb();
2419c58a8017SAlan Stern			V: r2 = buf;
2420c58a8017SAlan Stern		   }
2421c58a8017SAlan Stern	}
2422c58a8017SAlan Stern
2423c58a8017SAlan SternThis program does not contain a data race.  Although the U and V
2424c1b14609SMarco Elveraccesses are race candidates, the LKMM can prove they are not
2425c1b14609SMarco Elverconcurrent as follows:
2426c58a8017SAlan Stern
2427c58a8017SAlan Stern	The smp_wmb() fence in P0 is both a compiler barrier and a
2428c58a8017SAlan Stern	cumul-fence.  It guarantees that no matter what hash of
2429c58a8017SAlan Stern	machine instructions the compiler generates for the plain
2430c58a8017SAlan Stern	access U, all those instructions will be po-before the fence.
2431c58a8017SAlan Stern	Consequently U's store to buf, no matter how it is carried out
2432c58a8017SAlan Stern	at the machine level, must propagate to P1 before X's store to
2433c58a8017SAlan Stern	flag does.
2434c58a8017SAlan Stern
2435c58a8017SAlan Stern	X and Y are both marked accesses.  Hence an rfe link from X to
2436c58a8017SAlan Stern	Y is a valid indicator that X propagated to P1 before Y
2437c58a8017SAlan Stern	executed, i.e., X ->vis Y.  (And if there is no rfe link then
2438c58a8017SAlan Stern	r1 will be 0, so V will not be executed and ipso facto won't
2439c58a8017SAlan Stern	race with U.)
2440c58a8017SAlan Stern
2441c58a8017SAlan Stern	The smp_rmb() fence in P1 is a compiler barrier as well as a
2442c58a8017SAlan Stern	fence.  It guarantees that all the machine-level instructions
2443c58a8017SAlan Stern	corresponding to the access V will be po-after the fence, and
2444c58a8017SAlan Stern	therefore any loads among those instructions will execute
2445c58a8017SAlan Stern	after the fence does and hence after Y does.
2446c58a8017SAlan Stern
2447c58a8017SAlan SternThus U's store to buf is forced to propagate to P1 before V's load
2448c58a8017SAlan Sternexecutes (assuming V does execute), ruling out the possibility of a
2449c58a8017SAlan Sterndata race between them.
2450c58a8017SAlan Stern
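By way of contrast, consider a sketch of the same test with P1's
smp_rmb() omitted:

	P1()
	{
		int r1;
		int r2 = 0;

		Y: r1 = READ_ONCE(flag);
		   if (r1) {
			V: r2 = buf;	// No fence before the plain load
		   }
	}

Now nothing pre-bounds V: the conditional provides only a control
dependency from Y, not a fence or an address dependency, so the LKMM
would consider U and V to be potentially concurrent and would report
a data race.
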
2451c58a8017SAlan SternThis analysis illustrates how the LKMM deals with plain accesses in
2452c58a8017SAlan Sterngeneral.  Suppose R is a plain load and we want to show that R
2453c58a8017SAlan Sternexecutes before some marked access E.  We can do this by finding a
2454c58a8017SAlan Sternmarked access X such that R and X are ordered by a suitable fence and
2455c58a8017SAlan SternX ->xb* E.  If E was also a plain access, we would also look for a
2456c58a8017SAlan Sternmarked access Y such that X ->xb* Y, and Y and E are ordered by a
2457c58a8017SAlan Sternfence.  We describe this arrangement by saying that R is
2458c58a8017SAlan Stern"post-bounded" by X and E is "pre-bounded" by Y.
2459c58a8017SAlan Stern
2460c58a8017SAlan SternIn fact, we go one step further: Since R is a read, we say that R is
2461c58a8017SAlan Stern"r-post-bounded" by X.  Similarly, E would be "r-pre-bounded" or
2462c58a8017SAlan Stern"w-pre-bounded" by Y, depending on whether E was a store or a load.
2463c58a8017SAlan SternThis distinction is needed because some fences affect only loads
2464c58a8017SAlan Stern(i.e., smp_rmb()) and some affect only stores (smp_wmb()); otherwise
2465c58a8017SAlan Sternthe two types of bounds are the same.  And as a degenerate case, we
2466c58a8017SAlan Sternsay that a marked access pre-bounds and post-bounds itself (e.g., if R
above were a marked load then X could simply be taken to be R itself).
2468c58a8017SAlan Stern
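For instance, in a thread-body sketch like:

	R: r1 = buf;			// Plain load
	   smp_rmb();
	X: r2 = READ_ONCE(flag);	// Marked load

the smp_rmb() orders the two loads, so R is r-post-bounded by X.  If
the order of R and X were reversed (with the fence still in between),
R would instead be r-pre-bounded by X.
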
2469c58a8017SAlan SternThe need to distinguish between r- and w-bounding raises yet another
2470c58a8017SAlan Sternissue.  When the source code contains a plain store, the compiler is
2471c58a8017SAlan Sternallowed to put plain loads of the same location into the object code.
2472c58a8017SAlan SternFor example, given the source code:
2473c58a8017SAlan Stern
2474c58a8017SAlan Stern	x = 1;
2475c58a8017SAlan Stern
2476c58a8017SAlan Sternthe compiler is theoretically allowed to generate object code that
2477c58a8017SAlan Sternlooks like:
2478c58a8017SAlan Stern
2479c58a8017SAlan Stern	if (x != 1)
2480c58a8017SAlan Stern		x = 1;
2481c58a8017SAlan Stern
2482c58a8017SAlan Sternthereby adding a load (and possibly replacing the store entirely).
2483c58a8017SAlan SternFor this reason, whenever the LKMM requires a plain store to be
2484c58a8017SAlan Sternw-pre-bounded or w-post-bounded by a marked access, it also requires
2485c58a8017SAlan Sternthe store to be r-pre-bounded or r-post-bounded, so as to handle cases
2486c58a8017SAlan Sternwhere the compiler adds a load.
2487c58a8017SAlan Stern
2488c58a8017SAlan Stern(This may be overly cautious.  We don't know of any examples where a
2489c58a8017SAlan Sterncompiler has augmented a store with a load in this fashion, and the
2490c58a8017SAlan SternLinux kernel developers would probably fight pretty hard to change a
2491c58a8017SAlan Sterncompiler if it ever did this.  Still, better safe than sorry.)
2492c58a8017SAlan Stern
Incidentally, the other transformation -- augmenting a plain load by
2494c58a8017SAlan Sternadding in a store to the same location -- is not allowed.  This is
2495c58a8017SAlan Sternbecause the compiler cannot know whether any other CPUs might perform
2496c58a8017SAlan Sterna concurrent load from that location.  Two concurrent loads don't
2497c58a8017SAlan Sternconstitute a race (they can't interfere with each other), but a store
2498c58a8017SAlan Sterndoes race with a concurrent load.  Thus adding a store might create a
2499c58a8017SAlan Sterndata race where one was not already present in the source code,
2500c58a8017SAlan Sternsomething the compiler is forbidden to do.  Augmenting a store with a
2501c58a8017SAlan Sternload, on the other hand, is acceptable because doing so won't create a
2502c58a8017SAlan Sterndata race unless one already existed.
2503c58a8017SAlan Stern
2504c58a8017SAlan SternThe LKMM includes a second way to pre-bound plain accesses, in
2505c58a8017SAlan Sternaddition to fences: an address dependency from a marked load.  That
2506c58a8017SAlan Sternis, in the sequence:
2507c58a8017SAlan Stern
2508c58a8017SAlan Stern	p = READ_ONCE(ptr);
2509c58a8017SAlan Stern	r = *p;
2510c58a8017SAlan Stern
2511c58a8017SAlan Sternthe LKMM says that the marked load of ptr pre-bounds the plain load of
2512c58a8017SAlan Stern*p; the marked load must execute before any of the machine
2513c58a8017SAlan Sterninstructions corresponding to the plain load.  This is a reasonable
2514c58a8017SAlan Sternstipulation, since after all, the CPU can't perform the load of *p
2515c58a8017SAlan Sternuntil it knows what value p will hold.  Furthermore, without some
2516c58a8017SAlan Sternassumption like this one, some usages typical of RCU would count as
2517c58a8017SAlan Sterndata races.  For example:
2518c58a8017SAlan Stern
2519c58a8017SAlan Stern	int a = 1, b;
2520c58a8017SAlan Stern	int *ptr = &a;
2521c58a8017SAlan Stern
2522c58a8017SAlan Stern	P0()
2523c58a8017SAlan Stern	{
2524c58a8017SAlan Stern		b = 2;
2525c58a8017SAlan Stern		rcu_assign_pointer(ptr, &b);
2526c58a8017SAlan Stern	}
2527c58a8017SAlan Stern
2528c58a8017SAlan Stern	P1()
2529c58a8017SAlan Stern	{
2530c58a8017SAlan Stern		int *p;
2531c58a8017SAlan Stern		int r;
2532c58a8017SAlan Stern
2533c58a8017SAlan Stern		rcu_read_lock();
2534c58a8017SAlan Stern		p = rcu_dereference(ptr);
2535c58a8017SAlan Stern		r = *p;
2536c58a8017SAlan Stern		rcu_read_unlock();
2537c58a8017SAlan Stern	}
2538c58a8017SAlan Stern
2539c58a8017SAlan Stern(In this example the rcu_read_lock() and rcu_read_unlock() calls don't
2540c58a8017SAlan Sternreally do anything, because there aren't any grace periods.  They are
2541c58a8017SAlan Sternincluded merely for the sake of good form; typically P0 would call
2542c58a8017SAlan Sternsynchronize_rcu() somewhere after the rcu_assign_pointer().)
2543c58a8017SAlan Stern
2544c58a8017SAlan Sternrcu_assign_pointer() performs a store-release, so the plain store to b
2545c58a8017SAlan Sternis definitely w-post-bounded before the store to ptr, and the two
2546c58a8017SAlan Sternstores will propagate to P1 in that order.  However, rcu_dereference()
2547c58a8017SAlan Sternis only equivalent to READ_ONCE().  While it is a marked access, it is
2548c58a8017SAlan Sternnot a fence or compiler barrier.  Hence the only guarantee we have
2549c58a8017SAlan Sternthat the load of ptr in P1 is r-pre-bounded before the load of *p
2550c58a8017SAlan Stern(thus avoiding a race) is the assumption about address dependencies.
2551c58a8017SAlan Stern
2552c58a8017SAlan SternThis is a situation where the compiler can undermine the memory model,
2553c58a8017SAlan Sternand a certain amount of care is required when programming constructs
2554c58a8017SAlan Sternlike this one.  In particular, comparisons between the pointer and
2555c58a8017SAlan Sternother known addresses can cause trouble.  If you have something like:
2556c58a8017SAlan Stern
2557c58a8017SAlan Stern	p = rcu_dereference(ptr);
2558c58a8017SAlan Stern	if (p == &x)
2559c58a8017SAlan Stern		r = *p;
2560c58a8017SAlan Stern
2561c58a8017SAlan Sternthen the compiler just might generate object code resembling:
2562c58a8017SAlan Stern
2563c58a8017SAlan Stern	p = rcu_dereference(ptr);
2564c58a8017SAlan Stern	if (p == &x)
2565c58a8017SAlan Stern		r = x;
2566c58a8017SAlan Stern
2567c58a8017SAlan Sternor even:
2568c58a8017SAlan Stern
2569c58a8017SAlan Stern	rtemp = x;
2570c58a8017SAlan Stern	p = rcu_dereference(ptr);
2571c58a8017SAlan Stern	if (p == &x)
2572c58a8017SAlan Stern		r = rtemp;
2573c58a8017SAlan Stern
2574c58a8017SAlan Sternwhich would invalidate the memory model's assumption, since the CPU
2575c58a8017SAlan Sterncould now perform the load of x before the load of ptr (there might be
2576c58a8017SAlan Sterna control dependency but no address dependency at the machine level).
2577c58a8017SAlan Stern
2578c58a8017SAlan SternFinally, it turns out there is a situation in which a plain write does
2579c1b14609SMarco Elvernot need to be w-post-bounded: when it is separated from the other
2580c1b14609SMarco Elverrace-candidate access by a fence.  At first glance this may seem
2581c1b14609SMarco Elverimpossible.  After all, to be race candidates the two accesses must
2582c1b14609SMarco Elverbe on different CPUs, and fences don't link events on different CPUs.
2583c1b14609SMarco ElverWell, normal fences don't -- but rcu-fence can!  Here's an example:
2584c58a8017SAlan Stern
2585c58a8017SAlan Stern	int x, y;
2586c58a8017SAlan Stern
2587c58a8017SAlan Stern	P0()
2588c58a8017SAlan Stern	{
2589c58a8017SAlan Stern		WRITE_ONCE(x, 1);
2590c58a8017SAlan Stern		synchronize_rcu();
2591c58a8017SAlan Stern		y = 3;
2592c58a8017SAlan Stern	}
2593c58a8017SAlan Stern
2594c58a8017SAlan Stern	P1()
2595c58a8017SAlan Stern	{
2596c58a8017SAlan Stern		rcu_read_lock();
2597c58a8017SAlan Stern		if (READ_ONCE(x) == 0)
2598c58a8017SAlan Stern			y = 2;
2599c58a8017SAlan Stern		rcu_read_unlock();
2600c58a8017SAlan Stern	}
2601c58a8017SAlan Stern
2602c58a8017SAlan SternDo the plain stores to y race?  Clearly not if P1 reads a non-zero
2603c58a8017SAlan Sternvalue for x, so let's assume the READ_ONCE(x) does obtain 0.  This
2604c58a8017SAlan Sternmeans that the read-side critical section in P1 must finish executing
2605c58a8017SAlan Sternbefore the grace period in P0 does, because RCU's Grace-Period
2606c58a8017SAlan SternGuarantee says that otherwise P0's store to x would have propagated to
2607c58a8017SAlan SternP1 before the critical section started and so would have been visible
2608c58a8017SAlan Sternto the READ_ONCE().  (Another way of putting it is that the fre link
2609c58a8017SAlan Sternfrom the READ_ONCE() to the WRITE_ONCE() gives rise to an rcu-link
2610c58a8017SAlan Sternbetween those two events.)
2611c58a8017SAlan Stern
2612c58a8017SAlan SternThis means there is an rcu-fence link from P1's "y = 2" store to P0's
2613c58a8017SAlan Stern"y = 3" store, and consequently the first must propagate from P1 to P0
2614c58a8017SAlan Sternbefore the second can execute.  Therefore the two stores cannot be
2615c58a8017SAlan Sternconcurrent and there is no race, even though P1's plain store to y
2616c58a8017SAlan Sternisn't w-post-bounded by any marked accesses.
2617c58a8017SAlan Stern
Putting all this material together yields the following picture.  For
race-candidate stores W and W', where W ->co W', the LKMM says the
stores don't race if W can be linked to W' by a

	w-post-bounded ; vis ; w-pre-bounded

sequence.  If W is plain then they also have to be linked by an

	r-post-bounded ; xb* ; w-pre-bounded

sequence, and if W' is plain then they also have to be linked by a

	w-post-bounded ; vis ; r-pre-bounded

sequence.  For race-candidate load R and store W, the LKMM says the
two accesses don't race if R can be linked to W by an

	r-post-bounded ; xb* ; w-pre-bounded

sequence or if W can be linked to R by a

	w-post-bounded ; vis ; r-pre-bounded

sequence.  For the cases involving a vis link, the LKMM also accepts
sequences in which W is linked to W' or R by a

	strong-fence ; xb* ; {w and/or r}-pre-bounded

sequence with no post-bounding, and in every case the LKMM also allows
the link simply to be a fence with no bounding at all.  If no sequence
of the appropriate sort exists, the LKMM says that the accesses race.
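
To make the first of these sequences concrete, here is a minimal
sketch (the variable names are invented; this litmus test is not part
of the model's sources) in which two plain stores to x are race
candidates yet are linked by a w-post-bounded ; vis ; w-pre-bounded
sequence:

	int x, s;

	P0()
	{
		x = 1;				/* plain store W */
		smp_store_release(&s, 1);	/* w-post-bounds W */
	}

	P1()
	{
		if (smp_load_acquire(&s) == 1)	/* w-pre-bounds W' */
			x = 2;			/* plain store W' */
	}

The release store guarantees that W is visible to P1 before the
acquire load executes, and W' cannot execute unless the acquire load
reads 1.  Hence whenever both stores execute, W has already propagated
to P1; the stores are not concurrent and the LKMM reports no race.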

There is one more part of the LKMM related to plain accesses (although
not to data races) that we should discuss.  Recall that many relations
such as hb are limited to marked accesses only.  As a result, the
happens-before, propagates-before, and rcu axioms (which state that
various relations must not contain a cycle) don't apply to plain
accesses.  Nevertheless, we do want to rule out such cycles, because
they don't make sense even for plain accesses.

To this end, the LKMM imposes three extra restrictions, together
called the "plain-coherence" axiom because of their resemblance to the
rules used by the operational model to ensure cache coherence (that
is, the rules governing the memory subsystem's choice of a store to
satisfy a load request and its determination of where a store will
fall in the coherence order):

	If R and W are race candidates and it is possible to link R to
	W by one of the xb* sequences listed above, then W ->rfe R is
	not allowed (i.e., a load cannot read from a store that it
	executes before, even if one or both is plain).  A sketch
	illustrating this rule follows the list.

	If W and R are race candidates and it is possible to link W to
	R by one of the vis sequences listed above, then R ->fre W is
	not allowed (i.e., if a store is visible to a load then the
	load must read from that store or one coherence-after it).

	If W and W' are race candidates and it is possible to link W
	to W' by one of the vis sequences listed above, then W' ->co W
	is not allowed (i.e., if one store is visible to a second then
	the second must come after the first in the coherence order).
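
Here is the promised sketch of the first rule (again with invented
names; it is not from the model's sources).  The plain load and the
plain store are race candidates, and the load is linked to the store
by an r-post-bounded ; xb* ; w-pre-bounded sequence:

	int x, s;

	P0()
	{
		int r0;

		r0 = x;				/* plain load R */
		smp_store_release(&s, 1);	/* r-post-bounds R */
	}

	P1()
	{
		if (smp_load_acquire(&s) == 1)	/* w-pre-bounds W */
			x = 1;			/* plain store W */
	}

The chain forces R to execute before W, so W ->rfe R is not allowed;
in other words, the outcome r0 = 1 is forbidden.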

This is the extent to which the LKMM deals with plain accesses.
Perhaps it could say more (for example, plain accesses might
contribute to the ppo relation), but at the moment it seems that this
minimal, conservative approach is good enough.


ODDS AND ENDS
-------------

This section covers material that didn't quite fit anywhere in the
earlier sections.

The descriptions in this document don't always match the formal
version of the LKMM exactly.  For example, the actual formal
definition of the prop relation makes the initial coe or fre part
optional, and it doesn't require the events linked by the relation to
be on the same CPU.  These differences are very unimportant; indeed,
instances where the coe/fre part of prop is missing are of no interest
because all the other parts (fences and rfe) are already included in
hb anyway, and where the formal model adds prop into hb, it includes
an explicit requirement that the events being linked are on the same
CPU.

Another minor difference has to do with events that are both memory
accesses and fences, such as those corresponding to smp_load_acquire()
calls.  In the formal model, these events aren't actually both reads
and fences; rather, they are read events with an annotation marking
them as acquires.  (Or write events annotated as releases, in the case
of smp_store_release().)  The final effect is the same.

Although we didn't mention it above, the instruction execution
ordering provided by the smp_rmb() fence doesn't apply to read events
that are part of a non-value-returning atomic update.  For instance,
given:

	atomic_inc(&x);
	smp_rmb();
	r1 = READ_ONCE(y);

it is not guaranteed that the load from y will execute after the
update to x.  This is because the ARMv8 architecture allows
non-value-returning atomic operations effectively to be executed off
the CPU.  Basically, the CPU tells the memory subsystem to increment
x, and then the increment is carried out by the memory hardware with
no further involvement from the CPU.  Since the CPU doesn't ever read
the value of x, there is nothing for the smp_rmb() fence to act on.
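
If the ordering matters, one way to obtain it (a sketch, not part of
the original example) is to use one of the special fences described
later in this section, since smp_mb__after_atomic() does order
po-earlier atomic updates against po-later events:

	atomic_inc(&x);
	smp_mb__after_atomic();
	r1 = READ_ONCE(y);

Now the load from y is guaranteed to execute after the update to x.
(A value-returning atomic such as atomic_inc_return() would also
provide the ordering, since such operations are fully ordered.)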

The LKMM defines a few extra synchronization operations in terms of
things we have already covered.  In particular, rcu_dereference() is
treated as READ_ONCE() and rcu_assign_pointer() is treated as
smp_store_release() -- which is basically how the Linux kernel treats
them.
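
Thus, as far as the model is concerned, the usual publish/subscribe
idiom (gp and p are invented names here) could just as well be written
with the corresponding marked accesses:

	rcu_assign_pointer(gp, p);	/* modeled as smp_store_release(&gp, p) */
	q = rcu_dereference(gp);	/* modeled as READ_ONCE(gp) */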

Although we said that plain accesses are not linked by the ppo
relation, they do contribute to it indirectly.  Firstly, when there is
an address dependency from a marked load R to a plain store W,
followed by smp_wmb() and then a marked store W', the LKMM creates a
ppo link from R to W'.  The reasoning behind this is perhaps a little
shaky, but essentially it says there is no way to generate object code
for this source code in which W' could execute before R.  Just as with
pre-bounding by address dependencies, it is possible for the compiler
to undermine this relation if sufficient care is not taken.
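
In code, the pattern looks like this (a sketch with invented names):

	int *r;

	r = READ_ONCE(ptr);	/* marked load R */
	*r = 1;			/* plain store W, address-dependent on R */
	smp_wmb();
	WRITE_ONCE(x, 2);	/* marked store W'; the LKMM links R ->ppo W' */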

Secondly, plain accesses can carry dependencies: If a data dependency
links a marked load R to a store W, and the store is read by a load R'
from the same thread, then the data loaded by R' depends on the data
loaded originally by R.  Thus, if R' is linked to any access X by a
dependency, R is also linked to access X by the same dependency, even
if W or R' (or both!) are plain.
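
For example (a sketch; the variable names are invented, and only the
first load is marked):

	int a, b, c[2];
	int r1, r2;

	r1 = READ_ONCE(a);	/* marked load R */
	b = r1;			/* plain store W, data-dependent on R */
	r2 = b;			/* plain load R' reading from W */
	c[r2 & 1] = 1;		/* address-dependent on R', hence on R */

The address dependency from R' to the final store extends back to R,
even though W and R' are plain accesses.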

There are a few oddball fences which need special treatment:
smp_mb__before_atomic(), smp_mb__after_atomic(), and
smp_mb__after_spinlock().  The LKMM uses fence events with special
annotations for them; they act as strong fences just like smp_mb()
except for the sets of events that they order.  Instead of ordering
all po-earlier events against all po-later events, as smp_mb() does,
they behave as follows:

	smp_mb__before_atomic() orders all po-earlier events against
	po-later atomic updates and the events following them (as
	sketched below);

	smp_mb__after_atomic() orders po-earlier atomic updates and
	the events preceding them against all po-later events;

	smp_mb__after_spinlock() orders po-earlier lock acquisition
	events and the events preceding them against all po-later
	events.
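
For instance, in the following sketch (x, v, and y are invented
names), the marked store to x is ordered against the atomic update and
against everything po-after it:

	WRITE_ONCE(x, 1);
	smp_mb__before_atomic();
	atomic_inc(&v);		/* the store to x is ordered before this update */
	r1 = READ_ONCE(y);	/* ... and before this load */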

Interestingly, RCU and locking each introduce the possibility of
deadlock.  When faced with code sequences such as:

	spin_lock(&s);
	spin_lock(&s);
	spin_unlock(&s);
	spin_unlock(&s);

or:

	rcu_read_lock();
	synchronize_rcu();
	rcu_read_unlock();

what does the LKMM have to say?  Answer: It says there are no allowed
executions at all, which makes sense.  But this can also lead to
misleading results, because if a piece of code has multiple possible
executions, some of which deadlock, the model will report only on the
non-deadlocking executions.  For example:

	int x, y;

	P0()
	{
		int r0;

		WRITE_ONCE(x, 1);
		r0 = READ_ONCE(y);
	}

	P1()
	{
		rcu_read_lock();
		if (READ_ONCE(x) > 0) {
			WRITE_ONCE(y, 36);
			synchronize_rcu();
		}
		rcu_read_unlock();
	}

Is it possible to end up with r0 = 36 at the end?  The LKMM will tell
you it is not, but the model won't mention that this is because P1
will self-deadlock in the executions where it stores 36 in y.