1			 ============================
2			 LINUX KERNEL MEMORY BARRIERS
3			 ============================
4
5By: David Howells <dhowells@redhat.com>
6    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7    Will Deacon <will.deacon@arm.com>
8    Peter Zijlstra <peterz@infradead.org>
9
10==========
11DISCLAIMER
12==========
13
14This document is not a specification; it is intentionally (for the sake of
15brevity) and unintentionally (due to being human) incomplete. This document is
16meant as a guide to using the various memory barriers provided by Linux, but
17in case of any doubt (and there are many) please ask.
18
19To repeat, this document is not a specification of what Linux expects from
20hardware.
21
22The purpose of this document is twofold:
23
24 (1) to specify the minimum functionality that one can rely on for any
25     particular barrier, and
26
27 (2) to provide a guide as to how to use the barriers that are available.
28
29Note that an architecture can provide more than the minimum requirement
30for any particular barrier, but if the architecure provides less than
31that, that architecture is incorrect.
32
33Note also that it is possible that a barrier may be a no-op for an
34architecture because the way that arch works renders an explicit barrier
35unnecessary in that case.
36
37
38========
39CONTENTS
40========
41
42 (*) Abstract memory access model.
43
44     - Device operations.
45     - Guarantees.
46
47 (*) What are memory barriers?
48
49     - Varieties of memory barrier.
50     - What may not be assumed about memory barriers?
51     - Data dependency barriers.
52     - Control dependencies.
53     - SMP barrier pairing.
54     - Examples of memory barrier sequences.
55     - Read memory barriers vs load speculation.
56     - Transitivity
57
58 (*) Explicit kernel barriers.
59
60     - Compiler barrier.
61     - CPU memory barriers.
62     - MMIO write barrier.
63
64 (*) Implicit kernel memory barriers.
65
66     - Lock acquisition functions.
67     - Interrupt disabling functions.
68     - Sleep and wake-up functions.
69     - Miscellaneous functions.
70
71 (*) Inter-CPU acquiring barrier effects.
72
73     - Acquires vs memory accesses.
74     - Acquires vs I/O accesses.
75
76 (*) Where are memory barriers needed?
77
78     - Interprocessor interaction.
79     - Atomic operations.
80     - Accessing devices.
81     - Interrupts.
82
83 (*) Kernel I/O barrier effects.
84
85 (*) Assumed minimum execution ordering model.
86
87 (*) The effects of the cpu cache.
88
89     - Cache coherency.
90     - Cache coherency vs DMA.
91     - Cache coherency vs MMIO.
92
93 (*) The things CPUs get up to.
94
95     - And then there's the Alpha.
96     - Virtual Machine Guests.
97
98 (*) Example uses.
99
100     - Circular buffers.
101
102 (*) References.
103
104
105============================
106ABSTRACT MEMORY ACCESS MODEL
107============================
108
109Consider the following abstract model of the system:
110
111		            :                :
112		            :                :
113		            :                :
114		+-------+   :   +--------+   :   +-------+
115		|       |   :   |        |   :   |       |
116		|       |   :   |        |   :   |       |
117		| CPU 1 |<----->| Memory |<----->| CPU 2 |
118		|       |   :   |        |   :   |       |
119		|       |   :   |        |   :   |       |
120		+-------+   :   +--------+   :   +-------+
121		    ^       :       ^        :       ^
122		    |       :       |        :       |
123		    |       :       |        :       |
124		    |       :       v        :       |
125		    |       :   +--------+   :       |
126		    |       :   |        |   :       |
127		    |       :   |        |   :       |
128		    +---------->| Device |<----------+
129		            :   |        |   :
130		            :   |        |   :
131		            :   +--------+   :
132		            :                :
133
134Each CPU executes a program that generates memory access operations.  In the
135abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
136perform the memory operations in any order it likes, provided program causality
137appears to be maintained.  Similarly, the compiler may also arrange the
138instructions it emits in any order it likes, provided it doesn't affect the
139apparent operation of the program.
140
141So in the above diagram, the effects of the memory operations performed by a
142CPU are perceived by the rest of the system as the operations cross the
143interface between the CPU and rest of the system (the dotted lines).
144
145
146For example, consider the following sequence of events:
147
148	CPU 1		CPU 2
149	===============	===============
150	{ A == 1; B == 2 }
151	A = 3;		x = B;
152	B = 4;		y = A;
153
154The set of accesses as seen by the memory system in the middle can be arranged
155in 24 different combinations:
156
157	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
158	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
159	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
160	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
161	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
162	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
163	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
164	STORE B=4, ...
165	...
166
167and can thus result in four different combinations of values:
168
169	x == 2, y == 1
170	x == 2, y == 3
171	x == 4, y == 1
172	x == 4, y == 3
173
174
175Furthermore, the stores committed by a CPU to the memory system may not be
176perceived by the loads made by another CPU in the same order as the stores were
177committed.
178
179
180As a further example, consider this sequence of events:
181
182	CPU 1		CPU 2
183	===============	===============
184	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
185	B = 4;		Q = P;
186	P = &B		D = *Q;
187
188There is an obvious data dependency here, as the value loaded into D depends on
189the address retrieved from P by CPU 2.  At the end of the sequence, any of the
190following results are possible:
191
192	(Q == &A) and (D == 1)
193	(Q == &B) and (D == 2)
194	(Q == &B) and (D == 4)
195
196Note that CPU 2 will never try and load C into D because the CPU will load P
197into Q before issuing the load of *Q.
198
199
200DEVICE OPERATIONS
201-----------------
202
203Some devices present their control interfaces as collections of memory
204locations, but the order in which the control registers are accessed is very
205important.  For instance, imagine an ethernet card with a set of internal
206registers that are accessed through an address port register (A) and a data
207port register (D).  To read internal register 5, the following code might then
208be used:
209
210	*A = 5;
211	x = *D;
212
213but this might show up as either of the following two sequences:
214
215	STORE *A = 5, x = LOAD *D
216	x = LOAD *D, STORE *A = 5
217
218the second of which will almost certainly result in a malfunction, since it set
219the address _after_ attempting to read the register.
220
221
222GUARANTEES
223----------
224
225There are some minimal guarantees that may be expected of a CPU:
226
227 (*) On any given CPU, dependent memory accesses will be issued in order, with
228     respect to itself.  This means that for:
229
230	Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);
231
232     the CPU will issue the following memory operations:
233
234	Q = LOAD P, D = LOAD *Q
235
236     and always in that order.  On most systems, smp_read_barrier_depends()
237     does nothing, but it is required for DEC Alpha.  The READ_ONCE()
238     is required to prevent compiler mischief.  Please note that you
239     should normally use something like rcu_dereference() instead of
240     open-coding smp_read_barrier_depends().
241
242 (*) Overlapping loads and stores within a particular CPU will appear to be
243     ordered within that CPU.  This means that for:
244
245	a = READ_ONCE(*X); WRITE_ONCE(*X, b);
246
247     the CPU will only issue the following sequence of memory operations:
248
249	a = LOAD *X, STORE *X = b
250
251     And for:
252
253	WRITE_ONCE(*X, c); d = READ_ONCE(*X);
254
255     the CPU will only issue:
256
257	STORE *X = c, d = LOAD *X
258
259     (Loads and stores overlap if they are targeted at overlapping pieces of
260     memory).
261
262And there are a number of things that _must_ or _must_not_ be assumed:
263
264 (*) It _must_not_ be assumed that the compiler will do what you want
265     with memory references that are not protected by READ_ONCE() and
266     WRITE_ONCE().  Without them, the compiler is within its rights to
267     do all sorts of "creative" transformations, which are covered in
268     the COMPILER BARRIER section.
269
270 (*) It _must_not_ be assumed that independent loads and stores will be issued
271     in the order given.  This means that for:
272
273	X = *A; Y = *B; *D = Z;
274
275     we may get any of the following sequences:
276
277	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
278	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
279	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
280	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
281	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
282	STORE *D = Z, Y = LOAD *B,  X = LOAD *A
283
284 (*) It _must_ be assumed that overlapping memory accesses may be merged or
285     discarded.  This means that for:
286
287	X = *A; Y = *(A + 4);
288
289     we may get any one of the following sequences:
290
291	X = LOAD *A; Y = LOAD *(A + 4);
292	Y = LOAD *(A + 4); X = LOAD *A;
293	{X, Y} = LOAD {*A, *(A + 4) };
294
295     And for:
296
297	*A = X; *(A + 4) = Y;
298
299     we may get any of:
300
301	STORE *A = X; STORE *(A + 4) = Y;
302	STORE *(A + 4) = Y; STORE *A = X;
303	STORE {*A, *(A + 4) } = {X, Y};
304
305And there are anti-guarantees:
306
307 (*) These guarantees do not apply to bitfields, because compilers often
308     generate code to modify these using non-atomic read-modify-write
309     sequences.  Do not attempt to use bitfields to synchronize parallel
310     algorithms.
311
312 (*) Even in cases where bitfields are protected by locks, all fields
313     in a given bitfield must be protected by one lock.  If two fields
314     in a given bitfield are protected by different locks, the compiler's
315     non-atomic read-modify-write sequences can cause an update to one
316     field to corrupt the value of an adjacent field.
317
318 (*) These guarantees apply only to properly aligned and sized scalar
319     variables.  "Properly sized" currently means variables that are
320     the same size as "char", "short", "int" and "long".  "Properly
321     aligned" means the natural alignment, thus no constraints for
322     "char", two-byte alignment for "short", four-byte alignment for
323     "int", and either four-byte or eight-byte alignment for "long",
324     on 32-bit and 64-bit systems, respectively.  Note that these
325     guarantees were introduced into the C11 standard, so beware when
326     using older pre-C11 compilers (for example, gcc 4.6).  The portion
327     of the standard containing this guarantee is Section 3.14, which
328     defines "memory location" as follows:
329
330     	memory location
331		either an object of scalar type, or a maximal sequence
332		of adjacent bit-fields all having nonzero width
333
334		NOTE 1: Two threads of execution can update and access
335		separate memory locations without interfering with
336		each other.
337
338		NOTE 2: A bit-field and an adjacent non-bit-field member
339		are in separate memory locations. The same applies
340		to two bit-fields, if one is declared inside a nested
341		structure declaration and the other is not, or if the two
342		are separated by a zero-length bit-field declaration,
343		or if they are separated by a non-bit-field member
344		declaration. It is not safe to concurrently update two
345		bit-fields in the same structure if all members declared
346		between them are also bit-fields, no matter what the
347		sizes of those intervening bit-fields happen to be.
348
349
350=========================
351WHAT ARE MEMORY BARRIERS?
352=========================
353
354As can be seen above, independent memory operations are effectively performed
355in random order, but this can be a problem for CPU-CPU interaction and for I/O.
356What is required is some way of intervening to instruct the compiler and the
357CPU to restrict the order.
358
359Memory barriers are such interventions.  They impose a perceived partial
360ordering over the memory operations on either side of the barrier.
361
362Such enforcement is important because the CPUs and other devices in a system
363can use a variety of tricks to improve performance, including reordering,
364deferral and combination of memory operations; speculative loads; speculative
365branch prediction and various types of caching.  Memory barriers are used to
366override or suppress these tricks, allowing the code to sanely control the
367interaction of multiple CPUs and/or devices.
368
369
370VARIETIES OF MEMORY BARRIER
371---------------------------
372
373Memory barriers come in four basic varieties:
374
375 (1) Write (or store) memory barriers.
376
377     A write memory barrier gives a guarantee that all the STORE operations
378     specified before the barrier will appear to happen before all the STORE
379     operations specified after the barrier with respect to the other
380     components of the system.
381
382     A write barrier is a partial ordering on stores only; it is not required
383     to have any effect on loads.
384
385     A CPU can be viewed as committing a sequence of store operations to the
386     memory system as time progresses.  All stores before a write barrier will
387     occur in the sequence _before_ all the stores after the write barrier.
388
389     [!] Note that write barriers should normally be paired with read or data
390     dependency barriers; see the "SMP barrier pairing" subsection.
391
392
393 (2) Data dependency barriers.
394
395     A data dependency barrier is a weaker form of read barrier.  In the case
396     where two loads are performed such that the second depends on the result
397     of the first (eg: the first load retrieves the address to which the second
398     load will be directed), a data dependency barrier would be required to
399     make sure that the target of the second load is updated before the address
400     obtained by the first load is accessed.
401
402     A data dependency barrier is a partial ordering on interdependent loads
403     only; it is not required to have any effect on stores, independent loads
404     or overlapping loads.
405
406     As mentioned in (1), the other CPUs in the system can be viewed as
407     committing sequences of stores to the memory system that the CPU being
408     considered can then perceive.  A data dependency barrier issued by the CPU
409     under consideration guarantees that for any load preceding it, if that
410     load touches one of a sequence of stores from another CPU, then by the
411     time the barrier completes, the effects of all the stores prior to that
412     touched by the load will be perceptible to any loads issued after the data
413     dependency barrier.
414
415     See the "Examples of memory barrier sequences" subsection for diagrams
416     showing the ordering constraints.
417
418     [!] Note that the first load really has to have a _data_ dependency and
419     not a control dependency.  If the address for the second load is dependent
420     on the first load, but the dependency is through a conditional rather than
421     actually loading the address itself, then it's a _control_ dependency and
422     a full read barrier or better is required.  See the "Control dependencies"
423     subsection for more information.
424
425     [!] Note that data dependency barriers should normally be paired with
426     write barriers; see the "SMP barrier pairing" subsection.
427
428
429 (3) Read (or load) memory barriers.
430
431     A read barrier is a data dependency barrier plus a guarantee that all the
432     LOAD operations specified before the barrier will appear to happen before
433     all the LOAD operations specified after the barrier with respect to the
434     other components of the system.
435
436     A read barrier is a partial ordering on loads only; it is not required to
437     have any effect on stores.
438
439     Read memory barriers imply data dependency barriers, and so can substitute
440     for them.
441
442     [!] Note that read barriers should normally be paired with write barriers;
443     see the "SMP barrier pairing" subsection.
444
445
446 (4) General memory barriers.
447
448     A general memory barrier gives a guarantee that all the LOAD and STORE
449     operations specified before the barrier will appear to happen before all
450     the LOAD and STORE operations specified after the barrier with respect to
451     the other components of the system.
452
453     A general memory barrier is a partial ordering over both loads and stores.
454
455     General memory barriers imply both read and write memory barriers, and so
456     can substitute for either.
457
458
459And a couple of implicit varieties:
460
461 (5) ACQUIRE operations.
462
463     This acts as a one-way permeable barrier.  It guarantees that all memory
464     operations after the ACQUIRE operation will appear to happen after the
465     ACQUIRE operation with respect to the other components of the system.
466     ACQUIRE operations include LOCK operations and both smp_load_acquire()
467     and smp_cond_acquire() operations. The later builds the necessary ACQUIRE
468     semantics from relying on a control dependency and smp_rmb().
469
470     Memory operations that occur before an ACQUIRE operation may appear to
471     happen after it completes.
472
473     An ACQUIRE operation should almost always be paired with a RELEASE
474     operation.
475
476
477 (6) RELEASE operations.
478
479     This also acts as a one-way permeable barrier.  It guarantees that all
480     memory operations before the RELEASE operation will appear to happen
481     before the RELEASE operation with respect to the other components of the
482     system. RELEASE operations include UNLOCK operations and
483     smp_store_release() operations.
484
485     Memory operations that occur after a RELEASE operation may appear to
486     happen before it completes.
487
488     The use of ACQUIRE and RELEASE operations generally precludes the need
489     for other sorts of memory barrier (but note the exceptions mentioned in
490     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
491     pair is -not- guaranteed to act as a full memory barrier.  However, after
492     an ACQUIRE on a given variable, all memory accesses preceding any prior
493     RELEASE on that same variable are guaranteed to be visible.  In other
494     words, within a given variable's critical section, all accesses of all
495     previous critical sections for that variable are guaranteed to have
496     completed.
497
498     This means that ACQUIRE acts as a minimal "acquire" operation and
499     RELEASE acts as a minimal "release" operation.
500
501A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
502and RELEASE variants in addition to fully-ordered and relaxed (no barrier
503semantics) definitions.  For compound atomics performing both a load and a
504store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
505only to the store portion of the operation.
506
507Memory barriers are only required where there's a possibility of interaction
508between two CPUs or between a CPU and a device.  If it can be guaranteed that
509there won't be any such interaction in any particular piece of code, then
510memory barriers are unnecessary in that piece of code.
511
512
513Note that these are the _minimum_ guarantees.  Different architectures may give
514more substantial guarantees, but they may _not_ be relied upon outside of arch
515specific code.
516
517
518WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
519----------------------------------------------
520
521There are certain things that the Linux kernel memory barriers do not guarantee:
522
523 (*) There is no guarantee that any of the memory accesses specified before a
524     memory barrier will be _complete_ by the completion of a memory barrier
525     instruction; the barrier can be considered to draw a line in that CPU's
526     access queue that accesses of the appropriate type may not cross.
527
528 (*) There is no guarantee that issuing a memory barrier on one CPU will have
529     any direct effect on another CPU or any other hardware in the system.  The
530     indirect effect will be the order in which the second CPU sees the effects
531     of the first CPU's accesses occur, but see the next point:
532
533 (*) There is no guarantee that a CPU will see the correct order of effects
534     from a second CPU's accesses, even _if_ the second CPU uses a memory
535     barrier, unless the first CPU _also_ uses a matching memory barrier (see
536     the subsection on "SMP Barrier Pairing").
537
538 (*) There is no guarantee that some intervening piece of off-the-CPU
539     hardware[*] will not reorder the memory accesses.  CPU cache coherency
540     mechanisms should propagate the indirect effects of a memory barrier
541     between CPUs, but might not do so in order.
542
543	[*] For information on bus mastering DMA and coherency please read:
544
545	    Documentation/PCI/pci.txt
546	    Documentation/DMA-API-HOWTO.txt
547	    Documentation/DMA-API.txt
548
549
550DATA DEPENDENCY BARRIERS
551------------------------
552
553The usage requirements of data dependency barriers are a little subtle, and
554it's not always obvious that they're needed.  To illustrate, consider the
555following sequence of events:
556
557	CPU 1		      CPU 2
558	===============	      ===============
559	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
560	B = 4;
561	<write barrier>
562	WRITE_ONCE(P, &B)
563			      Q = READ_ONCE(P);
564			      D = *Q;
565
566There's a clear data dependency here, and it would seem that by the end of the
567sequence, Q must be either &A or &B, and that:
568
569	(Q == &A) implies (D == 1)
570	(Q == &B) implies (D == 4)
571
572But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
573leading to the following situation:
574
575	(Q == &B) and (D == 2) ????
576
577Whilst this may seem like a failure of coherency or causality maintenance, it
578isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
579Alpha).
580
581To deal with this, a data dependency barrier or better must be inserted
582between the address load and the data load:
583
584	CPU 1		      CPU 2
585	===============	      ===============
586	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
587	B = 4;
588	<write barrier>
589	WRITE_ONCE(P, &B);
590			      Q = READ_ONCE(P);
591			      <data dependency barrier>
592			      D = *Q;
593
594This enforces the occurrence of one of the two implications, and prevents the
595third possibility from arising.
596
597A data-dependency barrier must also order against dependent writes:
598
599	CPU 1		      CPU 2
600	===============	      ===============
601	{ A == 1, B == 2, C = 3, P == &A, Q == &C }
602	B = 4;
603	<write barrier>
604	WRITE_ONCE(P, &B);
605			      Q = READ_ONCE(P);
606			      <data dependency barrier>
607			      *Q = 5;
608
609The data-dependency barrier must order the read into Q with the store
610into *Q.  This prohibits this outcome:
611
612	(Q == B) && (B == 4)
613
614Please note that this pattern should be rare.  After all, the whole point
615of dependency ordering is to -prevent- writes to the data structure, along
616with the expensive cache misses associated with those writes.  This pattern
617can be used to record rare error conditions and the like, and the ordering
618prevents such records from being lost.
619
620
621[!] Note that this extremely counterintuitive situation arises most easily on
622machines with split caches, so that, for example, one cache bank processes
623even-numbered cache lines and the other bank processes odd-numbered cache
624lines.  The pointer P might be stored in an odd-numbered cache line, and the
625variable B might be stored in an even-numbered cache line.  Then, if the
626even-numbered bank of the reading CPU's cache is extremely busy while the
627odd-numbered bank is idle, one can see the new value of the pointer P (&B),
628but the old value of the variable B (2).
629
630
631The data dependency barrier is very important to the RCU system,
632for example.  See rcu_assign_pointer() and rcu_dereference() in
633include/linux/rcupdate.h.  This permits the current target of an RCU'd
634pointer to be replaced with a new modified target, without the replacement
635target appearing to be incompletely initialised.
636
637See also the subsection on "Cache Coherency" for a more thorough example.
638
639
640CONTROL DEPENDENCIES
641--------------------
642
643A load-load control dependency requires a full read memory barrier, not
644simply a data dependency barrier to make it work correctly.  Consider the
645following bit of code:
646
647	q = READ_ONCE(a);
648	if (q) {
649		<data dependency barrier>  /* BUG: No data dependency!!! */
650		p = READ_ONCE(b);
651	}
652
653This will not have the desired effect because there is no actual data
654dependency, but rather a control dependency that the CPU may short-circuit
655by attempting to predict the outcome in advance, so that other CPUs see
656the load from b as having happened before the load from a.  In such a
657case what's actually required is:
658
659	q = READ_ONCE(a);
660	if (q) {
661		<read barrier>
662		p = READ_ONCE(b);
663	}
664
665However, stores are not speculated.  This means that ordering -is- provided
666for load-store control dependencies, as in the following example:
667
668	q = READ_ONCE(a);
669	if (q) {
670		WRITE_ONCE(b, p);
671	}
672
673Control dependencies pair normally with other types of barriers.  That
674said, please note that READ_ONCE() is not optional! Without the
675READ_ONCE(), the compiler might combine the load from 'a' with other
676loads from 'a', and the store to 'b' with other stores to 'b', with
677possible highly counterintuitive effects on ordering.
678
679Worse yet, if the compiler is able to prove (say) that the value of
680variable 'a' is always non-zero, it would be well within its rights
681to optimize the original example by eliminating the "if" statement
682as follows:
683
684	q = a;
685	b = p;  /* BUG: Compiler and CPU can both reorder!!! */
686
687So don't leave out the READ_ONCE().
688
689It is tempting to try to enforce ordering on identical stores on both
690branches of the "if" statement as follows:
691
692	q = READ_ONCE(a);
693	if (q) {
694		barrier();
695		WRITE_ONCE(b, p);
696		do_something();
697	} else {
698		barrier();
699		WRITE_ONCE(b, p);
700		do_something_else();
701	}
702
703Unfortunately, current compilers will transform this as follows at high
704optimization levels:
705
706	q = READ_ONCE(a);
707	barrier();
708	WRITE_ONCE(b, p);  /* BUG: No ordering vs. load from a!!! */
709	if (q) {
710		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
711		do_something();
712	} else {
713		/* WRITE_ONCE(b, p); -- moved up, BUG!!! */
714		do_something_else();
715	}
716
717Now there is no conditional between the load from 'a' and the store to
718'b', which means that the CPU is within its rights to reorder them:
719The conditional is absolutely required, and must be present in the
720assembly code even after all compiler optimizations have been applied.
721Therefore, if you need ordering in this example, you need explicit
722memory barriers, for example, smp_store_release():
723
724	q = READ_ONCE(a);
725	if (q) {
726		smp_store_release(&b, p);
727		do_something();
728	} else {
729		smp_store_release(&b, p);
730		do_something_else();
731	}
732
733In contrast, without explicit memory barriers, two-legged-if control
734ordering is guaranteed only when the stores differ, for example:
735
736	q = READ_ONCE(a);
737	if (q) {
738		WRITE_ONCE(b, p);
739		do_something();
740	} else {
741		WRITE_ONCE(b, r);
742		do_something_else();
743	}
744
745The initial READ_ONCE() is still required to prevent the compiler from
746proving the value of 'a'.
747
748In addition, you need to be careful what you do with the local variable 'q',
749otherwise the compiler might be able to guess the value and again remove
750the needed conditional.  For example:
751
752	q = READ_ONCE(a);
753	if (q % MAX) {
754		WRITE_ONCE(b, p);
755		do_something();
756	} else {
757		WRITE_ONCE(b, r);
758		do_something_else();
759	}
760
761If MAX is defined to be 1, then the compiler knows that (q % MAX) is
762equal to zero, in which case the compiler is within its rights to
763transform the above code into the following:
764
765	q = READ_ONCE(a);
766	WRITE_ONCE(b, p);
767	do_something_else();
768
769Given this transformation, the CPU is not required to respect the ordering
770between the load from variable 'a' and the store to variable 'b'.  It is
771tempting to add a barrier(), but this does not help.  The conditional
772is gone, and the barrier won't bring it back.  Therefore, if you are
773relying on this ordering, you should make sure that MAX is greater than
774one, perhaps as follows:
775
776	q = READ_ONCE(a);
777	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
778	if (q % MAX) {
779		WRITE_ONCE(b, p);
780		do_something();
781	} else {
782		WRITE_ONCE(b, r);
783		do_something_else();
784	}
785
786Please note once again that the stores to 'b' differ.  If they were
787identical, as noted earlier, the compiler could pull this store outside
788of the 'if' statement.
789
790You must also be careful not to rely too much on boolean short-circuit
791evaluation.  Consider this example:
792
793	q = READ_ONCE(a);
794	if (q || 1 > 0)
795		WRITE_ONCE(b, 1);
796
797Because the first condition cannot fault and the second condition is
798always true, the compiler can transform this example as following,
799defeating control dependency:
800
801	q = READ_ONCE(a);
802	WRITE_ONCE(b, 1);
803
804This example underscores the need to ensure that the compiler cannot
805out-guess your code.  More generally, although READ_ONCE() does force
806the compiler to actually emit code for a given load, it does not force
807the compiler to use the results.
808
809In addition, control dependencies apply only to the then-clause and
810else-clause of the if-statement in question.  In particular, it does
811not necessarily apply to code following the if-statement:
812
813	q = READ_ONCE(a);
814	if (q) {
815		WRITE_ONCE(b, p);
816	} else {
817		WRITE_ONCE(b, r);
818	}
819	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from "a". */
820
821It is tempting to argue that there in fact is ordering because the
822compiler cannot reorder volatile accesses and also cannot reorder
823the writes to "b" with the condition.  Unfortunately for this line
824of reasoning, the compiler might compile the two writes to "b" as
825conditional-move instructions, as in this fanciful pseudo-assembly
826language:
827
828	ld r1,a
829	ld r2,p
830	ld r3,r
831	cmp r1,$0
832	cmov,ne r4,r2
833	cmov,eq r4,r3
834	st r4,b
835	st $1,c
836
837A weakly ordered CPU would have no dependency of any sort between the load
838from "a" and the store to "c".  The control dependencies would extend
839only to the pair of cmov instructions and the store depending on them.
840In short, control dependencies apply only to the stores in the then-clause
841and else-clause of the if-statement in question (including functions
842invoked by those two clauses), not to code following that if-statement.
843
844Finally, control dependencies do -not- provide transitivity.  This is
845demonstrated by two related examples, with the initial values of
846x and y both being zero:
847
848	CPU 0                     CPU 1
849	=======================   =======================
850	r1 = READ_ONCE(x);        r2 = READ_ONCE(y);
851	if (r1 > 0)               if (r2 > 0)
852	  WRITE_ONCE(y, 1);         WRITE_ONCE(x, 1);
853
854	assert(!(r1 == 1 && r2 == 1));
855
856The above two-CPU example will never trigger the assert().  However,
857if control dependencies guaranteed transitivity (which they do not),
858then adding the following CPU would guarantee a related assertion:
859
860	CPU 2
861	=====================
862	WRITE_ONCE(x, 2);
863
864	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
865
866But because control dependencies do -not- provide transitivity, the above
867assertion can fail after the combined three-CPU example completes.  If you
868need the three-CPU example to provide ordering, you will need smp_mb()
869between the loads and stores in the CPU 0 and CPU 1 code fragments,
870that is, just before or just after the "if" statements.  Furthermore,
871the original two-CPU example is very fragile and should be avoided.
872
873These two examples are the LB and WWC litmus tests from this paper:
874http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
875site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
876
877In summary:
878
879  (*) Control dependencies can order prior loads against later stores.
880      However, they do -not- guarantee any other sort of ordering:
881      Not prior loads against later loads, nor prior stores against
882      later anything.  If you need these other forms of ordering,
883      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
884      later loads, smp_mb().
885
886  (*) If both legs of the "if" statement begin with identical stores to
887      the same variable, then those stores must be ordered, either by
888      preceding both of them with smp_mb() or by using smp_store_release()
889      to carry out the stores.  Please note that it is -not- sufficient
890      to use barrier() at beginning of each leg of the "if" statement
891      because, as shown by the example above, optimizing compilers can
892      destroy the control dependency while respecting the letter of the
893      barrier() law.
894
895  (*) Control dependencies require at least one run-time conditional
896      between the prior load and the subsequent store, and this
897      conditional must involve the prior load.  If the compiler is able
898      to optimize the conditional away, it will have also optimized
899      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
900      can help to preserve the needed conditional.
901
902  (*) Control dependencies require that the compiler avoid reordering the
903      dependency into nonexistence.  Careful use of READ_ONCE() or
904      atomic{,64}_read() can help to preserve your control dependency.
905      Please see the COMPILER BARRIER section for more information.
906
907  (*) Control dependencies apply only to the then-clause and else-clause
908      of the if-statement containing the control dependency, including
909      any functions that these two clauses call.  Control dependencies
910      do -not- apply to code following the if-statement containing the
911      control dependency.
912
913  (*) Control dependencies pair normally with other types of barriers.
914
915  (*) Control dependencies do -not- provide transitivity.  If you
916      need transitivity, use smp_mb().
917
918
919SMP BARRIER PAIRING
920-------------------
921
922When dealing with CPU-CPU interactions, certain types of memory barrier should
923always be paired.  A lack of appropriate pairing is almost certainly an error.
924
925General barriers pair with each other, though they also pair with most
926other types of barriers, albeit without transitivity.  An acquire barrier
927pairs with a release barrier, but both may also pair with other barriers,
928including of course general barriers.  A write barrier pairs with a data
929dependency barrier, a control dependency, an acquire barrier, a release
930barrier, a read barrier, or a general barrier.  Similarly a read barrier,
931control dependency, or a data dependency barrier pairs with a write
932barrier, an acquire barrier, a release barrier, or a general barrier:
933
934	CPU 1		      CPU 2
935	===============	      ===============
936	WRITE_ONCE(a, 1);
937	<write barrier>
938	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
939			      <read barrier>
940			      y = READ_ONCE(a);
941
942Or:
943
944	CPU 1		      CPU 2
945	===============	      ===============================
946	a = 1;
947	<write barrier>
948	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
949			      <data dependency barrier>
950			      y = *x;
951
952Or even:
953
954	CPU 1		      CPU 2
955	===============	      ===============================
956	r1 = READ_ONCE(y);
957	<general barrier>
958	WRITE_ONCE(y, 1);     if (r2 = READ_ONCE(x)) {
959			         <implicit control dependency>
960			         WRITE_ONCE(y, 1);
961			      }
962
963	assert(r1 == 0 || r2 == 0);
964
965Basically, the read barrier always has to be there, even though it can be of
966the "weaker" type.
967
968[!] Note that the stores before the write barrier would normally be expected to
969match the loads after the read barrier or the data dependency barrier, and vice
970versa:
971
972	CPU 1                               CPU 2
973	===================                 ===================
974	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
975	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
976	<write barrier>            \        <read barrier>
977	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
978	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
979
980
981EXAMPLES OF MEMORY BARRIER SEQUENCES
982------------------------------------
983
984Firstly, write barriers act as partial orderings on store operations.
985Consider the following sequence of events:
986
987	CPU 1
988	=======================
989	STORE A = 1
990	STORE B = 2
991	STORE C = 3
992	<write barrier>
993	STORE D = 4
994	STORE E = 5
995
996This sequence of events is committed to the memory coherence system in an order
997that the rest of the system might perceive as the unordered set of { STORE A,
998STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
999}:
1000
1001	+-------+       :      :
1002	|       |       +------+
1003	|       |------>| C=3  |     }     /\
1004	|       |  :    +------+     }-----  \  -----> Events perceptible to
1005	|       |  :    | A=1  |     }        \/       the rest of the system
1006	|       |  :    +------+     }
1007	| CPU 1 |  :    | B=2  |     }
1008	|       |       +------+     }
1009	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
1010	|       |       +------+     }        requires all stores prior to the
1011	|       |  :    | E=5  |     }        barrier to be committed before
1012	|       |  :    +------+     }        further stores may take place
1013	|       |------>| D=4  |     }
1014	|       |       +------+
1015	+-------+       :      :
1016	                   |
1017	                   | Sequence in which stores are committed to the
1018	                   | memory system by CPU 1
1019	                   V
1020
1021
1022Secondly, data dependency barriers act as partial orderings on data-dependent
1023loads.  Consider the following sequence of events:
1024
1025	CPU 1			CPU 2
1026	=======================	=======================
1027		{ B = 7; X = 9; Y = 8; C = &Y }
1028	STORE A = 1
1029	STORE B = 2
1030	<write barrier>
1031	STORE C = &B		LOAD X
1032	STORE D = 4		LOAD C (gets &B)
1033				LOAD *C (reads B)
1034
1035Without intervention, CPU 2 may perceive the events on CPU 1 in some
1036effectively random order, despite the write barrier issued by CPU 1:
1037
1038	+-------+       :      :                :       :
1039	|       |       +------+                +-------+  | Sequence of update
1040	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
1041	|       |  :    +------+     \          +-------+  | CPU 2
1042	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
1043	|       |       +------+       |        +-------+
1044	|       |   wwwwwwwwwwwwwwww   |        :       :
1045	|       |       +------+       |        :       :
1046	|       |  :    | C=&B |---    |        :       :       +-------+
1047	|       |  :    +------+   \   |        +-------+       |       |
1048	|       |------>| D=4  |    ----------->| C->&B |------>|       |
1049	|       |       +------+       |        +-------+       |       |
1050	+-------+       :      :       |        :       :       |       |
1051	                               |        :       :       |       |
1052	                               |        :       :       | CPU 2 |
1053	                               |        +-------+       |       |
1054	    Apparently incorrect --->  |        | B->7  |------>|       |
1055	    perception of B (!)        |        +-------+       |       |
1056	                               |        :       :       |       |
1057	                               |        +-------+       |       |
1058	    The load of X holds --->    \       | X->9  |------>|       |
1059	    up the maintenance           \      +-------+       |       |
1060	    of coherence of B             ----->| B->2  |       +-------+
1061	                                        +-------+
1062	                                        :       :
1063
1064
1065In the above example, CPU 2 perceives that B is 7, despite the load of *C
1066(which would be B) coming after the LOAD of C.
1067
1068If, however, a data dependency barrier were to be placed between the load of C
1069and the load of *C (ie: B) on CPU 2:
1070
1071	CPU 1			CPU 2
1072	=======================	=======================
1073		{ B = 7; X = 9; Y = 8; C = &Y }
1074	STORE A = 1
1075	STORE B = 2
1076	<write barrier>
1077	STORE C = &B		LOAD X
1078	STORE D = 4		LOAD C (gets &B)
1079				<data dependency barrier>
1080				LOAD *C (reads B)
1081
1082then the following will occur:
1083
1084	+-------+       :      :                :       :
1085	|       |       +------+                +-------+
1086	|       |------>| B=2  |-----       --->| Y->8  |
1087	|       |  :    +------+     \          +-------+
1088	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
1089	|       |       +------+       |        +-------+
1090	|       |   wwwwwwwwwwwwwwww   |        :       :
1091	|       |       +------+       |        :       :
1092	|       |  :    | C=&B |---    |        :       :       +-------+
1093	|       |  :    +------+   \   |        +-------+       |       |
1094	|       |------>| D=4  |    ----------->| C->&B |------>|       |
1095	|       |       +------+       |        +-------+       |       |
1096	+-------+       :      :       |        :       :       |       |
1097	                               |        :       :       |       |
1098	                               |        :       :       | CPU 2 |
1099	                               |        +-------+       |       |
1100	                               |        | X->9  |------>|       |
1101	                               |        +-------+       |       |
1102	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
1103	  prior to the store of C        \      +-------+       |       |
1104	  are perceptible to              ----->| B->2  |------>|       |
1105	  subsequent loads                      +-------+       |       |
1106	                                        :       :       +-------+
1107
1108
1109And thirdly, a read barrier acts as a partial order on loads.  Consider the
1110following sequence of events:
1111
1112	CPU 1			CPU 2
1113	=======================	=======================
1114		{ A = 0, B = 9 }
1115	STORE A=1
1116	<write barrier>
1117	STORE B=2
1118				LOAD B
1119				LOAD A
1120
1121Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1122some effectively random order, despite the write barrier issued by CPU 1:
1123
1124	+-------+       :      :                :       :
1125	|       |       +------+                +-------+
1126	|       |------>| A=1  |------      --->| A->0  |
1127	|       |       +------+      \         +-------+
1128	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1129	|       |       +------+        |       +-------+
1130	|       |------>| B=2  |---     |       :       :
1131	|       |       +------+   \    |       :       :       +-------+
1132	+-------+       :      :    \   |       +-------+       |       |
1133	                             ---------->| B->2  |------>|       |
1134	                                |       +-------+       | CPU 2 |
1135	                                |       | A->0  |------>|       |
1136	                                |       +-------+       |       |
1137	                                |       :       :       +-------+
1138	                                 \      :       :
1139	                                  \     +-------+
1140	                                   ---->| A->1  |
1141	                                        +-------+
1142	                                        :       :
1143
1144
1145If, however, a read barrier were to be placed between the load of B and the
1146load of A on CPU 2:
1147
1148	CPU 1			CPU 2
1149	=======================	=======================
1150		{ A = 0, B = 9 }
1151	STORE A=1
1152	<write barrier>
1153	STORE B=2
1154				LOAD B
1155				<read barrier>
1156				LOAD A
1157
1158then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
11592:
1160
1161	+-------+       :      :                :       :
1162	|       |       +------+                +-------+
1163	|       |------>| A=1  |------      --->| A->0  |
1164	|       |       +------+      \         +-------+
1165	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1166	|       |       +------+        |       +-------+
1167	|       |------>| B=2  |---     |       :       :
1168	|       |       +------+   \    |       :       :       +-------+
1169	+-------+       :      :    \   |       +-------+       |       |
1170	                             ---------->| B->2  |------>|       |
1171	                                |       +-------+       | CPU 2 |
1172	                                |       :       :       |       |
1173	                                |       :       :       |       |
1174	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1175	  barrier causes all effects      \     +-------+       |       |
1176	  prior to the storage of B        ---->| A->1  |------>|       |
1177	  to be perceptible to CPU 2            +-------+       |       |
1178	                                        :       :       +-------+
1179
1180
1181To illustrate this more completely, consider what could happen if the code
1182contained a load of A either side of the read barrier:
1183
1184	CPU 1			CPU 2
1185	=======================	=======================
1186		{ A = 0, B = 9 }
1187	STORE A=1
1188	<write barrier>
1189	STORE B=2
1190				LOAD B
1191				LOAD A [first load of A]
1192				<read barrier>
1193				LOAD A [second load of A]
1194
1195Even though the two loads of A both occur after the load of B, they may both
1196come up with different values:
1197
1198	+-------+       :      :                :       :
1199	|       |       +------+                +-------+
1200	|       |------>| A=1  |------      --->| A->0  |
1201	|       |       +------+      \         +-------+
1202	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1203	|       |       +------+        |       +-------+
1204	|       |------>| B=2  |---     |       :       :
1205	|       |       +------+   \    |       :       :       +-------+
1206	+-------+       :      :    \   |       +-------+       |       |
1207	                             ---------->| B->2  |------>|       |
1208	                                |       +-------+       | CPU 2 |
1209	                                |       :       :       |       |
1210	                                |       :       :       |       |
1211	                                |       +-------+       |       |
1212	                                |       | A->0  |------>| 1st   |
1213	                                |       +-------+       |       |
1214	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1215	  barrier causes all effects      \     +-------+       |       |
1216	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
1217	  to be perceptible to CPU 2            +-------+       |       |
1218	                                        :       :       +-------+
1219
1220
1221But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1222before the read barrier completes anyway:
1223
1224	+-------+       :      :                :       :
1225	|       |       +------+                +-------+
1226	|       |------>| A=1  |------      --->| A->0  |
1227	|       |       +------+      \         +-------+
1228	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1229	|       |       +------+        |       +-------+
1230	|       |------>| B=2  |---     |       :       :
1231	|       |       +------+   \    |       :       :       +-------+
1232	+-------+       :      :    \   |       +-------+       |       |
1233	                             ---------->| B->2  |------>|       |
1234	                                |       +-------+       | CPU 2 |
1235	                                |       :       :       |       |
1236	                                 \      :       :       |       |
1237	                                  \     +-------+       |       |
1238	                                   ---->| A->1  |------>| 1st   |
1239	                                        +-------+       |       |
1240	                                    rrrrrrrrrrrrrrrrr   |       |
1241	                                        +-------+       |       |
1242	                                        | A->1  |------>| 2nd   |
1243	                                        +-------+       |       |
1244	                                        :       :       +-------+
1245
1246
1247The guarantee is that the second load will always come up with A == 1 if the
1248load of B came up with B == 2.  No such guarantee exists for the first load of
1249A; that may come up with either A == 0 or A == 1.
1250
1251
1252READ MEMORY BARRIERS VS LOAD SPECULATION
1253----------------------------------------
1254
1255Many CPUs speculate with loads: that is they see that they will need to load an
1256item from memory, and they find a time where they're not using the bus for any
1257other loads, and so do the load in advance - even though they haven't actually
1258got to that point in the instruction execution flow yet.  This permits the
1259actual load instruction to potentially complete immediately because the CPU
1260already has the value to hand.
1261
1262It may turn out that the CPU didn't actually need the value - perhaps because a
1263branch circumvented the load - in which case it can discard the value or just
1264cache it for later use.
1265
1266Consider:
1267
1268	CPU 1			CPU 2
1269	=======================	=======================
1270				LOAD B
1271				DIVIDE		} Divide instructions generally
1272				DIVIDE		} take a long time to perform
1273				LOAD A
1274
1275Which might appear as this:
1276
1277	                                        :       :       +-------+
1278	                                        +-------+       |       |
1279	                                    --->| B->2  |------>|       |
1280	                                        +-------+       | CPU 2 |
1281	                                        :       :DIVIDE |       |
1282	                                        +-------+       |       |
1283	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1284	division speculates on the              +-------+   ~   |       |
1285	LOAD of A                               :       :   ~   |       |
1286	                                        :       :DIVIDE |       |
1287	                                        :       :   ~   |       |
1288	Once the divisions are complete -->     :       :   ~-->|       |
1289	the CPU can then perform the            :       :       |       |
1290	LOAD with immediate effect              :       :       +-------+
1291
1292
1293Placing a read barrier or a data dependency barrier just before the second
1294load:
1295
1296	CPU 1			CPU 2
1297	=======================	=======================
1298				LOAD B
1299				DIVIDE
1300				DIVIDE
1301				<read barrier>
1302				LOAD A
1303
1304will force any value speculatively obtained to be reconsidered to an extent
1305dependent on the type of barrier used.  If there was no change made to the
1306speculated memory location, then the speculated value will just be used:
1307
1308	                                        :       :       +-------+
1309	                                        +-------+       |       |
1310	                                    --->| B->2  |------>|       |
1311	                                        +-------+       | CPU 2 |
1312	                                        :       :DIVIDE |       |
1313	                                        +-------+       |       |
1314	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1315	division speculates on the              +-------+   ~   |       |
1316	LOAD of A                               :       :   ~   |       |
1317	                                        :       :DIVIDE |       |
1318	                                        :       :   ~   |       |
1319	                                        :       :   ~   |       |
1320	                                    rrrrrrrrrrrrrrrr~   |       |
1321	                                        :       :   ~   |       |
1322	                                        :       :   ~-->|       |
1323	                                        :       :       |       |
1324	                                        :       :       +-------+
1325
1326
1327but if there was an update or an invalidation from another CPU pending, then
1328the speculation will be cancelled and the value reloaded:
1329
1330	                                        :       :       +-------+
1331	                                        +-------+       |       |
1332	                                    --->| B->2  |------>|       |
1333	                                        +-------+       | CPU 2 |
1334	                                        :       :DIVIDE |       |
1335	                                        +-------+       |       |
1336	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1337	division speculates on the              +-------+   ~   |       |
1338	LOAD of A                               :       :   ~   |       |
1339	                                        :       :DIVIDE |       |
1340	                                        :       :   ~   |       |
1341	                                        :       :   ~   |       |
1342	                                    rrrrrrrrrrrrrrrrr   |       |
1343	                                        +-------+       |       |
1344	The speculation is discarded --->   --->| A->1  |------>|       |
1345	and an updated value is                 +-------+       |       |
1346	retrieved                               :       :       +-------+
1347
1348
1349TRANSITIVITY
1350------------
1351
1352Transitivity is a deeply intuitive notion about ordering that is not
1353always provided by real computer systems.  The following example
1354demonstrates transitivity:
1355
1356	CPU 1			CPU 2			CPU 3
1357	=======================	=======================	=======================
1358		{ X = 0, Y = 0 }
1359	STORE X=1		LOAD X			STORE Y=1
1360				<general barrier>	<general barrier>
1361				LOAD Y			LOAD X
1362
1363Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1364This indicates that CPU 2's load from X in some sense follows CPU 1's
1365store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1366store to Y.  The question is then "Can CPU 3's load from X return 0?"
1367
1368Because CPU 2's load from X in some sense came after CPU 1's store, it
1369is natural to expect that CPU 3's load from X must therefore return 1.
1370This expectation is an example of transitivity: if a load executing on
1371CPU A follows a load from the same variable executing on CPU B, then
1372CPU A's load must either return the same value that CPU B's load did,
1373or must return some later value.
1374
1375In the Linux kernel, use of general memory barriers guarantees
1376transitivity.  Therefore, in the above example, if CPU 2's load from X
1377returns 1 and its load from Y returns 0, then CPU 3's load from X must
1378also return 1.
1379
1380However, transitivity is -not- guaranteed for read or write barriers.
1381For example, suppose that CPU 2's general barrier in the above example
1382is changed to a read barrier as shown below:
1383
1384	CPU 1			CPU 2			CPU 3
1385	=======================	=======================	=======================
1386		{ X = 0, Y = 0 }
1387	STORE X=1		LOAD X			STORE Y=1
1388				<read barrier>		<general barrier>
1389				LOAD Y			LOAD X
1390
1391This substitution destroys transitivity: in this example, it is perfectly
1392legal for CPU 2's load from X to return 1, its load from Y to return 0,
1393and CPU 3's load from X to return 0.
1394
1395The key point is that although CPU 2's read barrier orders its pair
1396of loads, it does not guarantee to order CPU 1's store.  Therefore, if
1397this example runs on a system where CPUs 1 and 2 share a store buffer
1398or a level of cache, CPU 2 might have early access to CPU 1's writes.
1399General barriers are therefore required to ensure that all CPUs agree
1400on the combined order of CPU 1's and CPU 2's accesses.
1401
1402General barriers provide "global transitivity", so that all CPUs will
1403agree on the order of operations.  In contrast, a chain of release-acquire
1404pairs provides only "local transitivity", so that only those CPUs on
1405the chain are guaranteed to agree on the combined order of the accesses.
1406For example, switching to C code in deference to Herman Hollerith:
1407
1408	int u, v, x, y, z;
1409
1410	void cpu0(void)
1411	{
1412		r0 = smp_load_acquire(&x);
1413		WRITE_ONCE(u, 1);
1414		smp_store_release(&y, 1);
1415	}
1416
1417	void cpu1(void)
1418	{
1419		r1 = smp_load_acquire(&y);
1420		r4 = READ_ONCE(v);
1421		r5 = READ_ONCE(u);
1422		smp_store_release(&z, 1);
1423	}
1424
1425	void cpu2(void)
1426	{
1427		r2 = smp_load_acquire(&z);
1428		smp_store_release(&x, 1);
1429	}
1430
1431	void cpu3(void)
1432	{
1433		WRITE_ONCE(v, 1);
1434		smp_mb();
1435		r3 = READ_ONCE(u);
1436	}
1437
1438Because cpu0(), cpu1(), and cpu2() participate in a local transitive
1439chain of smp_store_release()/smp_load_acquire() pairs, the following
1440outcome is prohibited:
1441
1442	r0 == 1 && r1 == 1 && r2 == 1
1443
1444Furthermore, because of the release-acquire relationship between cpu0()
1445and cpu1(), cpu1() must see cpu0()'s writes, so that the following
1446outcome is prohibited:
1447
1448	r1 == 1 && r5 == 0
1449
1450However, the transitivity of release-acquire is local to the participating
1451CPUs and does not apply to cpu3().  Therefore, the following outcome
1452is possible:
1453
1454	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0
1455
1456As an aside, the following outcome is also possible:
1457
1458	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1
1459
1460Although cpu0(), cpu1(), and cpu2() will see their respective reads and
1461writes in order, CPUs not involved in the release-acquire chain might
1462well disagree on the order.  This disagreement stems from the fact that
1463the weak memory-barrier instructions used to implement smp_load_acquire()
1464and smp_store_release() are not required to order prior stores against
1465subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
1466store to u as happening -after- cpu1()'s load from v, even though
1467both cpu0() and cpu1() agree that these two operations occurred in the
1468intended order.
1469
1470However, please keep in mind that smp_load_acquire() is not magic.
1471In particular, it simply reads from its argument with ordering.  It does
1472-not- ensure that any particular value will be read.  Therefore, the
1473following outcome is possible:
1474
1475	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0
1476
1477Note that this outcome can happen even on a mythical sequentially
1478consistent system where nothing is ever reordered.
1479
1480To reiterate, if your code requires global transitivity, use general
1481barriers throughout.
1482
1483
1484========================
1485EXPLICIT KERNEL BARRIERS
1486========================
1487
1488The Linux kernel has a variety of different barriers that act at different
1489levels:
1490
1491  (*) Compiler barrier.
1492
1493  (*) CPU memory barriers.
1494
1495  (*) MMIO write barrier.
1496
1497
1498COMPILER BARRIER
1499----------------
1500
1501The Linux kernel has an explicit compiler barrier function that prevents the
1502compiler from moving the memory accesses either side of it to the other side:
1503
1504	barrier();
1505
1506This is a general barrier -- there are no read-read or write-write
1507variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
1508thought of as weak forms of barrier() that affect only the specific
1509accesses flagged by the READ_ONCE() or WRITE_ONCE().
1510
1511The barrier() function has the following effects:
1512
1513 (*) Prevents the compiler from reordering accesses following the
1514     barrier() to precede any accesses preceding the barrier().
1515     One example use for this property is to ease communication between
1516     interrupt-handler code and the code that was interrupted.
1517
1518 (*) Within a loop, forces the compiler to load the variables used
1519     in that loop's conditional on each pass through that loop.
1520
1521The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
1522optimizations that, while perfectly safe in single-threaded code, can
1523be fatal in concurrent code.  Here are some examples of these sorts
1524of optimizations:
1525
1526 (*) The compiler is within its rights to reorder loads and stores
1527     to the same variable, and in some cases, the CPU is within its
1528     rights to reorder loads to the same variable.  This means that
1529     the following code:
1530
1531	a[0] = x;
1532	a[1] = x;
1533
1534     Might result in an older value of x stored in a[1] than in a[0].
1535     Prevent both the compiler and the CPU from doing this as follows:
1536
1537	a[0] = READ_ONCE(x);
1538	a[1] = READ_ONCE(x);
1539
1540     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
1541     accesses from multiple CPUs to a single variable.
1542
1543 (*) The compiler is within its rights to merge successive loads from
1544     the same variable.  Such merging can cause the compiler to "optimize"
1545     the following code:
1546
1547	while (tmp = a)
1548		do_something_with(tmp);
1549
1550     into the following code, which, although in some sense legitimate
1551     for single-threaded code, is almost certainly not what the developer
1552     intended:
1553
1554	if (tmp = a)
1555		for (;;)
1556			do_something_with(tmp);
1557
1558     Use READ_ONCE() to prevent the compiler from doing this to you:
1559
1560	while (tmp = READ_ONCE(a))
1561		do_something_with(tmp);
1562
1563 (*) The compiler is within its rights to reload a variable, for example,
1564     in cases where high register pressure prevents the compiler from
1565     keeping all data of interest in registers.  The compiler might
1566     therefore optimize the variable 'tmp' out of our previous example:
1567
1568	while (tmp = a)
1569		do_something_with(tmp);
1570
1571     This could result in the following code, which is perfectly safe in
1572     single-threaded code, but can be fatal in concurrent code:
1573
1574	while (a)
1575		do_something_with(a);
1576
1577     For example, the optimized version of this code could result in
1578     passing a zero to do_something_with() in the case where the variable
1579     a was modified by some other CPU between the "while" statement and
1580     the call to do_something_with().
1581
1582     Again, use READ_ONCE() to prevent the compiler from doing this:
1583
1584	while (tmp = READ_ONCE(a))
1585		do_something_with(tmp);
1586
1587     Note that if the compiler runs short of registers, it might save
1588     tmp onto the stack.  The overhead of this saving and later restoring
1589     is why compilers reload variables.  Doing so is perfectly safe for
1590     single-threaded code, so you need to tell the compiler about cases
1591     where it is not safe.
1592
1593 (*) The compiler is within its rights to omit a load entirely if it knows
1594     what the value will be.  For example, if the compiler can prove that
1595     the value of variable 'a' is always zero, it can optimize this code:
1596
1597	while (tmp = a)
1598		do_something_with(tmp);
1599
1600     Into this:
1601
1602	do { } while (0);
1603
1604     This transformation is a win for single-threaded code because it
1605     gets rid of a load and a branch.  The problem is that the compiler
1606     will carry out its proof assuming that the current CPU is the only
1607     one updating variable 'a'.  If variable 'a' is shared, then the
1608     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
1609     compiler that it doesn't know as much as it thinks it does:
1610
1611	while (tmp = READ_ONCE(a))
1612		do_something_with(tmp);
1613
1614     But please note that the compiler is also closely watching what you
1615     do with the value after the READ_ONCE().  For example, suppose you
1616     do the following and MAX is a preprocessor macro with the value 1:
1617
1618	while ((tmp = READ_ONCE(a)) % MAX)
1619		do_something_with(tmp);
1620
1621     Then the compiler knows that the result of the "%" operator applied
1622     to MAX will always be zero, again allowing the compiler to optimize
1623     the code into near-nonexistence.  (It will still load from the
1624     variable 'a'.)
1625
1626 (*) Similarly, the compiler is within its rights to omit a store entirely
1627     if it knows that the variable already has the value being stored.
1628     Again, the compiler assumes that the current CPU is the only one
1629     storing into the variable, which can cause the compiler to do the
1630     wrong thing for shared variables.  For example, suppose you have
1631     the following:
1632
1633	a = 0;
1634	... Code that does not store to variable a ...
1635	a = 0;
1636
1637     The compiler sees that the value of variable 'a' is already zero, so
1638     it might well omit the second store.  This would come as a fatal
1639     surprise if some other CPU might have stored to variable 'a' in the
1640     meantime.
1641
1642     Use WRITE_ONCE() to prevent the compiler from making this sort of
1643     wrong guess:
1644
1645	WRITE_ONCE(a, 0);
1646	... Code that does not store to variable a ...
1647	WRITE_ONCE(a, 0);
1648
1649 (*) The compiler is within its rights to reorder memory accesses unless
1650     you tell it not to.  For example, consider the following interaction
1651     between process-level code and an interrupt handler:
1652
1653	void process_level(void)
1654	{
1655		msg = get_message();
1656		flag = true;
1657	}
1658
1659	void interrupt_handler(void)
1660	{
1661		if (flag)
1662			process_message(msg);
1663	}
1664
1665     There is nothing to prevent the compiler from transforming
1666     process_level() to the following, in fact, this might well be a
1667     win for single-threaded code:
1668
1669	void process_level(void)
1670	{
1671		flag = true;
1672		msg = get_message();
1673	}
1674
1675     If the interrupt occurs between these two statement, then
1676     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
1677     to prevent this as follows:
1678
1679	void process_level(void)
1680	{
1681		WRITE_ONCE(msg, get_message());
1682		WRITE_ONCE(flag, true);
1683	}
1684
1685	void interrupt_handler(void)
1686	{
1687		if (READ_ONCE(flag))
1688			process_message(READ_ONCE(msg));
1689	}
1690
1691     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
1692     interrupt_handler() are needed if this interrupt handler can itself
1693     be interrupted by something that also accesses 'flag' and 'msg',
1694     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
1695     and WRITE_ONCE() are not needed in interrupt_handler() other than
1696     for documentation purposes.  (Note also that nested interrupts
1697     do not typically occur in modern Linux kernels, in fact, if an
1698     interrupt handler returns with interrupts enabled, you will get a
1699     WARN_ONCE() splat.)
1700
1701     You should assume that the compiler can move READ_ONCE() and
1702     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
1703     barrier(), or similar primitives.
1704
1705     This effect could also be achieved using barrier(), but READ_ONCE()
1706     and WRITE_ONCE() are more selective:  With READ_ONCE() and
1707     WRITE_ONCE(), the compiler need only forget the contents of the
1708     indicated memory locations, while with barrier() the compiler must
1709     discard the value of all memory locations that it has currented
1710     cached in any machine registers.  Of course, the compiler must also
1711     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
1712     though the CPU of course need not do so.
1713
1714 (*) The compiler is within its rights to invent stores to a variable,
1715     as in the following example:
1716
1717	if (a)
1718		b = a;
1719	else
1720		b = 42;
1721
1722     The compiler might save a branch by optimizing this as follows:
1723
1724	b = 42;
1725	if (a)
1726		b = a;
1727
1728     In single-threaded code, this is not only safe, but also saves
1729     a branch.  Unfortunately, in concurrent code, this optimization
1730     could cause some other CPU to see a spurious value of 42 -- even
1731     if variable 'a' was never zero -- when loading variable 'b'.
1732     Use WRITE_ONCE() to prevent this as follows:
1733
1734	if (a)
1735		WRITE_ONCE(b, a);
1736	else
1737		WRITE_ONCE(b, 42);
1738
1739     The compiler can also invent loads.  These are usually less
1740     damaging, but they can result in cache-line bouncing and thus in
1741     poor performance and scalability.  Use READ_ONCE() to prevent
1742     invented loads.
1743
1744 (*) For aligned memory locations whose size allows them to be accessed
1745     with a single memory-reference instruction, prevents "load tearing"
1746     and "store tearing," in which a single large access is replaced by
1747     multiple smaller accesses.  For example, given an architecture having
1748     16-bit store instructions with 7-bit immediate fields, the compiler
1749     might be tempted to use two 16-bit store-immediate instructions to
1750     implement the following 32-bit store:
1751
1752	p = 0x00010002;
1753
1754     Please note that GCC really does use this sort of optimization,
1755     which is not surprising given that it would likely take more
1756     than two instructions to build the constant and then store it.
1757     This optimization can therefore be a win in single-threaded code.
1758     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1759     this optimization in a volatile store.  In the absence of such bugs,
1760     use of WRITE_ONCE() prevents store tearing in the following example:
1761
1762	WRITE_ONCE(p, 0x00010002);
1763
1764     Use of packed structures can also result in load and store tearing,
1765     as in this example:
1766
1767	struct __attribute__((__packed__)) foo {
1768		short a;
1769		int b;
1770		short c;
1771	};
1772	struct foo foo1, foo2;
1773	...
1774
1775	foo2.a = foo1.a;
1776	foo2.b = foo1.b;
1777	foo2.c = foo1.c;
1778
1779     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
1780     volatile markings, the compiler would be well within its rights to
1781     implement these three assignment statements as a pair of 32-bit
1782     loads followed by a pair of 32-bit stores.  This would result in
1783     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
1784     and WRITE_ONCE() again prevent tearing in this example:
1785
1786	foo2.a = foo1.a;
1787	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
1788	foo2.c = foo1.c;
1789
1790All that aside, it is never necessary to use READ_ONCE() and
1791WRITE_ONCE() on a variable that has been marked volatile.  For example,
1792because 'jiffies' is marked volatile, it is never necessary to
1793say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
1794WRITE_ONCE() are implemented as volatile casts, which has no effect when
1795its argument is already marked volatile.
1796
1797Please note that these compiler barriers have no direct effect on the CPU,
1798which may then reorder things however it wishes.
1799
1800
1801CPU MEMORY BARRIERS
1802-------------------
1803
1804The Linux kernel has eight basic CPU memory barriers:
1805
1806	TYPE		MANDATORY		SMP CONDITIONAL
1807	===============	=======================	===========================
1808	GENERAL		mb()			smp_mb()
1809	WRITE		wmb()			smp_wmb()
1810	READ		rmb()			smp_rmb()
1811	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
1812
1813
1814All memory barriers except the data dependency barriers imply a compiler
1815barrier.  Data dependencies do not impose any additional compiler ordering.
1816
1817Aside: In the case of data dependencies, the compiler would be expected
1818to issue the loads in the correct order (eg. `a[b]` would have to load
1819the value of b before loading a[b]), however there is no guarantee in
1820the C specification that the compiler may not speculate the value of b
1821(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
1822tmp = a[b]; ).  There is also the problem of a compiler reloading b after
1823having loaded a[b], thus having a newer copy of b than a[b].  A consensus
1824has not yet been reached about these problems, however the READ_ONCE()
1825macro is a good place to start looking.
1826
1827SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1828systems because it is assumed that a CPU will appear to be self-consistent,
1829and will order overlapping accesses correctly with respect to itself.
1830However, see the subsection on "Virtual Machine Guests" below.
1831
1832[!] Note that SMP memory barriers _must_ be used to control the ordering of
1833references to shared memory on SMP systems, though the use of locking instead
1834is sufficient.
1835
1836Mandatory barriers should not be used to control SMP effects, since mandatory
1837barriers impose unnecessary overhead on both SMP and UP systems. They may,
1838however, be used to control MMIO effects on accesses through relaxed memory I/O
1839windows.  These barriers are required even on non-SMP systems as they affect
1840the order in which memory operations appear to a device by prohibiting both the
1841compiler and the CPU from reordering them.
1842
1843
1844There are some more advanced barrier functions:
1845
1846 (*) smp_store_mb(var, value)
1847
1848     This assigns the value to the variable and then inserts a full memory
1849     barrier after it.  It isn't guaranteed to insert anything more than a
1850     compiler barrier in a UP compilation.
1851
1852
1853 (*) smp_mb__before_atomic();
1854 (*) smp_mb__after_atomic();
1855
1856     These are for use with atomic (such as add, subtract, increment and
1857     decrement) functions that don't return a value, especially when used for
1858     reference counting.  These functions do not imply memory barriers.
1859
1860     These are also used for atomic bitop functions that do not return a
1861     value (such as set_bit and clear_bit).
1862
1863     As an example, consider a piece of code that marks an object as being dead
1864     and then decrements the object's reference count:
1865
1866	obj->dead = 1;
1867	smp_mb__before_atomic();
1868	atomic_dec(&obj->ref_count);
1869
1870     This makes sure that the death mark on the object is perceived to be set
1871     *before* the reference counter is decremented.
1872
1873     See Documentation/atomic_ops.txt for more information.  See the "Atomic
1874     operations" subsection for information on where to use these.
1875
1876
1877 (*) lockless_dereference();
1878
1879     This can be thought of as a pointer-fetch wrapper around the
1880     smp_read_barrier_depends() data-dependency barrier.
1881
1882     This is also similar to rcu_dereference(), but in cases where
1883     object lifetime is handled by some mechanism other than RCU, for
1884     example, when the objects removed only when the system goes down.
1885     In addition, lockless_dereference() is used in some data structures
1886     that can be used both with and without RCU.
1887
1888
1889 (*) dma_wmb();
1890 (*) dma_rmb();
1891
1892     These are for use with consistent memory to guarantee the ordering
1893     of writes or reads of shared memory accessible to both the CPU and a
1894     DMA capable device.
1895
1896     For example, consider a device driver that shares memory with a device
1897     and uses a descriptor status value to indicate if the descriptor belongs
1898     to the device or the CPU, and a doorbell to notify it when new
1899     descriptors are available:
1900
1901	if (desc->status != DEVICE_OWN) {
1902		/* do not read data until we own descriptor */
1903		dma_rmb();
1904
1905		/* read/modify data */
1906		read_data = desc->data;
1907		desc->data = write_data;
1908
1909		/* flush modifications before status update */
1910		dma_wmb();
1911
1912		/* assign ownership */
1913		desc->status = DEVICE_OWN;
1914
1915		/* force memory to sync before notifying device via MMIO */
1916		wmb();
1917
1918		/* notify device of new descriptors */
1919		writel(DESC_NOTIFY, doorbell);
1920	}
1921
1922     The dma_rmb() allows us guarantee the device has released ownership
1923     before we read the data from the descriptor, and the dma_wmb() allows
1924     us to guarantee the data is written to the descriptor before the device
1925     can see it now has ownership.  The wmb() is needed to guarantee that the
1926     cache coherent memory writes have completed before attempting a write to
1927     the cache incoherent MMIO region.
1928
1929     See Documentation/DMA-API.txt for more information on consistent memory.
1930
1931MMIO WRITE BARRIER
1932------------------
1933
1934The Linux kernel also has a special barrier for use with memory-mapped I/O
1935writes:
1936
1937	mmiowb();
1938
1939This is a variation on the mandatory write barrier that causes writes to weakly
1940ordered I/O regions to be partially ordered.  Its effects may go beyond the
1941CPU->Hardware interface and actually affect the hardware at some level.
1942
1943See the subsection "Acquires vs I/O accesses" for more information.
1944
1945
1946===============================
1947IMPLICIT KERNEL MEMORY BARRIERS
1948===============================
1949
1950Some of the other functions in the linux kernel imply memory barriers, amongst
1951which are locking and scheduling functions.
1952
1953This specification is a _minimum_ guarantee; any particular architecture may
1954provide more substantial guarantees, but these may not be relied upon outside
1955of arch specific code.
1956
1957
1958LOCK ACQUISITION FUNCTIONS
1959--------------------------
1960
1961The Linux kernel has a number of locking constructs:
1962
1963 (*) spin locks
1964 (*) R/W spin locks
1965 (*) mutexes
1966 (*) semaphores
1967 (*) R/W semaphores
1968
1969In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1970for each construct.  These operations all imply certain barriers:
1971
1972 (1) ACQUIRE operation implication:
1973
1974     Memory operations issued after the ACQUIRE will be completed after the
1975     ACQUIRE operation has completed.
1976
1977     Memory operations issued before the ACQUIRE may be completed after
1978     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
1979     combined with a following ACQUIRE, orders prior stores against
1980     subsequent loads and stores.  Note that this is weaker than smp_mb()!
1981     The smp_mb__before_spinlock() primitive is free on many architectures.
1982
1983 (2) RELEASE operation implication:
1984
1985     Memory operations issued before the RELEASE will be completed before the
1986     RELEASE operation has completed.
1987
1988     Memory operations issued after the RELEASE may be completed before the
1989     RELEASE operation has completed.
1990
1991 (3) ACQUIRE vs ACQUIRE implication:
1992
1993     All ACQUIRE operations issued before another ACQUIRE operation will be
1994     completed before that ACQUIRE operation.
1995
1996 (4) ACQUIRE vs RELEASE implication:
1997
1998     All ACQUIRE operations issued before a RELEASE operation will be
1999     completed before the RELEASE operation.
2000
2001 (5) Failed conditional ACQUIRE implication:
2002
2003     Certain locking variants of the ACQUIRE operation may fail, either due to
2004     being unable to get the lock immediately, or due to receiving an unblocked
2005     signal whilst asleep waiting for the lock to become available.  Failed
2006     locks do not imply any sort of barrier.
2007
2008[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
2009one-way barriers is that the effects of instructions outside of a critical
2010section may seep into the inside of the critical section.
2011
2012An ACQUIRE followed by a RELEASE may not be assumed to be full memory barrier
2013because it is possible for an access preceding the ACQUIRE to happen after the
2014ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
2015the two accesses can themselves then cross:
2016
2017	*A = a;
2018	ACQUIRE M
2019	RELEASE M
2020	*B = b;
2021
2022may occur as:
2023
2024	ACQUIRE M, STORE *B, STORE *A, RELEASE M
2025
2026When the ACQUIRE and RELEASE are a lock acquisition and release,
2027respectively, this same reordering can occur if the lock's ACQUIRE and
2028RELEASE are to the same lock variable, but only from the perspective of
2029another CPU not holding that lock.  In short, a ACQUIRE followed by an
2030RELEASE may -not- be assumed to be a full memory barrier.
2031
2032Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
2033not imply a full memory barrier.  Therefore, the CPU's execution of the
2034critical sections corresponding to the RELEASE and the ACQUIRE can cross,
2035so that:
2036
2037	*A = a;
2038	RELEASE M
2039	ACQUIRE N
2040	*B = b;
2041
2042could occur as:
2043
2044	ACQUIRE N, STORE *B, STORE *A, RELEASE M
2045
2046It might appear that this reordering could introduce a deadlock.
2047However, this cannot happen because if such a deadlock threatened,
2048the RELEASE would simply complete, thereby avoiding the deadlock.
2049
2050	Why does this work?
2051
2052	One key point is that we are only talking about the CPU doing
2053	the reordering, not the compiler.  If the compiler (or, for
2054	that matter, the developer) switched the operations, deadlock
2055	-could- occur.
2056
2057	But suppose the CPU reordered the operations.  In this case,
2058	the unlock precedes the lock in the assembly code.  The CPU
2059	simply elected to try executing the later lock operation first.
2060	If there is a deadlock, this lock operation will simply spin (or
2061	try to sleep, but more on that later).	The CPU will eventually
2062	execute the unlock operation (which preceded the lock operation
2063	in the assembly code), which will unravel the potential deadlock,
2064	allowing the lock operation to succeed.
2065
2066	But what if the lock is a sleeplock?  In that case, the code will
2067	try to enter the scheduler, where it will eventually encounter
2068	a memory barrier, which will force the earlier unlock operation
2069	to complete, again unraveling the deadlock.  There might be
2070	a sleep-unlock race, but the locking primitive needs to resolve
2071	such races properly in any case.
2072
2073Locks and semaphores may not provide any guarantee of ordering on UP compiled
2074systems, and so cannot be counted on in such a situation to actually achieve
2075anything at all - especially with respect to I/O accesses - unless combined
2076with interrupt disabling operations.
2077
2078See also the section on "Inter-CPU locking barrier effects".
2079
2080
2081As an example, consider the following:
2082
2083	*A = a;
2084	*B = b;
2085	ACQUIRE
2086	*C = c;
2087	*D = d;
2088	RELEASE
2089	*E = e;
2090	*F = f;
2091
2092The following sequence of events is acceptable:
2093
2094	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
2095
2096	[+] Note that {*F,*A} indicates a combined access.
2097
2098But none of the following are:
2099
2100	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
2101	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
2102	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
2103	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
2104
2105
2106
2107INTERRUPT DISABLING FUNCTIONS
2108-----------------------------
2109
2110Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
2111(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
2112barriers are required in such a situation, they must be provided from some
2113other means.
2114
2115
2116SLEEP AND WAKE-UP FUNCTIONS
2117---------------------------
2118
2119Sleeping and waking on an event flagged in global data can be viewed as an
2120interaction between two pieces of data: the task state of the task waiting for
2121the event and the global data used to indicate the event.  To make sure that
2122these appear to happen in the right order, the primitives to begin the process
2123of going to sleep, and the primitives to initiate a wake up imply certain
2124barriers.
2125
2126Firstly, the sleeper normally follows something like this sequence of events:
2127
2128	for (;;) {
2129		set_current_state(TASK_UNINTERRUPTIBLE);
2130		if (event_indicated)
2131			break;
2132		schedule();
2133	}
2134
2135A general memory barrier is interpolated automatically by set_current_state()
2136after it has altered the task state:
2137
2138	CPU 1
2139	===============================
2140	set_current_state();
2141	  smp_store_mb();
2142	    STORE current->state
2143	    <general barrier>
2144	LOAD event_indicated
2145
2146set_current_state() may be wrapped by:
2147
2148	prepare_to_wait();
2149	prepare_to_wait_exclusive();
2150
2151which therefore also imply a general memory barrier after setting the state.
2152The whole sequence above is available in various canned forms, all of which
2153interpolate the memory barrier in the right place:
2154
2155	wait_event();
2156	wait_event_interruptible();
2157	wait_event_interruptible_exclusive();
2158	wait_event_interruptible_timeout();
2159	wait_event_killable();
2160	wait_event_timeout();
2161	wait_on_bit();
2162	wait_on_bit_lock();
2163
2164
2165Secondly, code that performs a wake up normally follows something like this:
2166
2167	event_indicated = 1;
2168	wake_up(&event_wait_queue);
2169
2170or:
2171
2172	event_indicated = 1;
2173	wake_up_process(event_daemon);
2174
2175A write memory barrier is implied by wake_up() and co.  if and only if they
2176wake something up.  The barrier occurs before the task state is cleared, and so
2177sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
2178
2179	CPU 1				CPU 2
2180	===============================	===============================
2181	set_current_state();		STORE event_indicated
2182	  smp_store_mb();		wake_up();
2183	    STORE current->state	  <write barrier>
2184	    <general barrier>		  STORE current->state
2185	LOAD event_indicated
2186
2187To repeat, this write memory barrier is present if and only if something
2188is actually awakened.  To see this, consider the following sequence of
2189events, where X and Y are both initially zero:
2190
2191	CPU 1				CPU 2
2192	===============================	===============================
2193	X = 1;				STORE event_indicated
2194	smp_mb();			wake_up();
2195	Y = 1;				wait_event(wq, Y == 1);
2196	wake_up();			  load from Y sees 1, no memory barrier
2197					load from X might see 0
2198
2199In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
2200to see 1.
2201
2202The available waker functions include:
2203
2204	complete();
2205	wake_up();
2206	wake_up_all();
2207	wake_up_bit();
2208	wake_up_interruptible();
2209	wake_up_interruptible_all();
2210	wake_up_interruptible_nr();
2211	wake_up_interruptible_poll();
2212	wake_up_interruptible_sync();
2213	wake_up_interruptible_sync_poll();
2214	wake_up_locked();
2215	wake_up_locked_poll();
2216	wake_up_nr();
2217	wake_up_poll();
2218	wake_up_process();
2219
2220
2221[!] Note that the memory barriers implied by the sleeper and the waker do _not_
2222order multiple stores before the wake-up with respect to loads of those stored
2223values after the sleeper has called set_current_state().  For instance, if the
2224sleeper does:
2225
2226	set_current_state(TASK_INTERRUPTIBLE);
2227	if (event_indicated)
2228		break;
2229	__set_current_state(TASK_RUNNING);
2230	do_something(my_data);
2231
2232and the waker does:
2233
2234	my_data = value;
2235	event_indicated = 1;
2236	wake_up(&event_wait_queue);
2237
2238there's no guarantee that the change to event_indicated will be perceived by
2239the sleeper as coming after the change to my_data.  In such a circumstance, the
2240code on both sides must interpolate its own memory barriers between the
2241separate data accesses.  Thus the above sleeper ought to do:
2242
2243	set_current_state(TASK_INTERRUPTIBLE);
2244	if (event_indicated) {
2245		smp_rmb();
2246		do_something(my_data);
2247	}
2248
2249and the waker should do:
2250
2251	my_data = value;
2252	smp_wmb();
2253	event_indicated = 1;
2254	wake_up(&event_wait_queue);
2255
2256
2257MISCELLANEOUS FUNCTIONS
2258-----------------------
2259
2260Other functions that imply barriers:
2261
2262 (*) schedule() and similar imply full memory barriers.
2263
2264
2265===================================
2266INTER-CPU ACQUIRING BARRIER EFFECTS
2267===================================
2268
2269On SMP systems locking primitives give a more substantial form of barrier: one
2270that does affect memory access ordering on other CPUs, within the context of
2271conflict on any particular lock.
2272
2273
2274ACQUIRES VS MEMORY ACCESSES
2275---------------------------
2276
2277Consider the following: the system has a pair of spinlocks (M) and (Q), and
2278three CPUs; then should the following sequence of events occur:
2279
2280	CPU 1				CPU 2
2281	===============================	===============================
2282	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
2283	ACQUIRE M			ACQUIRE Q
2284	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
2285	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
2286	RELEASE M			RELEASE Q
2287	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);
2288
2289Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2290through *H occur in, other than the constraints imposed by the separate locks
2291on the separate CPUs.  It might, for example, see:
2292
2293	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2294
2295But it won't see any of:
2296
2297	*B, *C or *D preceding ACQUIRE M
2298	*A, *B or *C following RELEASE M
2299	*F, *G or *H preceding ACQUIRE Q
2300	*E, *F or *G following RELEASE Q
2301
2302
2303
2304ACQUIRES VS I/O ACCESSES
2305------------------------
2306
2307Under certain circumstances (especially involving NUMA), I/O accesses within
2308two spinlocked sections on two different CPUs may be seen as interleaved by the
2309PCI bridge, because the PCI bridge does not necessarily participate in the
2310cache-coherence protocol, and is therefore incapable of issuing the required
2311read memory barriers.
2312
2313For example:
2314
2315	CPU 1				CPU 2
2316	===============================	===============================
2317	spin_lock(Q)
2318	writel(0, ADDR)
2319	writel(1, DATA);
2320	spin_unlock(Q);
2321					spin_lock(Q);
2322					writel(4, ADDR);
2323					writel(5, DATA);
2324					spin_unlock(Q);
2325
2326may be seen by the PCI bridge as follows:
2327
2328	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2329
2330which would probably cause the hardware to malfunction.
2331
2332
2333What is necessary here is to intervene with an mmiowb() before dropping the
2334spinlock, for example:
2335
2336	CPU 1				CPU 2
2337	===============================	===============================
2338	spin_lock(Q)
2339	writel(0, ADDR)
2340	writel(1, DATA);
2341	mmiowb();
2342	spin_unlock(Q);
2343					spin_lock(Q);
2344					writel(4, ADDR);
2345					writel(5, DATA);
2346					mmiowb();
2347					spin_unlock(Q);
2348
2349this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2350before either of the stores issued on CPU 2.
2351
2352
2353Furthermore, following a store by a load from the same device obviates the need
2354for the mmiowb(), because the load forces the store to complete before the load
2355is performed:
2356
2357	CPU 1				CPU 2
2358	===============================	===============================
2359	spin_lock(Q)
2360	writel(0, ADDR)
2361	a = readl(DATA);
2362	spin_unlock(Q);
2363					spin_lock(Q);
2364					writel(4, ADDR);
2365					b = readl(DATA);
2366					spin_unlock(Q);
2367
2368
2369See Documentation/DocBook/deviceiobook.tmpl for more information.
2370
2371
2372=================================
2373WHERE ARE MEMORY BARRIERS NEEDED?
2374=================================
2375
2376Under normal operation, memory operation reordering is generally not going to
2377be a problem as a single-threaded linear piece of code will still appear to
2378work correctly, even if it's in an SMP kernel.  There are, however, four
2379circumstances in which reordering definitely _could_ be a problem:
2380
2381 (*) Interprocessor interaction.
2382
2383 (*) Atomic operations.
2384
2385 (*) Accessing devices.
2386
2387 (*) Interrupts.
2388
2389
2390INTERPROCESSOR INTERACTION
2391--------------------------
2392
2393When there's a system with more than one processor, more than one CPU in the
2394system may be working on the same data set at the same time.  This can cause
2395synchronisation problems, and the usual way of dealing with them is to use
2396locks.  Locks, however, are quite expensive, and so it may be preferable to
2397operate without the use of a lock if at all possible.  In such a case
2398operations that affect both CPUs may have to be carefully ordered to prevent
2399a malfunction.
2400
2401Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2402queued on the semaphore, by virtue of it having a piece of its stack linked to
2403the semaphore's list of waiting processes:
2404
2405	struct rw_semaphore {
2406		...
2407		spinlock_t lock;
2408		struct list_head waiters;
2409	};
2410
2411	struct rwsem_waiter {
2412		struct list_head list;
2413		struct task_struct *task;
2414	};
2415
2416To wake up a particular waiter, the up_read() or up_write() functions have to:
2417
2418 (1) read the next pointer from this waiter's record to know as to where the
2419     next waiter record is;
2420
2421 (2) read the pointer to the waiter's task structure;
2422
2423 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2424
2425 (4) call wake_up_process() on the task; and
2426
2427 (5) release the reference held on the waiter's task struct.
2428
2429In other words, it has to perform this sequence of events:
2430
2431	LOAD waiter->list.next;
2432	LOAD waiter->task;
2433	STORE waiter->task;
2434	CALL wakeup
2435	RELEASE task
2436
2437and if any of these steps occur out of order, then the whole thing may
2438malfunction.
2439
2440Once it has queued itself and dropped the semaphore lock, the waiter does not
2441get the lock again; it instead just waits for its task pointer to be cleared
2442before proceeding.  Since the record is on the waiter's stack, this means that
2443if the task pointer is cleared _before_ the next pointer in the list is read,
2444another CPU might start processing the waiter and might clobber the waiter's
2445stack before the up*() function has a chance to read the next pointer.
2446
2447Consider then what might happen to the above sequence of events:
2448
2449	CPU 1				CPU 2
2450	===============================	===============================
2451					down_xxx()
2452					Queue waiter
2453					Sleep
2454	up_yyy()
2455	LOAD waiter->task;
2456	STORE waiter->task;
2457					Woken up by other event
2458	<preempt>
2459					Resume processing
2460					down_xxx() returns
2461					call foo()
2462					foo() clobbers *waiter
2463	</preempt>
2464	LOAD waiter->list.next;
2465	--- OOPS ---
2466
2467This could be dealt with using the semaphore lock, but then the down_xxx()
2468function has to needlessly get the spinlock again after being woken up.
2469
2470The way to deal with this is to insert a general SMP memory barrier:
2471
2472	LOAD waiter->list.next;
2473	LOAD waiter->task;
2474	smp_mb();
2475	STORE waiter->task;
2476	CALL wakeup
2477	RELEASE task
2478
2479In this case, the barrier makes a guarantee that all memory accesses before the
2480barrier will appear to happen before all the memory accesses after the barrier
2481with respect to the other CPUs on the system.  It does _not_ guarantee that all
2482the memory accesses before the barrier will be complete by the time the barrier
2483instruction itself is complete.
2484
2485On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2486compiler barrier, thus making sure the compiler emits the instructions in the
2487right order without actually intervening in the CPU.  Since there's only one
2488CPU, that CPU's dependency ordering logic will take care of everything else.
2489
2490
2491ATOMIC OPERATIONS
2492-----------------
2493
2494Whilst they are technically interprocessor interaction considerations, atomic
2495operations are noted specially as some of them imply full memory barriers and
2496some don't, but they're very heavily relied on as a group throughout the
2497kernel.
2498
2499Any atomic operation that modifies some state in memory and returns information
2500about the state (old or new) implies an SMP-conditional general memory barrier
2501(smp_mb()) on each side of the actual operation (with the exception of
2502explicit lock operations, described later).  These include:
2503
2504	xchg();
2505	atomic_xchg();			atomic_long_xchg();
2506	atomic_inc_return();		atomic_long_inc_return();
2507	atomic_dec_return();		atomic_long_dec_return();
2508	atomic_add_return();		atomic_long_add_return();
2509	atomic_sub_return();		atomic_long_sub_return();
2510	atomic_inc_and_test();		atomic_long_inc_and_test();
2511	atomic_dec_and_test();		atomic_long_dec_and_test();
2512	atomic_sub_and_test();		atomic_long_sub_and_test();
2513	atomic_add_negative();		atomic_long_add_negative();
2514	test_and_set_bit();
2515	test_and_clear_bit();
2516	test_and_change_bit();
2517
2518	/* when succeeds */
2519	cmpxchg();
2520	atomic_cmpxchg();		atomic_long_cmpxchg();
2521	atomic_add_unless();		atomic_long_add_unless();
2522
2523These are used for such things as implementing ACQUIRE-class and RELEASE-class
2524operations and adjusting reference counters towards object destruction, and as
2525such the implicit memory barrier effects are necessary.
2526
2527
2528The following operations are potential problems as they do _not_ imply memory
2529barriers, but might be used for implementing such things as RELEASE-class
2530operations:
2531
2532	atomic_set();
2533	set_bit();
2534	clear_bit();
2535	change_bit();
2536
2537With these the appropriate explicit memory barrier should be used if necessary
2538(smp_mb__before_atomic() for instance).
2539
2540
2541The following also do _not_ imply memory barriers, and so may require explicit
2542memory barriers under some circumstances (smp_mb__before_atomic() for
2543instance):
2544
2545	atomic_add();
2546	atomic_sub();
2547	atomic_inc();
2548	atomic_dec();
2549
2550If they're used for statistics generation, then they probably don't need memory
2551barriers, unless there's a coupling between statistical data.
2552
2553If they're used for reference counting on an object to control its lifetime,
2554they probably don't need memory barriers because either the reference count
2555will be adjusted inside a locked section, or the caller will already hold
2556sufficient references to make the lock, and thus a memory barrier unnecessary.
2557
2558If they're used for constructing a lock of some description, then they probably
2559do need memory barriers as a lock primitive generally has to do things in a
2560specific order.
2561
2562Basically, each usage case has to be carefully considered as to whether memory
2563barriers are needed or not.
2564
2565The following operations are special locking primitives:
2566
2567	test_and_set_bit_lock();
2568	clear_bit_unlock();
2569	__clear_bit_unlock();
2570
2571These implement ACQUIRE-class and RELEASE-class operations.  These should be
2572used in preference to other operations when implementing locking primitives,
2573because their implementations can be optimised on many architectures.
2574
2575[!] Note that special memory barrier primitives are available for these
2576situations because on some CPUs the atomic instructions used imply full memory
2577barriers, and so barrier instructions are superfluous in conjunction with them,
2578and in such cases the special barrier primitives will be no-ops.
2579
2580See Documentation/atomic_ops.txt for more information.
2581
2582
2583ACCESSING DEVICES
2584-----------------
2585
2586Many devices can be memory mapped, and so appear to the CPU as if they're just
2587a set of memory locations.  To control such a device, the driver usually has to
2588make the right memory accesses in exactly the right order.
2589
2590However, having a clever CPU or a clever compiler creates a potential problem
2591in that the carefully sequenced accesses in the driver code won't reach the
2592device in the requisite order if the CPU or the compiler thinks it is more
2593efficient to reorder, combine or merge accesses - something that would cause
2594the device to malfunction.
2595
2596Inside of the Linux kernel, I/O should be done through the appropriate accessor
2597routines - such as inb() or writel() - which know how to make such accesses
2598appropriately sequential.  Whilst this, for the most part, renders the explicit
2599use of memory barriers unnecessary, there are a couple of situations where they
2600might be needed:
2601
2602 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2603     so for _all_ general drivers locks should be used and mmiowb() must be
2604     issued prior to unlocking the critical section.
2605
2606 (2) If the accessor functions are used to refer to an I/O memory window with
2607     relaxed memory access properties, then _mandatory_ memory barriers are
2608     required to enforce ordering.
2609
2610See Documentation/DocBook/deviceiobook.tmpl for more information.
2611
2612
2613INTERRUPTS
2614----------
2615
2616A driver may be interrupted by its own interrupt service routine, and thus the
2617two parts of the driver may interfere with each other's attempts to control or
2618access the device.
2619
2620This may be alleviated - at least in part - by disabling local interrupts (a
2621form of locking), such that the critical operations are all contained within
2622the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2623routine is executing, the driver's core may not run on the same CPU, and its
2624interrupt is not permitted to happen again until the current interrupt has been
2625handled, thus the interrupt handler does not need to lock against that.
2626
2627However, consider a driver that was talking to an ethernet card that sports an
2628address register and a data register.  If that driver's core talks to the card
2629under interrupt-disablement and then the driver's interrupt handler is invoked:
2630
2631	LOCAL IRQ DISABLE
2632	writew(ADDR, 3);
2633	writew(DATA, y);
2634	LOCAL IRQ ENABLE
2635	<interrupt>
2636	writew(ADDR, 4);
2637	q = readw(DATA);
2638	</interrupt>
2639
2640The store to the data register might happen after the second store to the
2641address register if ordering rules are sufficiently relaxed:
2642
2643	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2644
2645
2646If ordering rules are relaxed, it must be assumed that accesses done inside an
2647interrupt disabled section may leak outside of it and may interleave with
2648accesses performed in an interrupt - and vice versa - unless implicit or
2649explicit barriers are used.
2650
2651Normally this won't be a problem because the I/O accesses done inside such
2652sections will include synchronous load operations on strictly ordered I/O
2653registers that form implicit I/O barriers.  If this isn't sufficient then an
2654mmiowb() may need to be used explicitly.
2655
2656
2657A similar situation may occur between an interrupt routine and two routines
2658running on separate CPUs that communicate with each other.  If such a case is
2659likely, then interrupt-disabling locks should be used to guarantee ordering.
2660
2661
2662==========================
2663KERNEL I/O BARRIER EFFECTS
2664==========================
2665
2666When accessing I/O memory, drivers should use the appropriate accessor
2667functions:
2668
2669 (*) inX(), outX():
2670
2671     These are intended to talk to I/O space rather than memory space, but
2672     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
2673     do indeed have special I/O space access cycles and instructions, but many
2674     CPUs don't have such a concept.
2675
2676     The PCI bus, amongst others, defines an I/O space concept which - on such
2677     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2678     space.  However, it may also be mapped as a virtual I/O space in the CPU's
2679     memory map, particularly on those CPUs that don't support alternate I/O
2680     spaces.
2681
2682     Accesses to this space may be fully synchronous (as on i386), but
2683     intermediary bridges (such as the PCI host bridge) may not fully honour
2684     that.
2685
2686     They are guaranteed to be fully ordered with respect to each other.
2687
2688     They are not guaranteed to be fully ordered with respect to other types of
2689     memory and I/O operation.
2690
2691 (*) readX(), writeX():
2692
2693     Whether these are guaranteed to be fully ordered and uncombined with
2694     respect to each other on the issuing CPU depends on the characteristics
2695     defined for the memory window through which they're accessing.  On later
2696     i386 architecture machines, for example, this is controlled by way of the
2697     MTRR registers.
2698
2699     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2700     provided they're not accessing a prefetchable device.
2701
2702     However, intermediary hardware (such as a PCI bridge) may indulge in
2703     deferral if it so wishes; to flush a store, a load from the same location
2704     is preferred[*], but a load from the same device or from configuration
2705     space should suffice for PCI.
2706
2707     [*] NOTE! attempting to load from the same location as was written to may
2708	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
2709	 example.
2710
2711     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2712     force stores to be ordered.
2713
2714     Please refer to the PCI specification for more information on interactions
2715     between PCI transactions.
2716
2717 (*) readX_relaxed(), writeX_relaxed()
2718
2719     These are similar to readX() and writeX(), but provide weaker memory
2720     ordering guarantees.  Specifically, they do not guarantee ordering with
2721     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2722     ordering with respect to LOCK or UNLOCK operations.  If the latter is
2723     required, an mmiowb() barrier can be used.  Note that relaxed accesses to
2724     the same peripheral are guaranteed to be ordered with respect to each
2725     other.
2726
2727 (*) ioreadX(), iowriteX()
2728
2729     These will perform appropriately for the type of access they're actually
2730     doing, be it inX()/outX() or readX()/writeX().
2731
2732
2733========================================
2734ASSUMED MINIMUM EXECUTION ORDERING MODEL
2735========================================
2736
2737It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2738maintain the appearance of program causality with respect to itself.  Some CPUs
2739(such as i386 or x86_64) are more constrained than others (such as powerpc or
2740frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2741of arch-specific code.
2742
2743This means that it must be considered that the CPU will execute its instruction
2744stream in any order it feels like - or even in parallel - provided that if an
2745instruction in the stream depends on an earlier instruction, then that
2746earlier instruction must be sufficiently complete[*] before the later
2747instruction may proceed; in other words: provided that the appearance of
2748causality is maintained.
2749
2750 [*] Some instructions have more than one effect - such as changing the
2751     condition codes, changing registers or changing memory - and different
2752     instructions may depend on different effects.
2753
2754A CPU may also discard any instruction sequence that winds up having no
2755ultimate effect.  For example, if two adjacent instructions both load an
2756immediate value into the same register, the first may be discarded.
2757
2758
2759Similarly, it has to be assumed that compiler might reorder the instruction
2760stream in any way it sees fit, again provided the appearance of causality is
2761maintained.
2762
2763
2764============================
2765THE EFFECTS OF THE CPU CACHE
2766============================
2767
2768The way cached memory operations are perceived across the system is affected to
2769a certain extent by the caches that lie between CPUs and memory, and by the
2770memory coherence system that maintains the consistency of state in the system.
2771
2772As far as the way a CPU interacts with another part of the system through the
2773caches goes, the memory system has to include the CPU's caches, and memory
2774barriers for the most part act at the interface between the CPU and its cache
2775(memory barriers logically act on the dotted line in the following diagram):
2776
2777	    <--- CPU --->         :       <----------- Memory ----------->
2778	                          :
2779	+--------+    +--------+  :   +--------+    +-----------+
2780	|        |    |        |  :   |        |    |           |    +--------+
2781	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
2782	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2783	|        |    | Queue  |  :   |        |    |           |--->| Memory |
2784	|        |    |        |  :   |        |    |           |    |        |
2785	+--------+    +--------+  :   +--------+    |           |    |        |
2786	                          :                 | Cache     |    +--------+
2787	                          :                 | Coherency |
2788	                          :                 | Mechanism |    +--------+
2789	+--------+    +--------+  :   +--------+    |           |    |	      |
2790	|        |    |        |  :   |        |    |           |    |        |
2791	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
2792	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2793	|        |    | Queue  |  :   |        |    |           |    |        |
2794	|        |    |        |  :   |        |    |           |    +--------+
2795	+--------+    +--------+  :   +--------+    +-----------+
2796	                          :
2797	                          :
2798
2799Although any particular load or store may not actually appear outside of the
2800CPU that issued it since it may have been satisfied within the CPU's own cache,
2801it will still appear as if the full memory access had taken place as far as the
2802other CPUs are concerned since the cache coherency mechanisms will migrate the
2803cacheline over to the accessing CPU and propagate the effects upon conflict.
2804
2805The CPU core may execute instructions in any order it deems fit, provided the
2806expected program causality appears to be maintained.  Some of the instructions
2807generate load and store operations which then go into the queue of memory
2808accesses to be performed.  The core may place these in the queue in any order
2809it wishes, and continue execution until it is forced to wait for an instruction
2810to complete.
2811
2812What memory barriers are concerned with is controlling the order in which
2813accesses cross from the CPU side of things to the memory side of things, and
2814the order in which the effects are perceived to happen by the other observers
2815in the system.
2816
2817[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2818their own loads and stores as if they had happened in program order.
2819
2820[!] MMIO or other device accesses may bypass the cache system.  This depends on
2821the properties of the memory window through which devices are accessed and/or
2822the use of any special device communication instructions the CPU may have.
2823
2824
2825CACHE COHERENCY
2826---------------
2827
2828Life isn't quite as simple as it may appear above, however: for while the
2829caches are expected to be coherent, there's no guarantee that that coherency
2830will be ordered.  This means that whilst changes made on one CPU will
2831eventually become visible on all CPUs, there's no guarantee that they will
2832become apparent in the same order on those other CPUs.
2833
2834
2835Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2836has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2837
2838	            :
2839	            :                          +--------+
2840	            :      +---------+         |        |
2841	+--------+  : +--->| Cache A |<------->|        |
2842	|        |  : |    +---------+         |        |
2843	|  CPU 1 |<---+                        |        |
2844	|        |  : |    +---------+         |        |
2845	+--------+  : +--->| Cache B |<------->|        |
2846	            :      +---------+         |        |
2847	            :                          | Memory |
2848	            :      +---------+         | System |
2849	+--------+  : +--->| Cache C |<------->|        |
2850	|        |  : |    +---------+         |        |
2851	|  CPU 2 |<---+                        |        |
2852	|        |  : |    +---------+         |        |
2853	+--------+  : +--->| Cache D |<------->|        |
2854	            :      +---------+         |        |
2855	            :                          +--------+
2856	            :
2857
2858Imagine the system has the following properties:
2859
2860 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2861     resident in memory;
2862
2863 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2864     resident in memory;
2865
2866 (*) whilst the CPU core is interrogating one cache, the other cache may be
2867     making use of the bus to access the rest of the system - perhaps to
2868     displace a dirty cacheline or to do a speculative load;
2869
2870 (*) each cache has a queue of operations that need to be applied to that cache
2871     to maintain coherency with the rest of the system;
2872
2873 (*) the coherency queue is not flushed by normal loads to lines already
2874     present in the cache, even though the contents of the queue may
2875     potentially affect those loads.
2876
2877Imagine, then, that two writes are made on the first CPU, with a write barrier
2878between them to guarantee that they will appear to reach that CPU's caches in
2879the requisite order:
2880
2881	CPU 1		CPU 2		COMMENT
2882	===============	===============	=======================================
2883					u == 0, v == 1 and p == &u, q == &u
2884	v = 2;
2885	smp_wmb();			Make sure change to v is visible before
2886					 change to p
2887	<A:modify v=2>			v is now in cache A exclusively
2888	p = &v;
2889	<B:modify p=&v>			p is now in cache B exclusively
2890
2891The write memory barrier forces the other CPUs in the system to perceive that
2892the local CPU's caches have apparently been updated in the correct order.  But
2893now imagine that the second CPU wants to read those values:
2894
2895	CPU 1		CPU 2		COMMENT
2896	===============	===============	=======================================
2897	...
2898			q = p;
2899			x = *q;
2900
2901The above pair of reads may then fail to happen in the expected order, as the
2902cacheline holding p may get updated in one of the second CPU's caches whilst
2903the update to the cacheline holding v is delayed in the other of the second
2904CPU's caches by some other cache event:
2905
2906	CPU 1		CPU 2		COMMENT
2907	===============	===============	=======================================
2908					u == 0, v == 1 and p == &u, q == &u
2909	v = 2;
2910	smp_wmb();
2911	<A:modify v=2>	<C:busy>
2912			<C:queue v=2>
2913	p = &v;		q = p;
2914			<D:request p>
2915	<B:modify p=&v>	<D:commit p=&v>
2916			<D:read p>
2917			x = *q;
2918			<C:read *q>	Reads from v before v updated in cache
2919			<C:unbusy>
2920			<C:commit v=2>
2921
2922Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2923no guarantee that, without intervention, the order of update will be the same
2924as that committed on CPU 1.
2925
2926
2927To intervene, we need to interpolate a data dependency barrier or a read
2928barrier between the loads.  This will force the cache to commit its coherency
2929queue before processing any further requests:
2930
2931	CPU 1		CPU 2		COMMENT
2932	===============	===============	=======================================
2933					u == 0, v == 1 and p == &u, q == &u
2934	v = 2;
2935	smp_wmb();
2936	<A:modify v=2>	<C:busy>
2937			<C:queue v=2>
2938	p = &v;		q = p;
2939			<D:request p>
2940	<B:modify p=&v>	<D:commit p=&v>
2941			<D:read p>
2942			smp_read_barrier_depends()
2943			<C:unbusy>
2944			<C:commit v=2>
2945			x = *q;
2946			<C:read *q>	Reads from v after v updated in cache
2947
2948
2949This sort of problem can be encountered on DEC Alpha processors as they have a
2950split cache that improves performance by making better use of the data bus.
2951Whilst most CPUs do imply a data dependency barrier on the read when a memory
2952access depends on a read, not all do, so it may not be relied on.
2953
2954Other CPUs may also have split caches, but must coordinate between the various
2955cachelets for normal memory accesses.  The semantics of the Alpha removes the
2956need for coordination in the absence of memory barriers.
2957
2958
2959CACHE COHERENCY VS DMA
2960----------------------
2961
2962Not all systems maintain cache coherency with respect to devices doing DMA.  In
2963such cases, a device attempting DMA may obtain stale data from RAM because
2964dirty cache lines may be resident in the caches of various CPUs, and may not
2965have been written back to RAM yet.  To deal with this, the appropriate part of
2966the kernel must flush the overlapping bits of cache on each CPU (and maybe
2967invalidate them as well).
2968
2969In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2970cache lines being written back to RAM from a CPU's cache after the device has
2971installed its own data, or cache lines present in the CPU's cache may simply
2972obscure the fact that RAM has been updated, until at such time as the cacheline
2973is discarded from the CPU's cache and reloaded.  To deal with this, the
2974appropriate part of the kernel must invalidate the overlapping bits of the
2975cache on each CPU.
2976
2977See Documentation/cachetlb.txt for more information on cache management.
2978
2979
2980CACHE COHERENCY VS MMIO
2981-----------------------
2982
2983Memory mapped I/O usually takes place through memory locations that are part of
2984a window in the CPU's memory space that has different properties assigned than
2985the usual RAM directed window.
2986
2987Amongst these properties is usually the fact that such accesses bypass the
2988caching entirely and go directly to the device buses.  This means MMIO accesses
2989may, in effect, overtake accesses to cached memory that were emitted earlier.
2990A memory barrier isn't sufficient in such a case, but rather the cache must be
2991flushed between the cached memory write and the MMIO access if the two are in
2992any way dependent.
2993
2994
2995=========================
2996THE THINGS CPUS GET UP TO
2997=========================
2998
2999A programmer might take it for granted that the CPU will perform memory
3000operations in exactly the order specified, so that if the CPU is, for example,
3001given the following piece of code to execute:
3002
3003	a = READ_ONCE(*A);
3004	WRITE_ONCE(*B, b);
3005	c = READ_ONCE(*C);
3006	d = READ_ONCE(*D);
3007	WRITE_ONCE(*E, e);
3008
3009they would then expect that the CPU will complete the memory operation for each
3010instruction before moving on to the next one, leading to a definite sequence of
3011operations as seen by external observers in the system:
3012
3013	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
3014
3015
3016Reality is, of course, much messier.  With many CPUs and compilers, the above
3017assumption doesn't hold because:
3018
3019 (*) loads are more likely to need to be completed immediately to permit
3020     execution progress, whereas stores can often be deferred without a
3021     problem;
3022
3023 (*) loads may be done speculatively, and the result discarded should it prove
3024     to have been unnecessary;
3025
3026 (*) loads may be done speculatively, leading to the result having been fetched
3027     at the wrong time in the expected sequence of events;
3028
3029 (*) the order of the memory accesses may be rearranged to promote better use
3030     of the CPU buses and caches;
3031
3032 (*) loads and stores may be combined to improve performance when talking to
3033     memory or I/O hardware that can do batched accesses of adjacent locations,
3034     thus cutting down on transaction setup costs (memory and PCI devices may
3035     both be able to do this); and
3036
3037 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
3038     mechanisms may alleviate this - once the store has actually hit the cache
3039     - there's no guarantee that the coherency management will be propagated in
3040     order to other CPUs.
3041
3042So what another CPU, say, might actually observe from the above piece of code
3043is:
3044
3045	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
3046
3047	(Where "LOAD {*C,*D}" is a combined load)
3048
3049
3050However, it is guaranteed that a CPU will be self-consistent: it will see its
3051_own_ accesses appear to be correctly ordered, without the need for a memory
3052barrier.  For instance with the following code:
3053
3054	U = READ_ONCE(*A);
3055	WRITE_ONCE(*A, V);
3056	WRITE_ONCE(*A, W);
3057	X = READ_ONCE(*A);
3058	WRITE_ONCE(*A, Y);
3059	Z = READ_ONCE(*A);
3060
3061and assuming no intervention by an external influence, it can be assumed that
3062the final result will appear to be:
3063
3064	U == the original value of *A
3065	X == W
3066	Z == Y
3067	*A == Y
3068
3069The code above may cause the CPU to generate the full sequence of memory
3070accesses:
3071
3072	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
3073
3074in that order, but, without intervention, the sequence may have almost any
3075combination of elements combined or discarded, provided the program's view
3076of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
3077are -not- optional in the above example, as there are architectures
3078where a given CPU might reorder successive loads to the same location.
3079On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
3080necessary to prevent this, for example, on Itanium the volatile casts
3081used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
3082and st.rel instructions (respectively) that prevent such reordering.
3083
3084The compiler may also combine, discard or defer elements of the sequence before
3085the CPU even sees them.
3086
3087For instance:
3088
3089	*A = V;
3090	*A = W;
3091
3092may be reduced to:
3093
3094	*A = W;
3095
3096since, without either a write barrier or an WRITE_ONCE(), it can be
3097assumed that the effect of the storage of V to *A is lost.  Similarly:
3098
3099	*A = Y;
3100	Z = *A;
3101
3102may, without a memory barrier or an READ_ONCE() and WRITE_ONCE(), be
3103reduced to:
3104
3105	*A = Y;
3106	Z = Y;
3107
3108and the LOAD operation never appear outside of the CPU.
3109
3110
3111AND THEN THERE'S THE ALPHA
3112--------------------------
3113
3114The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
3115some versions of the Alpha CPU have a split data cache, permitting them to have
3116two semantically-related cache lines updated at separate times.  This is where
3117the data dependency barrier really becomes necessary as this synchronises both
3118caches with the memory coherence system, thus making it seem like pointer
3119changes vs new data occur in the right order.
3120
3121The Alpha defines the Linux kernel's memory barrier model.
3122
3123See the subsection on "Cache Coherency" above.
3124
3125
3126VIRTUAL MACHINE GUESTS
3127----------------------
3128
3129Guests running within virtual machines might be affected by SMP effects even if
3130the guest itself is compiled without SMP support.  This is an artifact of
3131interfacing with an SMP host while running an UP kernel.  Using mandatory
3132barriers for this use-case would be possible but is often suboptimal.
3133
3134To handle this case optimally, low-level virt_mb() etc macros are available.
3135These have the same effect as smp_mb() etc when SMP is enabled, but generate
3136identical code for SMP and non-SMP systems.  For example, virtual machine guests
3137should use virt_mb() rather than smp_mb() when synchronizing against a
3138(possibly SMP) host.
3139
3140These are equivalent to smp_mb() etc counterparts in all other respects,
3141in particular, they do not control MMIO effects: to control
3142MMIO effects, use mandatory barriers.
3143
3144
3145============
3146EXAMPLE USES
3147============
3148
3149CIRCULAR BUFFERS
3150----------------
3151
3152Memory barriers can be used to implement circular buffering without the need
3153of a lock to serialise the producer with the consumer.  See:
3154
3155	Documentation/circular-buffers.txt
3156
3157for details.
3158
3159
3160==========
3161REFERENCES
3162==========
3163
3164Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
3165Digital Press)
3166	Chapter 5.2: Physical Address Space Characteristics
3167	Chapter 5.4: Caches and Write Buffers
3168	Chapter 5.5: Data Sharing
3169	Chapter 5.6: Read/Write Ordering
3170
3171AMD64 Architecture Programmer's Manual Volume 2: System Programming
3172	Chapter 7.1: Memory-Access Ordering
3173	Chapter 7.4: Buffering and Combining Memory Writes
3174
3175IA-32 Intel Architecture Software Developer's Manual, Volume 3:
3176System Programming Guide
3177	Chapter 7.1: Locked Atomic Operations
3178	Chapter 7.2: Memory Ordering
3179	Chapter 7.4: Serializing Instructions
3180
3181The SPARC Architecture Manual, Version 9
3182	Chapter 8: Memory Models
3183	Appendix D: Formal Specification of the Memory Models
3184	Appendix J: Programming with the Memory Models
3185
3186UltraSPARC Programmer Reference Manual
3187	Chapter 5: Memory Accesses and Cacheability
3188	Chapter 15: Sparc-V9 Memory Models
3189
3190UltraSPARC III Cu User's Manual
3191	Chapter 9: Memory Models
3192
3193UltraSPARC IIIi Processor User's Manual
3194	Chapter 8: Memory Models
3195
3196UltraSPARC Architecture 2005
3197	Chapter 9: Memory
3198	Appendix D: Formal Specifications of the Memory Models
3199
3200UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
3201	Chapter 8: Memory Models
3202	Appendix F: Caches and Cache Coherency
3203
3204Solaris Internals, Core Kernel Architecture, p63-68:
3205	Chapter 3.3: Hardware Considerations for Locks and
3206			Synchronization
3207
3208Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
3209for Kernel Programmers:
3210	Chapter 13: Other Memory Models
3211
3212Intel Itanium Architecture Software Developer's Manual: Volume 1:
3213	Section 2.6: Speculation
3214	Section 4.4: Memory Access
3215