1			 ============================
2			 LINUX KERNEL MEMORY BARRIERS
3			 ============================
4
5By: David Howells <dhowells@redhat.com>
6    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7    Will Deacon <will.deacon@arm.com>
8    Peter Zijlstra <peterz@infradead.org>
9
10==========
11DISCLAIMER
12==========
13
14This document is not a specification; it is intentionally (for the sake of
15brevity) and unintentionally (due to being human) incomplete. This document is
16meant as a guide to using the various memory barriers provided by Linux, but
17in case of any doubt (and there are many) please ask.
18
19To repeat, this document is not a specification of what Linux expects from
20hardware.
21
22The purpose of this document is twofold:
23
24 (1) to specify the minimum functionality that one can rely on for any
25     particular barrier, and
26
27 (2) to provide a guide as to how to use the barriers that are available.
28
29Note that an architecture can provide more than the minimum requirement
for any particular barrier, but if the architecture provides less than
31that, that architecture is incorrect.
32
Note also that a barrier may be a no-op for an architecture because the
way that arch works renders an explicit barrier unnecessary in that
case.
36
37
38========
39CONTENTS
40========
41
42 (*) Abstract memory access model.
43
44     - Device operations.
45     - Guarantees.
46
47 (*) What are memory barriers?
48
49     - Varieties of memory barrier.
50     - What may not be assumed about memory barriers?
51     - Data dependency barriers.
52     - Control dependencies.
53     - SMP barrier pairing.
54     - Examples of memory barrier sequences.
55     - Read memory barriers vs load speculation.
     - Transitivity.
57
58 (*) Explicit kernel barriers.
59
60     - Compiler barrier.
61     - CPU memory barriers.
62     - MMIO write barrier.
63
64 (*) Implicit kernel memory barriers.
65
66     - Lock acquisition functions.
67     - Interrupt disabling functions.
68     - Sleep and wake-up functions.
69     - Miscellaneous functions.
70
71 (*) Inter-CPU acquiring barrier effects.
72
73     - Acquires vs memory accesses.
74     - Acquires vs I/O accesses.
75
76 (*) Where are memory barriers needed?
77
78     - Interprocessor interaction.
79     - Atomic operations.
80     - Accessing devices.
81     - Interrupts.
82
83 (*) Kernel I/O barrier effects.
84
85 (*) Assumed minimum execution ordering model.
86
 (*) The effects of the CPU cache.
88
89     - Cache coherency.
90     - Cache coherency vs DMA.
91     - Cache coherency vs MMIO.
92
93 (*) The things CPUs get up to.
94
95     - And then there's the Alpha.
96     - Virtual Machine Guests.
97
98 (*) Example uses.
99
100     - Circular buffers.
101
102 (*) References.
103
104
105============================
106ABSTRACT MEMORY ACCESS MODEL
107============================
108
109Consider the following abstract model of the system:
110
111		            :                :
112		            :                :
113		            :                :
114		+-------+   :   +--------+   :   +-------+
115		|       |   :   |        |   :   |       |
116		|       |   :   |        |   :   |       |
117		| CPU 1 |<----->| Memory |<----->| CPU 2 |
118		|       |   :   |        |   :   |       |
119		|       |   :   |        |   :   |       |
120		+-------+   :   +--------+   :   +-------+
121		    ^       :       ^        :       ^
122		    |       :       |        :       |
123		    |       :       |        :       |
124		    |       :       v        :       |
125		    |       :   +--------+   :       |
126		    |       :   |        |   :       |
127		    |       :   |        |   :       |
128		    +---------->| Device |<----------+
129		            :   |        |   :
130		            :   |        |   :
131		            :   +--------+   :
132		            :                :
133
134Each CPU executes a program that generates memory access operations.  In the
135abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
136perform the memory operations in any order it likes, provided program causality
137appears to be maintained.  Similarly, the compiler may also arrange the
138instructions it emits in any order it likes, provided it doesn't affect the
139apparent operation of the program.
140
141So in the above diagram, the effects of the memory operations performed by a
142CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and the rest of the system (the dotted lines).
144
145
146For example, consider the following sequence of events:
147
148	CPU 1		CPU 2
149	===============	===============
	{ A == 1, B == 2 }
151	A = 3;		x = B;
152	B = 4;		y = A;
153
154The set of accesses as seen by the memory system in the middle can be arranged
155in 24 different combinations:
156
157	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
158	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
159	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
160	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
161	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
162	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
163	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
164	STORE B=4, ...
165	...
166
167and can thus result in four different combinations of values:
168
169	x == 2, y == 1
170	x == 2, y == 3
171	x == 4, y == 1
172	x == 4, y == 3
173
174
175Furthermore, the stores committed by a CPU to the memory system may not be
176perceived by the loads made by another CPU in the same order as the stores were
177committed.
178
179
180As a further example, consider this sequence of events:
181
182	CPU 1		CPU 2
183	===============	===============
184	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
185	B = 4;		Q = P;
	P = &B;		D = *Q;
187
188There is an obvious data dependency here, as the value loaded into D depends on
189the address retrieved from P by CPU 2.  At the end of the sequence, any of the
190following results are possible:
191
192	(Q == &A) and (D == 1)
193	(Q == &B) and (D == 2)
194	(Q == &B) and (D == 4)
195
Note that CPU 2 will never try to load C into D because the CPU will load P
197into Q before issuing the load of *Q.
198
199
200DEVICE OPERATIONS
201-----------------
202
203Some devices present their control interfaces as collections of memory
204locations, but the order in which the control registers are accessed is very
205important.  For instance, imagine an ethernet card with a set of internal
206registers that are accessed through an address port register (A) and a data
207port register (D).  To read internal register 5, the following code might then
208be used:
209
210	*A = 5;
211	x = *D;
212
213but this might show up as either of the following two sequences:
214
215	STORE *A = 5, x = LOAD *D
216	x = LOAD *D, STORE *A = 5
217
the second of which will almost certainly result in a malfunction, since
it sets the address _after_ attempting to read the register.
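
In the Linux kernel, such device registers would normally be mapped with
ioremap() and accessed through the MMIO accessor functions rather than
by plain pointer dereferences.  As a minimal sketch (the register
offsets and variable names here are invented for illustration):

	void __iomem *regs;			/* assumed set up by ioremap() */

	writel(5, regs + ADDR_PORT_OFFSET);	/* set the address port */
	x = readl(regs + DATA_PORT_OFFSET);	/* read the data port */

On most platforms, readl() and writel() to the same device are kept in
program order on the issuing CPU, unlike the plain dereferences above;
see the "Kernel I/O barrier effects" section for the precise guarantees.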
220
221
222GUARANTEES
223----------
224
225There are some minimal guarantees that may be expected of a CPU:
226
227 (*) On any given CPU, dependent memory accesses will be issued in order, with
228     respect to itself.  This means that for:
229
230	Q = READ_ONCE(P); smp_read_barrier_depends(); D = READ_ONCE(*Q);
231
232     the CPU will issue the following memory operations:
233
234	Q = LOAD P, D = LOAD *Q
235
236     and always in that order.  On most systems, smp_read_barrier_depends()
237     does nothing, but it is required for DEC Alpha.  The READ_ONCE()
238     is required to prevent compiler mischief.  Please note that you
239     should normally use something like rcu_dereference() instead of
240     open-coding smp_read_barrier_depends().
241
242 (*) Overlapping loads and stores within a particular CPU will appear to be
243     ordered within that CPU.  This means that for:
244
245	a = READ_ONCE(*X); WRITE_ONCE(*X, b);
246
247     the CPU will only issue the following sequence of memory operations:
248
249	a = LOAD *X, STORE *X = b
250
251     And for:
252
253	WRITE_ONCE(*X, c); d = READ_ONCE(*X);
254
255     the CPU will only issue:
256
257	STORE *X = c, d = LOAD *X
258
259     (Loads and stores overlap if they are targeted at overlapping pieces of
260     memory).
261
262And there are a number of things that _must_ or _must_not_ be assumed:
263
264 (*) It _must_not_ be assumed that the compiler will do what you want
265     with memory references that are not protected by READ_ONCE() and
266     WRITE_ONCE().  Without them, the compiler is within its rights to
267     do all sorts of "creative" transformations, which are covered in
268     the COMPILER BARRIER section.
269
270 (*) It _must_not_ be assumed that independent loads and stores will be issued
271     in the order given.  This means that for:
272
273	X = *A; Y = *B; *D = Z;
274
275     we may get any of the following sequences:
276
277	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
278	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
279	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
280	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
281	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
282	STORE *D = Z, Y = LOAD *B,  X = LOAD *A
283
284 (*) It _must_ be assumed that overlapping memory accesses may be merged or
285     discarded.  This means that for:
286
287	X = *A; Y = *(A + 4);
288
289     we may get any one of the following sequences:
290
291	X = LOAD *A; Y = LOAD *(A + 4);
292	Y = LOAD *(A + 4); X = LOAD *A;
293	{X, Y} = LOAD {*A, *(A + 4) };
294
295     And for:
296
297	*A = X; *(A + 4) = Y;
298
299     we may get any of:
300
301	STORE *A = X; STORE *(A + 4) = Y;
302	STORE *(A + 4) = Y; STORE *A = X;
303	STORE {*A, *(A + 4) } = {X, Y};
304
305And there are anti-guarantees:
306
307 (*) These guarantees do not apply to bitfields, because compilers often
308     generate code to modify these using non-atomic read-modify-write
309     sequences.  Do not attempt to use bitfields to synchronize parallel
310     algorithms.
311
 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field (see the sketch
     following this list).
317
318 (*) These guarantees apply only to properly aligned and sized scalar
319     variables.  "Properly sized" currently means variables that are
320     the same size as "char", "short", "int" and "long".  "Properly
321     aligned" means the natural alignment, thus no constraints for
322     "char", two-byte alignment for "short", four-byte alignment for
323     "int", and either four-byte or eight-byte alignment for "long",
324     on 32-bit and 64-bit systems, respectively.  Note that these
325     guarantees were introduced into the C11 standard, so beware when
326     using older pre-C11 compilers (for example, gcc 4.6).  The portion
327     of the standard containing this guarantee is Section 3.14, which
328     defines "memory location" as follows:
329
330     	memory location
331		either an object of scalar type, or a maximal sequence
332		of adjacent bit-fields all having nonzero width
333
334		NOTE 1: Two threads of execution can update and access
335		separate memory locations without interfering with
336		each other.
337
338		NOTE 2: A bit-field and an adjacent non-bit-field member
339		are in separate memory locations. The same applies
340		to two bit-fields, if one is declared inside a nested
341		structure declaration and the other is not, or if the two
342		are separated by a zero-length bit-field declaration,
343		or if they are separated by a non-bit-field member
344		declaration. It is not safe to concurrently update two
345		bit-fields in the same structure if all members declared
346		between them are also bit-fields, no matter what the
347		sizes of those intervening bit-fields happen to be.
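
As a sketch of the bitfield hazard mentioned above (the structure, field
and lock names here are invented), consider two fields sharing one
memory location but nominally protected by different locks:

	struct foo {
		spinlock_t lock_a;	/* nominally protects a */
		spinlock_t lock_b;	/* nominally protects b */
		int a:4;
		int b:4;		/* shares a memory location with a */
	} f;

	/* CPU 1, holding f.lock_a */	/* CPU 2, holding f.lock_b */
	f.a = 1;			f.b = 1;

Because 'a' and 'b' occupy the same memory location, each update is
compiled as a non-atomic read-modify-write of that location, so one
CPU's store can overwrite the other's update despite the locking.  Both
fields must instead be protected by the same lock.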
348
349
350=========================
351WHAT ARE MEMORY BARRIERS?
352=========================
353
354As can be seen above, independent memory operations are effectively performed
355in random order, but this can be a problem for CPU-CPU interaction and for I/O.
356What is required is some way of intervening to instruct the compiler and the
357CPU to restrict the order.
358
359Memory barriers are such interventions.  They impose a perceived partial
360ordering over the memory operations on either side of the barrier.
361
362Such enforcement is important because the CPUs and other devices in a system
363can use a variety of tricks to improve performance, including reordering,
364deferral and combination of memory operations; speculative loads; speculative
365branch prediction and various types of caching.  Memory barriers are used to
366override or suppress these tricks, allowing the code to sanely control the
367interaction of multiple CPUs and/or devices.
368
369
370VARIETIES OF MEMORY BARRIER
371---------------------------
372
373Memory barriers come in four basic varieties:
374
375 (1) Write (or store) memory barriers.
376
377     A write memory barrier gives a guarantee that all the STORE operations
378     specified before the barrier will appear to happen before all the STORE
379     operations specified after the barrier with respect to the other
380     components of the system.
381
382     A write barrier is a partial ordering on stores only; it is not required
383     to have any effect on loads.
384
385     A CPU can be viewed as committing a sequence of store operations to the
386     memory system as time progresses.  All stores before a write barrier will
387     occur in the sequence _before_ all the stores after the write barrier.
388
389     [!] Note that write barriers should normally be paired with read or data
390     dependency barriers; see the "SMP barrier pairing" subsection.
391
392
393 (2) Data dependency barriers.
394
395     A data dependency barrier is a weaker form of read barrier.  In the case
396     where two loads are performed such that the second depends on the result
397     of the first (eg: the first load retrieves the address to which the second
398     load will be directed), a data dependency barrier would be required to
399     make sure that the target of the second load is updated before the address
400     obtained by the first load is accessed.
401
402     A data dependency barrier is a partial ordering on interdependent loads
403     only; it is not required to have any effect on stores, independent loads
404     or overlapping loads.
405
406     As mentioned in (1), the other CPUs in the system can be viewed as
407     committing sequences of stores to the memory system that the CPU being
408     considered can then perceive.  A data dependency barrier issued by the CPU
409     under consideration guarantees that for any load preceding it, if that
410     load touches one of a sequence of stores from another CPU, then by the
411     time the barrier completes, the effects of all the stores prior to that
412     touched by the load will be perceptible to any loads issued after the data
413     dependency barrier.
414
415     See the "Examples of memory barrier sequences" subsection for diagrams
416     showing the ordering constraints.
417
418     [!] Note that the first load really has to have a _data_ dependency and
419     not a control dependency.  If the address for the second load is dependent
420     on the first load, but the dependency is through a conditional rather than
421     actually loading the address itself, then it's a _control_ dependency and
422     a full read barrier or better is required.  See the "Control dependencies"
423     subsection for more information.
424
425     [!] Note that data dependency barriers should normally be paired with
426     write barriers; see the "SMP barrier pairing" subsection.
427
428
429 (3) Read (or load) memory barriers.
430
431     A read barrier is a data dependency barrier plus a guarantee that all the
432     LOAD operations specified before the barrier will appear to happen before
433     all the LOAD operations specified after the barrier with respect to the
434     other components of the system.
435
436     A read barrier is a partial ordering on loads only; it is not required to
437     have any effect on stores.
438
439     Read memory barriers imply data dependency barriers, and so can substitute
440     for them.
441
442     [!] Note that read barriers should normally be paired with write barriers;
443     see the "SMP barrier pairing" subsection.
444
445
446 (4) General memory barriers.
447
448     A general memory barrier gives a guarantee that all the LOAD and STORE
449     operations specified before the barrier will appear to happen before all
450     the LOAD and STORE operations specified after the barrier with respect to
451     the other components of the system.
452
453     A general memory barrier is a partial ordering over both loads and stores.
454
455     General memory barriers imply both read and write memory barriers, and so
456     can substitute for either.
457
458
459And a couple of implicit varieties:
460
461 (5) ACQUIRE operations.
462
463     This acts as a one-way permeable barrier.  It guarantees that all memory
464     operations after the ACQUIRE operation will appear to happen after the
465     ACQUIRE operation with respect to the other components of the system.
466     ACQUIRE operations include LOCK operations and both smp_load_acquire()
     and smp_cond_acquire() operations.  The latter builds the necessary
     ACQUIRE semantics by relying on a control dependency and smp_rmb().
469
470     Memory operations that occur before an ACQUIRE operation may appear to
471     happen after it completes.
472
473     An ACQUIRE operation should almost always be paired with a RELEASE
474     operation.
475
476
477 (6) RELEASE operations.
478
479     This also acts as a one-way permeable barrier.  It guarantees that all
480     memory operations before the RELEASE operation will appear to happen
481     before the RELEASE operation with respect to the other components of the
482     system. RELEASE operations include UNLOCK operations and
483     smp_store_release() operations.
484
485     Memory operations that occur after a RELEASE operation may appear to
486     happen before it completes.
487
488     The use of ACQUIRE and RELEASE operations generally precludes the need
489     for other sorts of memory barrier (but note the exceptions mentioned in
490     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
491     pair is -not- guaranteed to act as a full memory barrier.  However, after
492     an ACQUIRE on a given variable, all memory accesses preceding any prior
493     RELEASE on that same variable are guaranteed to be visible.  In other
494     words, within a given variable's critical section, all accesses of all
495     previous critical sections for that variable are guaranteed to have
496     completed.
497
498     This means that ACQUIRE acts as a minimal "acquire" operation and
499     RELEASE acts as a minimal "release" operation.
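
As an illustrative sketch (the variable and function names here are
invented), the classic message-passing pattern built from these two
operations looks like this:

	int data;
	int flag;

	void producer(void)
	{
		data = 42;			/* ordinary store */
		smp_store_release(&flag, 1);	/* cannot be perceived
						 * before the store to
						 * data */
	}

	void consumer(void)
	{
		if (smp_load_acquire(&flag))	/* later accesses cannot be
						 * perceived before this
						 * load */
			BUG_ON(data != 42);	/* guaranteed to see 42 */
	}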
500
501A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
502and RELEASE variants in addition to fully-ordered and relaxed (no barrier
503semantics) definitions.  For compound atomics performing both a load and a
504store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
505only to the store portion of the operation.
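
For example, in this sketch ('v', 'new', 'old' and 'a' are invented
names), the ACQUIRE in atomic_xchg_acquire() constrains only the load
half of the exchange:

	old = atomic_xchg_acquire(&v, new);	/* ACQUIRE applies to the
						 * load from v... */
	r1 = READ_ONCE(a);			/* ...so this load cannot be
						 * reordered before it, but
						 * the store of new into v
						 * is not so constrained. */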
506
507Memory barriers are only required where there's a possibility of interaction
508between two CPUs or between a CPU and a device.  If it can be guaranteed that
509there won't be any such interaction in any particular piece of code, then
510memory barriers are unnecessary in that piece of code.
511
512
513Note that these are the _minimum_ guarantees.  Different architectures may give
514more substantial guarantees, but they may _not_ be relied upon outside of arch
515specific code.
516
517
518WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
519----------------------------------------------
520
521There are certain things that the Linux kernel memory barriers do not guarantee:
522
523 (*) There is no guarantee that any of the memory accesses specified before a
524     memory barrier will be _complete_ by the completion of a memory barrier
525     instruction; the barrier can be considered to draw a line in that CPU's
526     access queue that accesses of the appropriate type may not cross.
527
528 (*) There is no guarantee that issuing a memory barrier on one CPU will have
529     any direct effect on another CPU or any other hardware in the system.  The
530     indirect effect will be the order in which the second CPU sees the effects
531     of the first CPU's accesses occur, but see the next point:
532
533 (*) There is no guarantee that a CPU will see the correct order of effects
534     from a second CPU's accesses, even _if_ the second CPU uses a memory
535     barrier, unless the first CPU _also_ uses a matching memory barrier (see
536     the subsection on "SMP Barrier Pairing").
537
538 (*) There is no guarantee that some intervening piece of off-the-CPU
539     hardware[*] will not reorder the memory accesses.  CPU cache coherency
540     mechanisms should propagate the indirect effects of a memory barrier
541     between CPUs, but might not do so in order.
542
543	[*] For information on bus mastering DMA and coherency please read:
544
545	    Documentation/PCI/pci.txt
546	    Documentation/DMA-API-HOWTO.txt
547	    Documentation/DMA-API.txt
548
549
550DATA DEPENDENCY BARRIERS
551------------------------
552
553The usage requirements of data dependency barriers are a little subtle, and
554it's not always obvious that they're needed.  To illustrate, consider the
555following sequence of events:
556
557	CPU 1		      CPU 2
558	===============	      ===============
559	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
560	B = 4;
561	<write barrier>
	WRITE_ONCE(P, &B);
563			      Q = READ_ONCE(P);
564			      D = *Q;
565
566There's a clear data dependency here, and it would seem that by the end of the
567sequence, Q must be either &A or &B, and that:
568
569	(Q == &A) implies (D == 1)
570	(Q == &B) implies (D == 4)
571
572But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
573leading to the following situation:
574
575	(Q == &B) and (D == 2) ????
576
577Whilst this may seem like a failure of coherency or causality maintenance, it
578isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
579Alpha).
580
581To deal with this, a data dependency barrier or better must be inserted
582between the address load and the data load:
583
584	CPU 1		      CPU 2
585	===============	      ===============
586	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
587	B = 4;
588	<write barrier>
589	WRITE_ONCE(P, &B);
590			      Q = READ_ONCE(P);
591			      <data dependency barrier>
592			      D = *Q;
593
594This enforces the occurrence of one of the two implications, and prevents the
595third possibility from arising.
596
597A data-dependency barrier must also order against dependent writes:
598
599	CPU 1		      CPU 2
600	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
602	B = 4;
603	<write barrier>
604	WRITE_ONCE(P, &B);
605			      Q = READ_ONCE(P);
606			      <data dependency barrier>
607			      *Q = 5;
608
The data-dependency barrier must order the read into Q with the store
into *Q, prohibiting this outcome:
611
612	(Q == &B) && (B == 4)
613
614Please note that this pattern should be rare.  After all, the whole point
615of dependency ordering is to -prevent- writes to the data structure, along
616with the expensive cache misses associated with those writes.  This pattern
617can be used to record rare error conditions and the like, and the ordering
618prevents such records from being lost.
619
620
621[!] Note that this extremely counterintuitive situation arises most easily on
622machines with split caches, so that, for example, one cache bank processes
623even-numbered cache lines and the other bank processes odd-numbered cache
624lines.  The pointer P might be stored in an odd-numbered cache line, and the
625variable B might be stored in an even-numbered cache line.  Then, if the
626even-numbered bank of the reading CPU's cache is extremely busy while the
627odd-numbered bank is idle, one can see the new value of the pointer P (&B),
628but the old value of the variable B (2).
629
630
631The data dependency barrier is very important to the RCU system,
632for example.  See rcu_assign_pointer() and rcu_dereference() in
633include/linux/rcupdate.h.  This permits the current target of an RCU'd
634pointer to be replaced with a new modified target, without the replacement
635target appearing to be incompletely initialised.
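
As a brief sketch of that pattern (the structure and variable names are
invented, and allocation failure handling and the freeing of the old
target are omitted):

	struct foo {
		int a;
	};
	struct foo __rcu *gp;

	/* Updater */
	p = kmalloc(sizeof(*p), GFP_KERNEL);
	p->a = 1;
	rcu_assign_pointer(gp, p);	/* publication; supplies the
					 * necessary write barrier */

	/* Reader */
	rcu_read_lock();
	q = rcu_dereference(gp);	/* supplies the data dependency
					 * barrier where one is needed */
	if (q)
		r1 = q->a;		/* guaranteed to see 1, never
					 * uninitialised contents */
	rcu_read_unlock();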
636
637See also the subsection on "Cache Coherency" for a more thorough example.
638
639
640CONTROL DEPENDENCIES
641--------------------
642
643Control dependencies can be a bit tricky because current compilers do
644not understand them.  The purpose of this section is to help you prevent
645the compiler's ignorance from breaking your code.
646
647A load-load control dependency requires a full read memory barrier, not
648simply a data dependency barrier to make it work correctly.  Consider the
649following bit of code:
650
651	q = READ_ONCE(a);
652	if (q) {
653		<data dependency barrier>  /* BUG: No data dependency!!! */
654		p = READ_ONCE(b);
655	}
656
657This will not have the desired effect because there is no actual data
658dependency, but rather a control dependency that the CPU may short-circuit
659by attempting to predict the outcome in advance, so that other CPUs see
660the load from b as having happened before the load from a.  In such a
661case what's actually required is:
662
663	q = READ_ONCE(a);
664	if (q) {
665		<read barrier>
666		p = READ_ONCE(b);
667	}
668
669However, stores are not speculated.  This means that ordering -is- provided
670for load-store control dependencies, as in the following example:
671
672	q = READ_ONCE(a);
673	if (q) {
674		WRITE_ONCE(b, 1);
675	}
676
677Control dependencies pair normally with other types of barriers.
678That said, please note that neither READ_ONCE() nor WRITE_ONCE()
679are optional! Without the READ_ONCE(), the compiler might combine the
680load from 'a' with other loads from 'a'.  Without the WRITE_ONCE(),
681the compiler might combine the store to 'b' with other stores to 'b'.
682Either can result in highly counterintuitive effects on ordering.
683
684Worse yet, if the compiler is able to prove (say) that the value of
685variable 'a' is always non-zero, it would be well within its rights
686to optimize the original example by eliminating the "if" statement
687as follows:
688
689	q = a;
690	b = 1;  /* BUG: Compiler and CPU can both reorder!!! */
691
692So don't leave out the READ_ONCE().
693
694It is tempting to try to enforce ordering on identical stores on both
695branches of the "if" statement as follows:
696
697	q = READ_ONCE(a);
698	if (q) {
699		barrier();
700		WRITE_ONCE(b, 1);
701		do_something();
702	} else {
703		barrier();
704		WRITE_ONCE(b, 1);
705		do_something_else();
706	}
707
708Unfortunately, current compilers will transform this as follows at high
709optimization levels:
710
711	q = READ_ONCE(a);
712	barrier();
713	WRITE_ONCE(b, 1);  /* BUG: No ordering vs. load from a!!! */
714	if (q) {
715		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
716		do_something();
717	} else {
718		/* WRITE_ONCE(b, 1); -- moved up, BUG!!! */
719		do_something_else();
720	}
721
722Now there is no conditional between the load from 'a' and the store to
723'b', which means that the CPU is within its rights to reorder them:
724The conditional is absolutely required, and must be present in the
725assembly code even after all compiler optimizations have been applied.
726Therefore, if you need ordering in this example, you need explicit
727memory barriers, for example, smp_store_release():
728
729	q = READ_ONCE(a);
730	if (q) {
731		smp_store_release(&b, 1);
732		do_something();
733	} else {
734		smp_store_release(&b, 1);
735		do_something_else();
736	}
737
738In contrast, without explicit memory barriers, two-legged-if control
739ordering is guaranteed only when the stores differ, for example:
740
741	q = READ_ONCE(a);
742	if (q) {
743		WRITE_ONCE(b, 1);
744		do_something();
745	} else {
746		WRITE_ONCE(b, 2);
747		do_something_else();
748	}
749
750The initial READ_ONCE() is still required to prevent the compiler from
751proving the value of 'a'.
752
753In addition, you need to be careful what you do with the local variable 'q',
754otherwise the compiler might be able to guess the value and again remove
755the needed conditional.  For example:
756
757	q = READ_ONCE(a);
758	if (q % MAX) {
759		WRITE_ONCE(b, 1);
760		do_something();
761	} else {
762		WRITE_ONCE(b, 2);
763		do_something_else();
764	}
765
766If MAX is defined to be 1, then the compiler knows that (q % MAX) is
767equal to zero, in which case the compiler is within its rights to
768transform the above code into the following:
769
770	q = READ_ONCE(a);
771	WRITE_ONCE(b, 1);
772	do_something_else();
773
774Given this transformation, the CPU is not required to respect the ordering
775between the load from variable 'a' and the store to variable 'b'.  It is
776tempting to add a barrier(), but this does not help.  The conditional
777is gone, and the barrier won't bring it back.  Therefore, if you are
778relying on this ordering, you should make sure that MAX is greater than
779one, perhaps as follows:
780
781	q = READ_ONCE(a);
782	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
783	if (q % MAX) {
784		WRITE_ONCE(b, 1);
785		do_something();
786	} else {
787		WRITE_ONCE(b, 2);
788		do_something_else();
789	}
790
791Please note once again that the stores to 'b' differ.  If they were
792identical, as noted earlier, the compiler could pull this store outside
793of the 'if' statement.
794
795You must also be careful not to rely too much on boolean short-circuit
796evaluation.  Consider this example:
797
798	q = READ_ONCE(a);
799	if (q || 1 > 0)
800		WRITE_ONCE(b, 1);
801
Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:
805
806	q = READ_ONCE(a);
807	WRITE_ONCE(b, 1);
808
809This example underscores the need to ensure that the compiler cannot
810out-guess your code.  More generally, although READ_ONCE() does force
811the compiler to actually emit code for a given load, it does not force
812the compiler to use the results.
813
814In addition, control dependencies apply only to the then-clause and
else-clause of the if-statement in question.  In particular, they do not
necessarily apply to code following the if-statement:
817
818	q = READ_ONCE(a);
819	if (q) {
820		WRITE_ONCE(b, 1);
821	} else {
822		WRITE_ONCE(b, 2);
823	}
824	WRITE_ONCE(c, 1);  /* BUG: No ordering against the read from 'a'. */
825
826It is tempting to argue that there in fact is ordering because the
827compiler cannot reorder volatile accesses and also cannot reorder
828the writes to 'b' with the condition.  Unfortunately for this line
829of reasoning, the compiler might compile the two writes to 'b' as
830conditional-move instructions, as in this fanciful pseudo-assembly
831language:
832
833	ld r1,a
834	cmp r1,$0
835	cmov,ne r4,$1
836	cmov,eq r4,$2
837	st r4,b
838	st $1,c
839
840A weakly ordered CPU would have no dependency of any sort between the load
841from 'a' and the store to 'c'.  The control dependencies would extend
842only to the pair of cmov instructions and the store depending on them.
843In short, control dependencies apply only to the stores in the then-clause
844and else-clause of the if-statement in question (including functions
845invoked by those two clauses), not to code following that if-statement.
846
847Finally, control dependencies do -not- provide transitivity.  This is
848demonstrated by two related examples, with the initial values of
849'x' and 'y' both being zero:
850
851	CPU 0                     CPU 1
852	=======================   =======================
853	r1 = READ_ONCE(x);        r2 = READ_ONCE(y);
854	if (r1 > 0)               if (r2 > 0)
855	  WRITE_ONCE(y, 1);         WRITE_ONCE(x, 1);
856
857	assert(!(r1 == 1 && r2 == 1));
858
859The above two-CPU example will never trigger the assert().  However,
860if control dependencies guaranteed transitivity (which they do not),
861then adding the following CPU would guarantee a related assertion:
862
863	CPU 2
864	=====================
865	WRITE_ONCE(x, 2);
866
867	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
868
869But because control dependencies do -not- provide transitivity, the above
870assertion can fail after the combined three-CPU example completes.  If you
871need the three-CPU example to provide ordering, you will need smp_mb()
872between the loads and stores in the CPU 0 and CPU 1 code fragments,
873that is, just before or just after the "if" statements.  Furthermore,
874the original two-CPU example is very fragile and should be avoided.
875
876These two examples are the LB and WWC litmus tests from this paper:
877http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
878site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
879
880In summary:
881
882  (*) Control dependencies can order prior loads against later stores.
883      However, they do -not- guarantee any other sort of ordering:
884      Not prior loads against later loads, nor prior stores against
885      later anything.  If you need these other forms of ordering,
886      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
887      later loads, smp_mb().
888
889  (*) If both legs of the "if" statement begin with identical stores to
890      the same variable, then those stores must be ordered, either by
891      preceding both of them with smp_mb() or by using smp_store_release()
892      to carry out the stores.  Please note that it is -not- sufficient
      to use barrier() at the beginning of each leg of the "if" statement
894      because, as shown by the example above, optimizing compilers can
895      destroy the control dependency while respecting the letter of the
896      barrier() law.
897
898  (*) Control dependencies require at least one run-time conditional
899      between the prior load and the subsequent store, and this
900      conditional must involve the prior load.  If the compiler is able
901      to optimize the conditional away, it will have also optimized
902      away the ordering.  Careful use of READ_ONCE() and WRITE_ONCE()
903      can help to preserve the needed conditional.
904
905  (*) Control dependencies require that the compiler avoid reordering the
906      dependency into nonexistence.  Careful use of READ_ONCE() or
907      atomic{,64}_read() can help to preserve your control dependency.
908      Please see the COMPILER BARRIER section for more information.
909
910  (*) Control dependencies apply only to the then-clause and else-clause
911      of the if-statement containing the control dependency, including
912      any functions that these two clauses call.  Control dependencies
913      do -not- apply to code following the if-statement containing the
914      control dependency.
915
916  (*) Control dependencies pair normally with other types of barriers.
917
918  (*) Control dependencies do -not- provide transitivity.  If you
919      need transitivity, use smp_mb().
920
921  (*) Compilers do not understand control dependencies.  It is therefore
922      your job to ensure that they do not break your code.
923
924
925SMP BARRIER PAIRING
926-------------------
927
928When dealing with CPU-CPU interactions, certain types of memory barrier should
929always be paired.  A lack of appropriate pairing is almost certainly an error.
930
931General barriers pair with each other, though they also pair with most
932other types of barriers, albeit without transitivity.  An acquire barrier
933pairs with a release barrier, but both may also pair with other barriers,
934including of course general barriers.  A write barrier pairs with a data
935dependency barrier, a control dependency, an acquire barrier, a release
936barrier, a read barrier, or a general barrier.  Similarly a read barrier,
937control dependency, or a data dependency barrier pairs with a write
938barrier, an acquire barrier, a release barrier, or a general barrier:
939
940	CPU 1		      CPU 2
941	===============	      ===============
942	WRITE_ONCE(a, 1);
943	<write barrier>
944	WRITE_ONCE(b, 2);     x = READ_ONCE(b);
945			      <read barrier>
946			      y = READ_ONCE(a);
947
948Or:
949
950	CPU 1		      CPU 2
951	===============	      ===============================
952	a = 1;
953	<write barrier>
954	WRITE_ONCE(b, &a);    x = READ_ONCE(b);
955			      <data dependency barrier>
956			      y = *x;
957
958Or even:
959
960	CPU 1		      CPU 2
961	===============	      ===============================
962	r1 = READ_ONCE(y);
963	<general barrier>
	WRITE_ONCE(x, 1);     if (r2 = READ_ONCE(x)) {
965			         <implicit control dependency>
966			         WRITE_ONCE(y, 1);
967			      }
968
969	assert(r1 == 0 || r2 == 0);
970
971Basically, the read barrier always has to be there, even though it can be of
972the "weaker" type.
973
974[!] Note that the stores before the write barrier would normally be expected to
975match the loads after the read barrier or the data dependency barrier, and vice
976versa:
977
978	CPU 1                               CPU 2
979	===================                 ===================
980	WRITE_ONCE(a, 1);    }----   --->{  v = READ_ONCE(c);
981	WRITE_ONCE(b, 2);    }    \ /    {  w = READ_ONCE(d);
982	<write barrier>            \        <read barrier>
983	WRITE_ONCE(c, 3);    }    / \    {  x = READ_ONCE(a);
984	WRITE_ONCE(d, 4);    }----   --->{  y = READ_ONCE(b);
985
986
987EXAMPLES OF MEMORY BARRIER SEQUENCES
988------------------------------------
989
990Firstly, write barriers act as partial orderings on store operations.
991Consider the following sequence of events:
992
993	CPU 1
994	=======================
995	STORE A = 1
996	STORE B = 2
997	STORE C = 3
998	<write barrier>
999	STORE D = 4
1000	STORE E = 5
1001
1002This sequence of events is committed to the memory coherence system in an order
1003that the rest of the system might perceive as the unordered set of { STORE A,
1004STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
1005}:
1006
1007	+-------+       :      :
1008	|       |       +------+
1009	|       |------>| C=3  |     }     /\
1010	|       |  :    +------+     }-----  \  -----> Events perceptible to
1011	|       |  :    | A=1  |     }        \/       the rest of the system
1012	|       |  :    +------+     }
1013	| CPU 1 |  :    | B=2  |     }
1014	|       |       +------+     }
1015	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
1016	|       |       +------+     }        requires all stores prior to the
1017	|       |  :    | E=5  |     }        barrier to be committed before
1018	|       |  :    +------+     }        further stores may take place
1019	|       |------>| D=4  |     }
1020	|       |       +------+
1021	+-------+       :      :
1022	                   |
1023	                   | Sequence in which stores are committed to the
1024	                   | memory system by CPU 1
1025	                   V
1026
1027
1028Secondly, data dependency barriers act as partial orderings on data-dependent
1029loads.  Consider the following sequence of events:
1030
1031	CPU 1			CPU 2
1032	=======================	=======================
1033		{ B = 7; X = 9; Y = 8; C = &Y }
1034	STORE A = 1
1035	STORE B = 2
1036	<write barrier>
1037	STORE C = &B		LOAD X
1038	STORE D = 4		LOAD C (gets &B)
1039				LOAD *C (reads B)
1040
1041Without intervention, CPU 2 may perceive the events on CPU 1 in some
1042effectively random order, despite the write barrier issued by CPU 1:
1043
1044	+-------+       :      :                :       :
1045	|       |       +------+                +-------+  | Sequence of update
1046	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
1047	|       |  :    +------+     \          +-------+  | CPU 2
1048	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
1049	|       |       +------+       |        +-------+
1050	|       |   wwwwwwwwwwwwwwww   |        :       :
1051	|       |       +------+       |        :       :
1052	|       |  :    | C=&B |---    |        :       :       +-------+
1053	|       |  :    +------+   \   |        +-------+       |       |
1054	|       |------>| D=4  |    ----------->| C->&B |------>|       |
1055	|       |       +------+       |        +-------+       |       |
1056	+-------+       :      :       |        :       :       |       |
1057	                               |        :       :       |       |
1058	                               |        :       :       | CPU 2 |
1059	                               |        +-------+       |       |
1060	    Apparently incorrect --->  |        | B->7  |------>|       |
1061	    perception of B (!)        |        +-------+       |       |
1062	                               |        :       :       |       |
1063	                               |        +-------+       |       |
1064	    The load of X holds --->    \       | X->9  |------>|       |
1065	    up the maintenance           \      +-------+       |       |
1066	    of coherence of B             ----->| B->2  |       +-------+
1067	                                        +-------+
1068	                                        :       :
1069
1070
1071In the above example, CPU 2 perceives that B is 7, despite the load of *C
1072(which would be B) coming after the LOAD of C.
1073
1074If, however, a data dependency barrier were to be placed between the load of C
1075and the load of *C (ie: B) on CPU 2:
1076
1077	CPU 1			CPU 2
1078	=======================	=======================
1079		{ B = 7; X = 9; Y = 8; C = &Y }
1080	STORE A = 1
1081	STORE B = 2
1082	<write barrier>
1083	STORE C = &B		LOAD X
1084	STORE D = 4		LOAD C (gets &B)
1085				<data dependency barrier>
1086				LOAD *C (reads B)
1087
1088then the following will occur:
1089
1090	+-------+       :      :                :       :
1091	|       |       +------+                +-------+
1092	|       |------>| B=2  |-----       --->| Y->8  |
1093	|       |  :    +------+     \          +-------+
1094	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
1095	|       |       +------+       |        +-------+
1096	|       |   wwwwwwwwwwwwwwww   |        :       :
1097	|       |       +------+       |        :       :
1098	|       |  :    | C=&B |---    |        :       :       +-------+
1099	|       |  :    +------+   \   |        +-------+       |       |
1100	|       |------>| D=4  |    ----------->| C->&B |------>|       |
1101	|       |       +------+       |        +-------+       |       |
1102	+-------+       :      :       |        :       :       |       |
1103	                               |        :       :       |       |
1104	                               |        :       :       | CPU 2 |
1105	                               |        +-------+       |       |
1106	                               |        | X->9  |------>|       |
1107	                               |        +-------+       |       |
1108	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
1109	  prior to the store of C        \      +-------+       |       |
1110	  are perceptible to              ----->| B->2  |------>|       |
1111	  subsequent loads                      +-------+       |       |
1112	                                        :       :       +-------+
1113
1114
1115And thirdly, a read barrier acts as a partial order on loads.  Consider the
1116following sequence of events:
1117
1118	CPU 1			CPU 2
1119	=======================	=======================
1120		{ A = 0, B = 9 }
1121	STORE A=1
1122	<write barrier>
1123	STORE B=2
1124				LOAD B
1125				LOAD A
1126
1127Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
1128some effectively random order, despite the write barrier issued by CPU 1:
1129
1130	+-------+       :      :                :       :
1131	|       |       +------+                +-------+
1132	|       |------>| A=1  |------      --->| A->0  |
1133	|       |       +------+      \         +-------+
1134	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1135	|       |       +------+        |       +-------+
1136	|       |------>| B=2  |---     |       :       :
1137	|       |       +------+   \    |       :       :       +-------+
1138	+-------+       :      :    \   |       +-------+       |       |
1139	                             ---------->| B->2  |------>|       |
1140	                                |       +-------+       | CPU 2 |
1141	                                |       | A->0  |------>|       |
1142	                                |       +-------+       |       |
1143	                                |       :       :       +-------+
1144	                                 \      :       :
1145	                                  \     +-------+
1146	                                   ---->| A->1  |
1147	                                        +-------+
1148	                                        :       :
1149
1150
1151If, however, a read barrier were to be placed between the load of B and the
1152load of A on CPU 2:
1153
1154	CPU 1			CPU 2
1155	=======================	=======================
1156		{ A = 0, B = 9 }
1157	STORE A=1
1158	<write barrier>
1159	STORE B=2
1160				LOAD B
1161				<read barrier>
1162				LOAD A
1163
1164then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
11652:
1166
1167	+-------+       :      :                :       :
1168	|       |       +------+                +-------+
1169	|       |------>| A=1  |------      --->| A->0  |
1170	|       |       +------+      \         +-------+
1171	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1172	|       |       +------+        |       +-------+
1173	|       |------>| B=2  |---     |       :       :
1174	|       |       +------+   \    |       :       :       +-------+
1175	+-------+       :      :    \   |       +-------+       |       |
1176	                             ---------->| B->2  |------>|       |
1177	                                |       +-------+       | CPU 2 |
1178	                                |       :       :       |       |
1179	                                |       :       :       |       |
1180	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1181	  barrier causes all effects      \     +-------+       |       |
1182	  prior to the storage of B        ---->| A->1  |------>|       |
1183	  to be perceptible to CPU 2            +-------+       |       |
1184	                                        :       :       +-------+
1185
1186
1187To illustrate this more completely, consider what could happen if the code
1188contained a load of A either side of the read barrier:
1189
1190	CPU 1			CPU 2
1191	=======================	=======================
1192		{ A = 0, B = 9 }
1193	STORE A=1
1194	<write barrier>
1195	STORE B=2
1196				LOAD B
1197				LOAD A [first load of A]
1198				<read barrier>
1199				LOAD A [second load of A]
1200
Even though the two loads of A both occur after the load of B, they may
come up with different values:
1203
1204	+-------+       :      :                :       :
1205	|       |       +------+                +-------+
1206	|       |------>| A=1  |------      --->| A->0  |
1207	|       |       +------+      \         +-------+
1208	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1209	|       |       +------+        |       +-------+
1210	|       |------>| B=2  |---     |       :       :
1211	|       |       +------+   \    |       :       :       +-------+
1212	+-------+       :      :    \   |       +-------+       |       |
1213	                             ---------->| B->2  |------>|       |
1214	                                |       +-------+       | CPU 2 |
1215	                                |       :       :       |       |
1216	                                |       :       :       |       |
1217	                                |       +-------+       |       |
1218	                                |       | A->0  |------>| 1st   |
1219	                                |       +-------+       |       |
1220	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1221	  barrier causes all effects      \     +-------+       |       |
1222	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
1223	  to be perceptible to CPU 2            +-------+       |       |
1224	                                        :       :       +-------+
1225
1226
1227But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1228before the read barrier completes anyway:
1229
1230	+-------+       :      :                :       :
1231	|       |       +------+                +-------+
1232	|       |------>| A=1  |------      --->| A->0  |
1233	|       |       +------+      \         +-------+
1234	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1235	|       |       +------+        |       +-------+
1236	|       |------>| B=2  |---     |       :       :
1237	|       |       +------+   \    |       :       :       +-------+
1238	+-------+       :      :    \   |       +-------+       |       |
1239	                             ---------->| B->2  |------>|       |
1240	                                |       +-------+       | CPU 2 |
1241	                                |       :       :       |       |
1242	                                 \      :       :       |       |
1243	                                  \     +-------+       |       |
1244	                                   ---->| A->1  |------>| 1st   |
1245	                                        +-------+       |       |
1246	                                    rrrrrrrrrrrrrrrrr   |       |
1247	                                        +-------+       |       |
1248	                                        | A->1  |------>| 2nd   |
1249	                                        +-------+       |       |
1250	                                        :       :       +-------+
1251
1252
1253The guarantee is that the second load will always come up with A == 1 if the
1254load of B came up with B == 2.  No such guarantee exists for the first load of
1255A; that may come up with either A == 0 or A == 1.
1256
1257
1258READ MEMORY BARRIERS VS LOAD SPECULATION
1259----------------------------------------
1260
1261Many CPUs speculate with loads: that is they see that they will need to load an
1262item from memory, and they find a time where they're not using the bus for any
1263other loads, and so do the load in advance - even though they haven't actually
1264got to that point in the instruction execution flow yet.  This permits the
1265actual load instruction to potentially complete immediately because the CPU
1266already has the value to hand.
1267
1268It may turn out that the CPU didn't actually need the value - perhaps because a
1269branch circumvented the load - in which case it can discard the value or just
1270cache it for later use.
1271
1272Consider:
1273
1274	CPU 1			CPU 2
1275	=======================	=======================
1276				LOAD B
1277				DIVIDE		} Divide instructions generally
1278				DIVIDE		} take a long time to perform
1279				LOAD A
1280
1281Which might appear as this:
1282
1283	                                        :       :       +-------+
1284	                                        +-------+       |       |
1285	                                    --->| B->2  |------>|       |
1286	                                        +-------+       | CPU 2 |
1287	                                        :       :DIVIDE |       |
1288	                                        +-------+       |       |
1289	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1290	division speculates on the              +-------+   ~   |       |
1291	LOAD of A                               :       :   ~   |       |
1292	                                        :       :DIVIDE |       |
1293	                                        :       :   ~   |       |
1294	Once the divisions are complete -->     :       :   ~-->|       |
1295	the CPU can then perform the            :       :       |       |
1296	LOAD with immediate effect              :       :       +-------+
1297
1298
1299Placing a read barrier or a data dependency barrier just before the second
1300load:
1301
1302	CPU 1			CPU 2
1303	=======================	=======================
1304				LOAD B
1305				DIVIDE
1306				DIVIDE
1307				<read barrier>
1308				LOAD A
1309
1310will force any value speculatively obtained to be reconsidered to an extent
1311dependent on the type of barrier used.  If there was no change made to the
1312speculated memory location, then the speculated value will just be used:
1313
1314	                                        :       :       +-------+
1315	                                        +-------+       |       |
1316	                                    --->| B->2  |------>|       |
1317	                                        +-------+       | CPU 2 |
1318	                                        :       :DIVIDE |       |
1319	                                        +-------+       |       |
1320	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1321	division speculates on the              +-------+   ~   |       |
1322	LOAD of A                               :       :   ~   |       |
1323	                                        :       :DIVIDE |       |
1324	                                        :       :   ~   |       |
1325	                                        :       :   ~   |       |
1326	                                    rrrrrrrrrrrrrrrr~   |       |
1327	                                        :       :   ~   |       |
1328	                                        :       :   ~-->|       |
1329	                                        :       :       |       |
1330	                                        :       :       +-------+
1331
1332
1333but if there was an update or an invalidation from another CPU pending, then
1334the speculation will be cancelled and the value reloaded:
1335
1336	                                        :       :       +-------+
1337	                                        +-------+       |       |
1338	                                    --->| B->2  |------>|       |
1339	                                        +-------+       | CPU 2 |
1340	                                        :       :DIVIDE |       |
1341	                                        +-------+       |       |
1342	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1343	division speculates on the              +-------+   ~   |       |
1344	LOAD of A                               :       :   ~   |       |
1345	                                        :       :DIVIDE |       |
1346	                                        :       :   ~   |       |
1347	                                        :       :   ~   |       |
1348	                                    rrrrrrrrrrrrrrrrr   |       |
1349	                                        +-------+       |       |
1350	The speculation is discarded --->   --->| A->1  |------>|       |
1351	and an updated value is                 +-------+       |       |
1352	retrieved                               :       :       +-------+
1353
1354
1355TRANSITIVITY
1356------------
1357
1358Transitivity is a deeply intuitive notion about ordering that is not
1359always provided by real computer systems.  The following example
1360demonstrates transitivity:
1361
1362	CPU 1			CPU 2			CPU 3
1363	=======================	=======================	=======================
1364		{ X = 0, Y = 0 }
1365	STORE X=1		LOAD X			STORE Y=1
1366				<general barrier>	<general barrier>
1367				LOAD Y			LOAD X
1368
1369Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1370This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense precedes CPU 3's
1372store to Y.  The question is then "Can CPU 3's load from X return 0?"
1373
1374Because CPU 2's load from X in some sense came after CPU 1's store, it
1375is natural to expect that CPU 3's load from X must therefore return 1.
1376This expectation is an example of transitivity: if a load executing on
1377CPU A follows a load from the same variable executing on CPU B, then
1378CPU A's load must either return the same value that CPU B's load did,
1379or must return some later value.
1380
1381In the Linux kernel, use of general memory barriers guarantees
1382transitivity.  Therefore, in the above example, if CPU 2's load from X
1383returns 1 and its load from Y returns 0, then CPU 3's load from X must
1384also return 1.
1385
1386However, transitivity is -not- guaranteed for read or write barriers.
1387For example, suppose that CPU 2's general barrier in the above example
1388is changed to a read barrier as shown below:
1389
1390	CPU 1			CPU 2			CPU 3
1391	=======================	=======================	=======================
1392		{ X = 0, Y = 0 }
1393	STORE X=1		LOAD X			STORE Y=1
1394				<read barrier>		<general barrier>
1395				LOAD Y			LOAD X
1396
1397This substitution destroys transitivity: in this example, it is perfectly
1398legal for CPU 2's load from X to return 1, its load from Y to return 0,
1399and CPU 3's load from X to return 0.
1400
1401The key point is that although CPU 2's read barrier orders its pair
1402of loads, it does not guarantee to order CPU 1's store.  Therefore, if
1403this example runs on a system where CPUs 1 and 2 share a store buffer
1404or a level of cache, CPU 2 might have early access to CPU 1's writes.
1405General barriers are therefore required to ensure that all CPUs agree
1406on the combined order of CPU 1's and CPU 2's accesses.
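
For concreteness, here is that read-barrier example sketched in the C style
used below (a sketch only; x and y stand in for X and Y, and r1-r3 receive
the loaded values):

	int x, y;

	void cpu1(void)
	{
		WRITE_ONCE(x, 1);
	}

	void cpu2(void)
	{
		r1 = READ_ONCE(x);
		smp_rmb();		/* orders cpu2()'s two loads, nothing more */
		r2 = READ_ONCE(y);
	}

	void cpu3(void)
	{
		WRITE_ONCE(y, 1);
		smp_mb();
		r3 = READ_ONCE(x);
	}

The outcome r1 == 1 && r2 == 0 && r3 == 0 is permitted, exactly because
smp_rmb() does nothing to propagate CPU 1's store to cpu3().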

General barriers provide "global transitivity", so that all CPUs will
agree on the order of operations.  In contrast, a chain of release-acquire
pairs provides only "local transitivity", so that only those CPUs on
the chain are guaranteed to agree on the combined order of the accesses.
For example, switching to C code in deference to Herman Hollerith:

	int u, v, x, y, z;

	void cpu0(void)
	{
		r0 = smp_load_acquire(&x);
		WRITE_ONCE(u, 1);
		smp_store_release(&y, 1);
	}

	void cpu1(void)
	{
		r1 = smp_load_acquire(&y);
		r4 = READ_ONCE(v);
		r5 = READ_ONCE(u);
		smp_store_release(&z, 1);
	}

	void cpu2(void)
	{
		r2 = smp_load_acquire(&z);
		smp_store_release(&x, 1);
	}

	void cpu3(void)
	{
		WRITE_ONCE(v, 1);
		smp_mb();
		r3 = READ_ONCE(u);
	}

Because cpu0(), cpu1(), and cpu2() participate in a local transitive
chain of smp_store_release()/smp_load_acquire() pairs, the following
outcome is prohibited:

	r0 == 1 && r1 == 1 && r2 == 1

Furthermore, because of the release-acquire relationship between cpu0()
and cpu1(), cpu1() must see cpu0()'s writes, so that the following
outcome is prohibited:

	r1 == 1 && r5 == 0

However, the transitivity of release-acquire is local to the participating
CPUs and does not apply to cpu3().  Therefore, the following outcome
is possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0

As an aside, the following outcome is also possible:

	r0 == 0 && r1 == 1 && r2 == 1 && r3 == 0 && r4 == 0 && r5 == 1

Although cpu0(), cpu1(), and cpu2() will see their respective reads and
writes in order, CPUs not involved in the release-acquire chain might
well disagree on the order.  This disagreement stems from the fact that
the weak memory-barrier instructions used to implement smp_load_acquire()
and smp_store_release() are not required to order prior stores against
subsequent loads in all cases.  This means that cpu3() can see cpu0()'s
store to u as happening -after- cpu1()'s load from v, even though
both cpu0() and cpu1() agree that these two operations occurred in the
intended order.

However, please keep in mind that smp_load_acquire() is not magic.
In particular, it simply reads from its argument with ordering.  It does
-not- ensure that any particular value will be read.  Therefore, the
following outcome is possible:

	r0 == 0 && r1 == 0 && r2 == 0 && r5 == 0

Note that this outcome can happen even on a mythical sequentially
consistent system where nothing is ever reordered.

To reiterate, if your code requires global transitivity, use general
barriers throughout.


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.

  (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write
variants of barrier().  However, READ_ONCE() and WRITE_ONCE() can be
thought of as weak forms of barrier() that affect only the specific
accesses flagged by the READ_ONCE() or WRITE_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.
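
As an illustration of the second effect, a busy-wait loop can use barrier()
to force the condition to be re-read on each pass (a minimal sketch; 'flag'
is a shared variable assumed to be set by some other context):

	while (!flag)
		barrier();	/* compiler must reload flag each iteration */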

The READ_ONCE() and WRITE_ONCE() functions can prevent any number of
optimizations that, while perfectly safe in single-threaded code, can
be fatal in concurrent code.  Here are some examples of these sorts
of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = READ_ONCE(x);
	a[1] = READ_ONCE(x);

     In short, READ_ONCE() and WRITE_ONCE() provide cache coherence for
     accesses from multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use READ_ONCE() to prevent the compiler from doing this to you:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use READ_ONCE() to prevent the compiler from doing this:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it
     gets rid of a load and a branch.  The problem is that the compiler
     will carry out its proof assuming that the current CPU is the only
     one updating variable 'a'.  If variable 'a' is shared, then the
     compiler's proof will be erroneous.  Use READ_ONCE() to tell the
     compiler that it doesn't know as much as it thinks it does:

	while (tmp = READ_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the READ_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = READ_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
     storing into the variable, which can cause the compiler to do the
     wrong thing for shared variables.  For example, suppose you have
     the following:

	a = 0;
	... Code that does not store to variable a ...
	a = 0;

     The compiler sees that the value of variable 'a' is already zero, so
     it might well omit the second store.  This would come as a fatal
     surprise if some other CPU might have stored to variable 'a' in the
     meantime.

     Use WRITE_ONCE() to prevent the compiler from making this sort of
     wrong guess:

	WRITE_ONCE(a, 0);
	... Code that does not store to variable a ...
	WRITE_ONCE(a, 0);

 (*) The compiler is within its rights to reorder memory accesses unless
     you tell it not to.  For example, consider the following interaction
     between process-level code and an interrupt handler:

	void process_level(void)
	{
		msg = get_message();
		flag = true;
	}

	void interrupt_handler(void)
	{
		if (flag)
			process_message(msg);
	}

     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:

	void process_level(void)
	{
		flag = true;
		msg = get_message();
	}

     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use WRITE_ONCE()
     to prevent this as follows:

	void process_level(void)
	{
		WRITE_ONCE(msg, get_message());
		WRITE_ONCE(flag, true);
	}

	void interrupt_handler(void)
	{
		if (READ_ONCE(flag))
			process_message(READ_ONCE(msg));
	}

     Note that the READ_ONCE() and WRITE_ONCE() wrappers in
     interrupt_handler() are needed if this interrupt handler can itself
     be interrupted by something that also accesses 'flag' and 'msg',
     for example, a nested interrupt or an NMI.  Otherwise, READ_ONCE()
     and WRITE_ONCE() are not needed in interrupt_handler() other than
     for documentation purposes.  (Note also that nested interrupts
     do not typically occur in modern Linux kernels; in fact, if an
     interrupt handler returns with interrupts enabled, you will get a
     WARN_ONCE() splat.)

     You should assume that the compiler can move READ_ONCE() and
     WRITE_ONCE() past code not containing READ_ONCE(), WRITE_ONCE(),
     barrier(), or similar primitives.

     This effect could also be achieved using barrier(), but READ_ONCE()
     and WRITE_ONCE() are more selective:  With READ_ONCE() and
     WRITE_ONCE(), the compiler need only forget the contents of the
     indicated memory locations, while with barrier() the compiler must
     discard the value of all memory locations that it has currently
     cached in any machine registers.  Of course, the compiler must also
     respect the order in which the READ_ONCE()s and WRITE_ONCE()s occur,
     though the CPU of course need not do so.

 (*) The compiler is within its rights to invent stores to a variable,
     as in the following example:

	if (a)
		b = a;
	else
		b = 42;

     The compiler might save a branch by optimizing this as follows:

	b = 42;
	if (a)
		b = a;

     In single-threaded code, this is not only safe, but also saves
     a branch.  Unfortunately, in concurrent code, this optimization
     could cause some other CPU to see a spurious value of 42 -- even
     if variable 'a' was never zero -- when loading variable 'b'.
     Use WRITE_ONCE() to prevent this as follows:

	if (a)
		WRITE_ONCE(b, a);
	else
		WRITE_ONCE(b, 42);

     The compiler can also invent loads.  These are usually less
     damaging, but they can result in cache-line bouncing and thus in
     poor performance and scalability.  Use READ_ONCE() to prevent
     invented loads.

 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, READ_ONCE() and
     WRITE_ONCE() prevent "load tearing" and "store tearing," in which
     a single large access is replaced by multiple smaller accesses.
     For example, given an architecture having 16-bit store instructions
     with 7-bit immediate fields, the compiler might be tempted to use
     two 16-bit store-immediate instructions to implement the following
     32-bit store:

	p = 0x00010002;

     Please note that GCC really does use this sort of optimization,
     which is not surprising given that it would likely take more
     than two instructions to build the constant and then store it.
     This optimization can therefore be a win in single-threaded code.
     In fact, a recent bug (since fixed) caused GCC to incorrectly use
     this optimization in a volatile store.  In the absence of such bugs,
     use of WRITE_ONCE() prevents store tearing in the following example:

	WRITE_ONCE(p, 0x00010002);

     Use of packed structures can also result in load and store tearing,
     as in this example:

	struct __attribute__((__packed__)) foo {
		short a;
		int b;
		short c;
	};
	struct foo foo1, foo2;
	...

	foo2.a = foo1.a;
	foo2.b = foo1.b;
	foo2.c = foo1.c;

     Because there are no READ_ONCE() or WRITE_ONCE() wrappers and no
     volatile markings, the compiler would be well within its rights to
     implement these three assignment statements as a pair of 32-bit
     loads followed by a pair of 32-bit stores.  This would result in
     load tearing on 'foo1.b' and store tearing on 'foo2.b'.  READ_ONCE()
     and WRITE_ONCE() again prevent tearing in this example:

	foo2.a = foo1.a;
	WRITE_ONCE(foo2.b, READ_ONCE(foo1.b));
	foo2.c = foo1.c;

All that aside, it is never necessary to use READ_ONCE() and
WRITE_ONCE() on a variable that has been marked volatile.  For example,
because 'jiffies' is marked volatile, it is never necessary to
say READ_ONCE(jiffies).  The reason for this is that READ_ONCE() and
WRITE_ONCE() are implemented as volatile casts, which have no effect
when the argument is already marked volatile.

Please note that these compiler barriers have no direct effect on the CPU,
which may then reorder things however it wishes.


CPU MEMORY BARRIERS
-------------------

The Linux kernel has eight basic CPU memory barriers:

	TYPE		MANDATORY		SMP CONDITIONAL
	===============	=======================	===========================
	GENERAL		mb()			smp_mb()
	WRITE		wmb()			smp_wmb()
	READ		rmb()			smp_rmb()
	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()


All memory barriers except the data dependency barriers imply a compiler
barrier.  Data dependencies do not impose any additional compiler ordering.

Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. a[b] would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
tmp = a[b]; ).  There is also the problem of a compiler reloading b after
having loaded a[b], thus having a newer copy of b than a[b].  A consensus
has not yet been reached about these problems, however the READ_ONCE()
macro is a good place to start looking.

SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
systems because it is assumed that a CPU will appear to be self-consistent,
and will order overlapping accesses correctly with respect to itself.
However, see the subsection on "Virtual Machine Guests" below.

[!] Note that SMP memory barriers _must_ be used to control the ordering of
references to shared memory on SMP systems, though the use of locking instead
is sufficient.

Mandatory barriers should not be used to control SMP effects, since mandatory
barriers impose unnecessary overhead on both SMP and UP systems. They may,
however, be used to control MMIO effects on accesses through relaxed memory I/O
windows.  These barriers are required even on non-SMP systems as they affect
the order in which memory operations appear to a device by prohibiting both the
compiler and the CPU from reordering them.


There are some more advanced barrier functions:

 (*) smp_store_mb(var, value)

     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.
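
     For example (a sketch; the flag names are hypothetical), this is
     broadly equivalent to a WRITE_ONCE() followed by smp_mb():

	smp_store_mb(thread_flag, 1);
	r1 = READ_ONCE(other_flag);	/* cannot be reordered before the store */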


 (*) smp_mb__before_atomic();
 (*) smp_mb__after_atomic();

     These are for use with atomic (such as add, subtract, increment and
     decrement) functions that don't return a value, especially when used for
     reference counting.  These functions do not imply memory barriers.

     These are also used for atomic bitop functions that do not return a
     value (such as set_bit and clear_bit).

     As an example, consider a piece of code that marks an object as being dead
     and then decrements the object's reference count:

	obj->dead = 1;
	smp_mb__before_atomic();
	atomic_dec(&obj->ref_count);

     This makes sure that the death mark on the object is perceived to be set
     *before* the reference counter is decremented.

     See Documentation/atomic_ops.txt for more information.  See the "Atomic
     operations" subsection for information on where to use these.
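
     The converse ordering can be obtained with smp_mb__after_atomic();
     a sketch, with a hypothetical 'published' field:

	atomic_inc(&obj->ref_count);
	smp_mb__after_atomic();
	WRITE_ONCE(obj->published, 1);	/* ordered after the increment */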


 (*) lockless_dereference();

     This can be thought of as a pointer-fetch wrapper around the
     smp_read_barrier_depends() data-dependency barrier.

     This is also similar to rcu_dereference(), but in cases where
     object lifetime is handled by some mechanism other than RCU, for
     example, when the objects are removed only when the system goes down.
     In addition, lockless_dereference() is used in some data structures
     that can be used both with and without RCU.
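
     A usage sketch, where 'gp' is a shared pointer assumed to have been
     published by another CPU with smp_store_release() or similar:

	p = lockless_dereference(gp);
	if (p)
		do_something_with(p->a);	/* dependency-ordered after the load of gp */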


 (*) dma_wmb();
 (*) dma_rmb();

     These are for use with consistent memory to guarantee the ordering
     of writes or reads of shared memory accessible to both the CPU and a
     DMA capable device.

     For example, consider a device driver that shares memory with a device
     and uses a descriptor status value to indicate if the descriptor belongs
     to the device or the CPU, and a doorbell to notify it when new
     descriptors are available:

	if (desc->status != DEVICE_OWN) {
		/* do not read data until we own descriptor */
		dma_rmb();

		/* read/modify data */
		read_data = desc->data;
		desc->data = write_data;

		/* flush modifications before status update */
		dma_wmb();

		/* assign ownership */
		desc->status = DEVICE_OWN;

		/* force memory to sync before notifying device via MMIO */
		wmb();

		/* notify device of new descriptors */
		writel(DESC_NOTIFY, doorbell);
	}

     The dma_rmb() allows us to guarantee the device has released ownership
     before we read the data from the descriptor, and the dma_wmb() allows
     us to guarantee the data is written to the descriptor before the device
     can see it now has ownership.  The wmb() is needed to guarantee that the
     cache coherent memory writes have completed before attempting a write to
     the cache incoherent MMIO region.

     See Documentation/DMA-API.txt for more information on consistent memory.


MMIO WRITE BARRIER
------------------

The Linux kernel also has a special barrier for use with memory-mapped I/O
writes:

	mmiowb();

This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered.  Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.

See the subsection "Acquires vs I/O accesses" for more information.


===============================
IMPLICIT KERNEL MEMORY BARRIERS
===============================

Some of the other functions in the Linux kernel imply memory barriers, amongst
which are locking and scheduling functions.

This specification is a _minimum_ guarantee; any particular architecture may
provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.


LOCK ACQUISITION FUNCTIONS
--------------------------

The Linux kernel has a number of locking constructs:

 (*) spin locks
 (*) R/W spin locks
 (*) mutexes
 (*) semaphores
 (*) R/W semaphores

In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
for each construct.  These operations all imply certain barriers:

 (1) ACQUIRE operation implication:

     Memory operations issued after the ACQUIRE will be completed after the
     ACQUIRE operation has completed.

     Memory operations issued before the ACQUIRE may be completed after
     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
     combined with a following ACQUIRE, orders prior stores against
     subsequent loads and stores.  Note that this is weaker than smp_mb()!
     The smp_mb__before_spinlock() primitive is free on many architectures;
     a usage sketch appears after this list.

 (2) RELEASE operation implication:

     Memory operations issued before the RELEASE will be completed before the
     RELEASE operation has completed.

     Memory operations issued after the RELEASE may be completed before the
     RELEASE operation has completed.

 (3) ACQUIRE vs ACQUIRE implication:

     All ACQUIRE operations issued before another ACQUIRE operation will be
     completed before that ACQUIRE operation.

 (4) ACQUIRE vs RELEASE implication:

     All ACQUIRE operations issued before a RELEASE operation will be
     completed before the RELEASE operation.

 (5) Failed conditional ACQUIRE implication:

     Certain locking variants of the ACQUIRE operation may fail, either due to
     being unable to get the lock immediately, or due to receiving an unblocked
     signal whilst asleep waiting for the lock to become available.  Failed
     locks do not imply any sort of barrier.
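
As promised above, a sketch of smp_mb__before_spinlock() usage; the
variables and the lock are hypothetical:

	WRITE_ONCE(x, 1);		/* prior store... */
	smp_mb__before_spinlock();
	spin_lock(&mylock);
	r1 = READ_ONCE(y);		/* ...ordered against this load */
	spin_unlock(&mylock);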

[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
one-way barriers is that the effects of instructions outside of a critical
section may seep into the inside of the critical section.

An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
because it is possible for an access preceding the ACQUIRE to happen after the
ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
the two accesses can themselves then cross:

	*A = a;
	ACQUIRE M
	RELEASE M
	*B = b;

may occur as:

	ACQUIRE M, STORE *B, STORE *A, RELEASE M

When the ACQUIRE and RELEASE are a lock acquisition and release,
respectively, this same reordering can occur if the lock's ACQUIRE and
RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.

Similarly, the reverse case of a RELEASE followed by an ACQUIRE does
not imply a full memory barrier.  Therefore, the CPU's execution of the
critical sections corresponding to the RELEASE and the ACQUIRE can cross,
so that:

	*A = a;
	RELEASE M
	ACQUIRE N
	*B = b;

could occur as:

	ACQUIRE N, STORE *B, STORE *A, RELEASE M

It might appear that this reordering could introduce a deadlock.
However, this cannot happen because if such a deadlock threatened,
the RELEASE would simply complete, thereby avoiding the deadlock.

	Why does this work?

	One key point is that we are only talking about the CPU doing
	the reordering, not the compiler.  If the compiler (or, for
	that matter, the developer) switched the operations, deadlock
	-could- occur.

	But suppose the CPU reordered the operations.  In this case,
	the unlock precedes the lock in the assembly code.  The CPU
	simply elected to try executing the later lock operation first.
	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
	execute the unlock operation (which preceded the lock operation
	in the assembly code), which will unravel the potential deadlock,
	allowing the lock operation to succeed.

	But what if the lock is a sleeplock?  In that case, the code will
	try to enter the scheduler, where it will eventually encounter
	a memory barrier, which will force the earlier unlock operation
	to complete, again unraveling the deadlock.  There might be
	a sleep-unlock race, but the locking primitive needs to resolve
	such races properly in any case.

Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
anything at all - especially with respect to I/O accesses - unless combined
with interrupt disabling operations.

See also the section on "Inter-CPU acquiring barrier effects".


As an example, consider the following:

	*A = a;
	*B = b;
	ACQUIRE
	*C = c;
	*D = d;
	RELEASE
	*E = e;
	*F = f;

The following sequence of events is acceptable:

	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE

	[+] Note that {*F,*A} indicates a combined access.

But none of the following are:

	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E



INTERRUPT DISABLING FUNCTIONS
-----------------------------

Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.
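
For example (a sketch, using the ADDR/DATA convention from the "Interrupts"
section below), a driver writing through a relaxed I/O window must supply
its own barrier; local_irq_restore() will not:

	local_irq_save(flags);
	writel(0, ADDR);
	writel(1, DATA);
	mb();	/* order the MMIO writes; IRQ enable is only a compiler barrier */
	local_irq_restore(flags);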


SLEEP AND WAKE-UP FUNCTIONS
---------------------------

Sleeping and waking on an event flagged in global data can be viewed as an
interaction between two pieces of data: the task state of the task waiting for
the event and the global data used to indicate the event.  To make sure that
these appear to happen in the right order, the primitives to begin the process
of going to sleep, and the primitives to initiate a wake up imply certain
barriers.

Firstly, the sleeper normally follows something like this sequence of events:

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (event_indicated)
			break;
		schedule();
	}

A general memory barrier is interpolated automatically by set_current_state()
after it has altered the task state:

	CPU 1
	===============================
	set_current_state();
	  smp_store_mb();
	    STORE current->state
	    <general barrier>
	LOAD event_indicated

set_current_state() may be wrapped by:

	prepare_to_wait();
	prepare_to_wait_exclusive();

which therefore also imply a general memory barrier after setting the state.
The whole sequence above is available in various canned forms, all of which
interpolate the memory barrier in the right place:

	wait_event();
	wait_event_interruptible();
	wait_event_interruptible_exclusive();
	wait_event_interruptible_timeout();
	wait_event_killable();
	wait_event_timeout();
	wait_on_bit();
	wait_on_bit_lock();

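For example (a sketch, assuming 'wq' is a suitably initialised wait queue),
the open-coded loop above collapses to a single call, with the barrier still
implied in the right place:

	/* sleeper */
	wait_event(wq, event_indicated);

	/* waker, on another CPU */
	event_indicated = 1;
	wake_up(&wq);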

Secondly, code that performs a wake up normally follows something like this:

	event_indicated = 1;
	wake_up(&event_wait_queue);

or:

	event_indicated = 1;
	wake_up_process(event_daemon);

A write memory barrier is implied by wake_up() and co. if and only if they
wake something up.  The barrier occurs before the task state is cleared, and so
sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:

	CPU 1				CPU 2
	===============================	===============================
	set_current_state();		STORE event_indicated
	  smp_store_mb();		wake_up();
	    STORE current->state	  <write barrier>
	    <general barrier>		  STORE current->state
	LOAD event_indicated

To repeat, this write memory barrier is present if and only if something
is actually awakened.  To see this, consider the following sequence of
events, where X and Y are both initially zero:

	CPU 1				CPU 2
	===============================	===============================
	X = 1;				STORE event_indicated
	smp_mb();			wake_up();
	Y = 1;				wait_event(wq, Y == 1);
	wake_up();			  load from Y sees 1, no memory barrier
					load from X might see 0

In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
to see 1.

The available waker functions include:

	complete();
	wake_up();
	wake_up_all();
	wake_up_bit();
	wake_up_interruptible();
	wake_up_interruptible_all();
	wake_up_interruptible_nr();
	wake_up_interruptible_poll();
	wake_up_interruptible_sync();
	wake_up_interruptible_sync_poll();
	wake_up_locked();
	wake_up_locked_poll();
	wake_up_nr();
	wake_up_poll();
	wake_up_process();


[!] Note that the memory barriers implied by the sleeper and the waker do _not_
order multiple stores before the wake-up with respect to loads of those stored
values after the sleeper has called set_current_state().  For instance, if the
sleeper does:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated)
		break;
	__set_current_state(TASK_RUNNING);
	do_something(my_data);

and the waker does:

	my_data = value;
	event_indicated = 1;
	wake_up(&event_wait_queue);

there's no guarantee that the change to event_indicated will be perceived by
the sleeper as coming after the change to my_data.  In such a circumstance, the
code on both sides must interpolate its own memory barriers between the
separate data accesses.  Thus the above sleeper ought to do:

	set_current_state(TASK_INTERRUPTIBLE);
	if (event_indicated) {
		smp_rmb();
		do_something(my_data);
	}

and the waker should do:

	my_data = value;
	smp_wmb();
	event_indicated = 1;
	wake_up(&event_wait_queue);


MISCELLANEOUS FUNCTIONS
-----------------------

Other functions that imply barriers:

 (*) schedule() and similar imply full memory barriers.


===================================
INTER-CPU ACQUIRING BARRIER EFFECTS
===================================

On SMP systems locking primitives give a more substantial form of barrier: one
that does affect memory access ordering on other CPUs, within the context of
conflict on any particular lock.


ACQUIRES VS MEMORY ACCESSES
---------------------------

Consider the following: the system has a pair of spinlocks (M) and (Q), and
three CPUs; then should the following sequence of events occur:

	CPU 1				CPU 2
	===============================	===============================
	WRITE_ONCE(*A, a);		WRITE_ONCE(*E, e);
	ACQUIRE M			ACQUIRE Q
	WRITE_ONCE(*B, b);		WRITE_ONCE(*F, f);
	WRITE_ONCE(*C, c);		WRITE_ONCE(*G, g);
	RELEASE M			RELEASE Q
	WRITE_ONCE(*D, d);		WRITE_ONCE(*H, h);

Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
on the separate CPUs.  It might, for example, see:

	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M

But it won't see any of:

	*B, *C or *D preceding ACQUIRE M
	*A, *B or *C following RELEASE M
	*F, *G or *H preceding ACQUIRE Q
	*E, *F or *G following RELEASE Q



ACQUIRES VS I/O ACCESSES
------------------------

Under certain circumstances (especially involving NUMA), I/O accesses within
two spinlocked sections on two different CPUs may be seen as interleaved by the
PCI bridge, because the PCI bridge does not necessarily participate in the
cache-coherence protocol, and is therefore incapable of issuing the required
read memory barriers.

For example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					spin_unlock(Q);

may be seen by the PCI bridge as follows:

	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5

which would probably cause the hardware to malfunction.


What is necessary here is to intervene with an mmiowb() before dropping the
spinlock, for example:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	writel(1, DATA);
	mmiowb();
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					writel(5, DATA);
					mmiowb();
					spin_unlock(Q);

this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
before either of the stores issued on CPU 2.


Furthermore, following a store by a load from the same device obviates the need
for the mmiowb(), because the load forces the store to complete before the load
is performed:

	CPU 1				CPU 2
	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
	a = readl(DATA);
	spin_unlock(Q);
					spin_lock(Q);
					writel(4, ADDR);
					b = readl(DATA);
					spin_unlock(Q);


See Documentation/DocBook/deviceiobook.tmpl for more information.


=================================
WHERE ARE MEMORY BARRIERS NEEDED?
=================================

Under normal operation, memory operation reordering is generally not going to
be a problem as a single-threaded linear piece of code will still appear to
work correctly, even if it's in an SMP kernel.  There are, however, four
circumstances in which reordering definitely _could_ be a problem:

 (*) Interprocessor interaction.

 (*) Atomic operations.

 (*) Accessing devices.

 (*) Interrupts.


INTERPROCESSOR INTERACTION
--------------------------

When there's a system with more than one processor, more than one CPU in the
system may be working on the same data set at the same time.  This can cause
synchronisation problems, and the usual way of dealing with them is to use
locks.  Locks, however, are quite expensive, and so it may be preferable to
operate without the use of a lock if at all possible.  In such a case
operations that affect both CPUs may have to be carefully ordered to prevent
a malfunction.

Consider, for example, the R/W semaphore slow path.  Here a waiting process is
queued on the semaphore, by virtue of it having a piece of its stack linked to
the semaphore's list of waiting processes:

	struct rw_semaphore {
		...
		spinlock_t lock;
		struct list_head waiters;
	};

	struct rwsem_waiter {
		struct list_head list;
		struct task_struct *task;
	};

To wake up a particular waiter, the up_read() or up_write() functions have to:

 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;

 (2) read the pointer to the waiter's task structure;

 (3) clear the task pointer to tell the waiter it has been given the semaphore;

 (4) call wake_up_process() on the task; and

 (5) release the reference held on the waiter's task struct.

In other words, it has to perform this sequence of events:

	LOAD waiter->list.next;
	LOAD waiter->task;
	STORE waiter->task;
	CALL wakeup
	RELEASE task

and if any of these steps occur out of order, then the whole thing may
malfunction.

Once it has queued itself and dropped the semaphore lock, the waiter does not
get the lock again; it instead just waits for its task pointer to be cleared
before proceeding.  Since the record is on the waiter's stack, this means that
if the task pointer is cleared _before_ the next pointer in the list is read,
another CPU might start processing the waiter and might clobber the waiter's
stack before the up*() function has a chance to read the next pointer.

Consider then what might happen to the above sequence of events:

	CPU 1				CPU 2
	===============================	===============================
					down_xxx()
					Queue waiter
					Sleep
	up_yyy()
	LOAD waiter->task;
	STORE waiter->task;
					Woken up by other event
	<preempt>
					Resume processing
					down_xxx() returns
					call foo()
					foo() clobbers *waiter
	</preempt>
	LOAD waiter->list.next;
	--- OOPS ---

This could be dealt with using the semaphore lock, but then the down_xxx()
function has to needlessly get the spinlock again after being woken up.

The way to deal with this is to insert a general SMP memory barrier:

	LOAD waiter->list.next;
	LOAD waiter->task;
	smp_mb();
	STORE waiter->task;
	CALL wakeup
	RELEASE task

In this case, the barrier makes a guarantee that all memory accesses before the
barrier will appear to happen before all the memory accesses after the barrier
with respect to the other CPUs on the system.  It does _not_ guarantee that all
the memory accesses before the barrier will be complete by the time the barrier
instruction itself is complete.

On a UP system - where this wouldn't be a problem - the smp_mb() is just a
compiler barrier, thus making sure the compiler emits the instructions in the
right order without actually intervening in the CPU.  Since there's only one
CPU, that CPU's dependency ordering logic will take care of everything else.


ATOMIC OPERATIONS
-----------------

Whilst they are technically interprocessor interaction considerations, atomic
operations are noted specially as some of them imply full memory barriers and
some don't, but they're very heavily relied on as a group throughout the
kernel.

Any atomic operation that modifies some state in memory and returns information
about the state (old or new) implies an SMP-conditional general memory barrier
(smp_mb()) on each side of the actual operation (with the exception of
explicit lock operations, described later).  These include:

	xchg();
	atomic_xchg();			atomic_long_xchg();
	atomic_inc_return();		atomic_long_inc_return();
	atomic_dec_return();		atomic_long_dec_return();
	atomic_add_return();		atomic_long_add_return();
	atomic_sub_return();		atomic_long_sub_return();
	atomic_inc_and_test();		atomic_long_inc_and_test();
	atomic_dec_and_test();		atomic_long_dec_and_test();
	atomic_sub_and_test();		atomic_long_sub_and_test();
	atomic_add_negative();		atomic_long_add_negative();
	test_and_set_bit();
	test_and_clear_bit();
	test_and_change_bit();

	/* when succeeds */
	cmpxchg();
	atomic_cmpxchg();		atomic_long_cmpxchg();
	atomic_add_unless();		atomic_long_add_unless();

These are used for such things as implementing ACQUIRE-class and RELEASE-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.
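
For instance (a sketch; the structure layout is hypothetical), the implied
barriers are what make the classic reference-release idiom safe without any
explicit barrier:

	if (atomic_dec_and_test(&obj->ref_count))
		kfree(obj);	/* prior accesses to *obj are ordered before the free */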


The following operations are potential problems as they do _not_ imply memory
barriers, but might be used for implementing such things as RELEASE-class
operations:

	atomic_set();
	set_bit();
	clear_bit();
	change_bit();

With these the appropriate explicit memory barrier should be used if necessary
(smp_mb__before_atomic() for instance).


The following also do _not_ imply memory barriers, and so may require explicit
memory barriers under some circumstances (smp_mb__before_atomic() for
instance):

	atomic_add();
	atomic_sub();
	atomic_inc();
	atomic_dec();

If they're used for statistics generation, then they probably don't need memory
barriers, unless there's a coupling between statistical data.

If they're used for reference counting on an object to control its lifetime,
they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.

If they're used for constructing a lock of some description, then they probably
do need memory barriers as a lock primitive generally has to do things in a
specific order.

Basically, each usage case has to be carefully considered as to whether memory
barriers are needed or not.

The following operations are special locking primitives:

	test_and_set_bit_lock();
	clear_bit_unlock();
	__clear_bit_unlock();

These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.
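
A minimal bit-spinlock sketch using these primitives (bit 0 of the
hypothetical 'word' is the lock bit):

	while (test_and_set_bit_lock(0, &word))
		cpu_relax();			/* ACQUIRE on success */
	/* ... critical section ... */
	clear_bit_unlock(0, &word);		/* RELEASE */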

[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
barriers, and so barrier instructions are superfluous in conjunction with them,
and in such cases the special barrier primitives will be no-ops.

See Documentation/atomic_ops.txt for more information.


ACCESSING DEVICES
-----------------

Many devices can be memory mapped, and so appear to the CPU as if they're just
a set of memory locations.  To control such a device, the driver usually has to
make the right memory accesses in exactly the right order.

However, having a clever CPU or a clever compiler creates a potential problem
in that the carefully sequenced accesses in the driver code won't reach the
device in the requisite order if the CPU or the compiler thinks it is more
efficient to reorder, combine or merge accesses - something that would cause
the device to malfunction.

Inside of the Linux kernel, I/O should be done through the appropriate accessor
routines - such as inb() or writel() - which know how to make such accesses
appropriately sequential.  Whilst this, for the most part, renders the explicit
use of memory barriers unnecessary, there are a couple of situations where they
might be needed:

 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
     so for _all_ general drivers, locks should be used and mmiowb() must be
     issued prior to unlocking the critical section.

 (2) If the accessor functions are used to refer to an I/O memory window with
     relaxed memory access properties, then _mandatory_ memory barriers are
     required to enforce ordering.

See Documentation/DocBook/deviceiobook.tmpl for more information.


INTERRUPTS
----------

A driver may be interrupted by its own interrupt service routine, and thus the
two parts of the driver may interfere with each other's attempts to control or
access the device.

This may be alleviated - at least in part - by disabling local interrupts (a
form of locking), such that the critical operations are all contained within
the interrupt-disabled section in the driver.  Whilst the driver's interrupt
routine is executing, the driver's core may not run on the same CPU, and its
interrupt is not permitted to happen again until the current interrupt has been
handled, thus the interrupt handler does not need to lock against that.

However, consider a driver that was talking to an ethernet card that sports an
address register and a data register.  If that driver's core talks to the card
under interrupt-disablement and then the driver's interrupt handler is invoked:

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
	q = readw(DATA);
	</interrupt>

The store to the data register might happen after the second store to the
address register if ordering rules are sufficiently relaxed:

	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA


If ordering rules are relaxed, it must be assumed that accesses done inside an
interrupt disabled section may leak outside of it and may interleave with
accesses performed in an interrupt - and vice versa - unless implicit or
explicit barriers are used.

Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
registers that form implicit I/O barriers.  If this isn't sufficient then an
mmiowb() may need to be used explicitly.


A similar situation may occur between an interrupt routine and two routines
running on separate CPUs that communicate with each other.  If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.


==========================
KERNEL I/O BARRIER EFFECTS
==========================

When accessing I/O memory, drivers should use the appropriate accessor
functions:

 (*) inX(), outX():

     These are intended to talk to I/O space rather than memory space, but
     that's primarily a CPU-specific concept.  The i386 and x86_64 processors
     do indeed have special I/O space access cycles and instructions, but many
     CPUs don't have such a concept.

     The PCI bus, amongst others, defines an I/O space concept which - on such
     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
     space.  However, it may also be mapped as a virtual I/O space in the CPU's
     memory map, particularly on those CPUs that don't support alternate I/O
     spaces.

     Accesses to this space may be fully synchronous (as on i386), but
     intermediary bridges (such as the PCI host bridge) may not fully honour
     that.

     They are guaranteed to be fully ordered with respect to each other.

     They are not guaranteed to be fully ordered with respect to other types of
     memory and I/O operation.

 (*) readX(), writeX():

     Whether these are guaranteed to be fully ordered and uncombined with
     respect to each other on the issuing CPU depends on the characteristics
     defined for the memory window through which they're accessing.  On later
     i386 architecture machines, for example, this is controlled by way of the
     MTRR registers.

     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
     provided they're not accessing a prefetchable device.

     However, intermediary hardware (such as a PCI bridge) may indulge in
     deferral if it so wishes; to flush a store, a load from the same location
     is preferred[*], but a load from the same device or from configuration
     space should suffice for PCI.

     [*] NOTE! attempting to load from the same location as was written to may
	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
	 example.

     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
     force stores to be ordered.

     Please refer to the PCI specification for more information on interactions
     between PCI transactions.

 (*) readX_relaxed(), writeX_relaxed()

     These are similar to readX() and writeX(), but provide weaker memory
     ordering guarantees.  Specifically, they do not guarantee ordering with
     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
     ordering with respect to LOCK or UNLOCK operations.  If the latter is
     required, an mmiowb() barrier can be used.  Note that relaxed accesses to
     the same peripheral are guaranteed to be ordered with respect to each
     other.  (A sketch of pairing a relaxed write with an explicit barrier
     follows this list.)

 (*) ioreadX(), iowriteX()

     These will perform appropriately for the type of access they're actually
     doing, be it inX()/outX() or readX()/writeX().
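
As promised above, a sketch of pairing a relaxed write with an explicit
barrier; this echoes the earlier dma_wmb() example and reuses its names:

	/* make the descriptor update visible to the device before the
	   doorbell write; writel_relaxed() does not order it by itself */
	desc->status = DEVICE_OWN;
	wmb();
	writel_relaxed(DESC_NOTIFY, doorbell);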


========================================
ASSUMED MINIMUM EXECUTION ORDERING MODEL
========================================

It has to be assumed that the conceptual CPU is weakly-ordered but that it will
maintain the appearance of program causality with respect to itself.  Some CPUs
(such as i386 or x86_64) are more constrained than others (such as powerpc or
frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
of arch-specific code.

This means that it must be considered that the CPU will execute its instruction
stream in any order it feels like - or even in parallel - provided that if an
instruction in the stream depends on an earlier instruction, then that
earlier instruction must be sufficiently complete[*] before the later
instruction may proceed; in other words: provided that the appearance of
causality is maintained.

 [*] Some instructions have more than one effect - such as changing the
     condition codes, changing registers or changing memory - and different
     instructions may depend on different effects.

A CPU may also discard any instruction sequence that winds up having no
ultimate effect.  For example, if two adjacent instructions both load an
immediate value into the same register, the first may be discarded.


Similarly, it has to be assumed that the compiler might reorder the instruction
stream in any way it sees fit, again provided the appearance of causality is
maintained.


============================
THE EFFECTS OF THE CPU CACHE
============================

The way cached memory operations are perceived across the system is affected to
a certain extent by the caches that lie between CPUs and memory, and by the
memory coherence system that maintains the consistency of state in the system.

As far as the way a CPU interacts with another part of the system through the
caches goes, the memory system has to include the CPU's caches, and memory
barriers for the most part act at the interface between the CPU and its cache
(memory barriers logically act on the dotted line in the following diagram):

	    <--- CPU --->         :       <----------- Memory ----------->
	                          :
	+--------+    +--------+  :   +--------+    +-----------+
	|        |    |        |  :   |        |    |           |    +--------+
	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |--->| Memory |
	|        |    |        |  :   |        |    |           |    |        |
	+--------+    +--------+  :   +--------+    |           |    |        |
	                          :                 | Cache     |    +--------+
	                          :                 | Coherency |
	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
	|        |    |        |  :   |        |    |           |    |        |
	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
	|        |    | Queue  |  :   |        |    |           |    |        |
	|        |    |        |  :   |        |    |           |    +--------+
	+--------+    +--------+  :   +--------+    +-----------+
	                          :
	                          :

Although any particular load or store may not actually appear outside of the
CPU that issued it since it may have been satisfied within the CPU's own cache,
it will still appear as if the full memory access had taken place as far as the
other CPUs are concerned since the cache coherency mechanisms will migrate the
cacheline over to the accessing CPU and propagate the effects upon conflict.

The CPU core may execute instructions in any order it deems fit, provided the
expected program causality appears to be maintained.  Some of the instructions
generate load and store operations which then go into the queue of memory
accesses to be performed.  The core may place these in the queue in any order
it wishes, and continue execution until it is forced to wait for an instruction
to complete.

What memory barriers are concerned with is controlling the order in which
accesses cross from the CPU side of things to the memory side of things, and
the order in which the effects are perceived to happen by the other observers
in the system.

[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
their own loads and stores as if they had happened in program order.

[!] MMIO or other device accesses may bypass the cache system.  This depends on
the properties of the memory window through which devices are accessed and/or
the use of any special device communication instructions the CPU may have.
2830

CACHE COHERENCY
---------------

Life isn't quite as simple as it may appear above, however: for while the
caches are expected to be coherent, there's no guarantee that that coherency
will be ordered.  This means that whilst changes made on one CPU will
eventually become visible on all CPUs, there's no guarantee that they will
become apparent in the same order on those other CPUs.


Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):

	            :
	            :                          +--------+
	            :      +---------+         |        |
	+--------+  : +--->| Cache A |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 1 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache B |<------->|        |
	            :      +---------+         |        |
	            :                          | Memory |
	            :      +---------+         | System |
	+--------+  : +--->| Cache C |<------->|        |
	|        |  : |    +---------+         |        |
	|  CPU 2 |<---+                        |        |
	|        |  : |    +---------+         |        |
	+--------+  : +--->| Cache D |<------->|        |
	            :      +---------+         |        |
	            :                          +--------+
	            :

Imagine the system has the following properties:

 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
     resident in memory;

 (*) an even-numbered cache line may be in cache B, cache D or it may still be
     resident in memory;

 (*) whilst the CPU core is interrogating one cache, the other cache may be
     making use of the bus to access the rest of the system - perhaps to
     displace a dirty cacheline or to do a speculative load;

 (*) each cache has a queue of operations that need to be applied to that cache
     to maintain coherency with the rest of the system;

 (*) the coherency queue is not flushed by normal loads to lines already
     present in the cache, even though the contents of the queue may
     potentially affect those loads.

Imagine, then, that two writes are made on the first CPU, with a write barrier
between them to guarantee that they will appear to reach that CPU's caches in
the requisite order:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();			Make sure change to v is visible before
					 change to p
	<A:modify v=2>			v is now in cache A exclusively
	p = &v;
	<B:modify p=&v>			p is now in cache B exclusively

The write memory barrier forces the other CPUs in the system to perceive that
the local CPU's caches have apparently been updated in the correct order.  But
now imagine that the second CPU wants to read those values:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache


This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha remove the
need for coordination in the absence of memory barriers.
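
Expressed as C, the sequences above amount to the pointer-publication idiom.
Here is a minimal sketch using the same variables as the tables above
(READ_ONCE() and WRITE_ONCE() are added per the advice elsewhere in this
document):

	int u = 0, v = 1;
	int *p = &u;

	/* CPU 1: publish v through p */
	v = 2;
	smp_wmb();			/* order store to v before store to p */
	WRITE_ONCE(p, &v);

	/* CPU 2: consume the pointer */
	int *q = READ_ONCE(p);
	smp_read_barrier_depends();	/* a no-op everywhere except Alpha */
	int x = *q;			/* now guaranteed to observe v == 2 */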


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.
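
Drivers do not normally perform this cache maintenance by hand: the streaming
DMA mapping API flushes and/or invalidates the relevant cachelines as part of
mapping and unmapping the buffer.  A minimal sketch (dev, buf and len are
made-up names for this example):

	#include <linux/dma-mapping.h>

	/* The device is about to read buf: mapping it DMA_TO_DEVICE
	 * writes back any dirty cachelines covering the buffer first. */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	if (dma_mapping_error(dev, handle))
		return -ENOMEM;

	/* ... point the device at 'handle' and let it run ... */

	/* A DMA_FROM_DEVICE mapping would instead invalidate the
	 * cachelines so that the CPU rereads the DMA'd data from RAM. */
	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);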

See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned to it
than does the window directed at ordinary RAM.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.
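
On a non-coherent architecture this typically takes the form of syncing the
buffer for the device before the MMIO write that starts the transfer.  A
sketch, assuming 'handle' is a streaming DMA mapping as in the previous
section and 'doorbell' is a hypothetical ioremap()ed device register:

	/* Write back the CPU's cached view of the buffer so that the
	 * device sees it before the (uncached) doorbell write lands. */
	dma_sync_single_for_device(dev, handle, len, DMA_TO_DEVICE);
	writel(1, doorbell);		/* hypothetical "go" register */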


=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = READ_ONCE(*A);
	WRITE_ONCE(*B, b);
	c = READ_ONCE(*C);
	d = READ_ONCE(*D);
	WRITE_ONCE(*E, e);

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance, with the following code:

	U = READ_ONCE(*A);
	WRITE_ONCE(*A, V);
	WRITE_ONCE(*A, W);
	X = READ_ONCE(*A);
	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view
of the world remains consistent.  Note that READ_ONCE() and WRITE_ONCE()
are -not- optional in the above example, as there are architectures
where a given CPU might reorder successive loads to the same location.
On such architectures, READ_ONCE() and WRITE_ONCE() do whatever is
necessary to prevent this; for example, on Itanium the volatile casts
used by READ_ONCE() and WRITE_ONCE() cause GCC to emit the special ld.acq
and st.rel instructions (respectively) that prevent such reordering.

The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or a WRITE_ONCE(), it can be
assumed that the effect of the storage of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or a READ_ONCE() and WRITE_ONCE(), be
reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.
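
Written with WRITE_ONCE() and READ_ONCE(), the same sequences forbid both
transformations (a minimal sketch):

	WRITE_ONCE(*A, V);	/* the store of V may no longer be elided */
	WRITE_ONCE(*A, W);

	WRITE_ONCE(*A, Y);
	Z = READ_ONCE(*A);	/* the load must actually be emitted */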


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary, as it synchronises both
caches with the memory coherence system, thus making it seem that a pointer
change and the new data it points to become visible in the correct order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.



VIRTUAL MACHINE GUESTS
----------------------

Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support.  This is an artifact of
interfacing with an SMP host while running a UP kernel.  Using mandatory
barriers for this use-case would be possible but is often suboptimal.

To handle this case optimally, low-level virt_mb() etc. macros are available.
These have the same effect as smp_mb() etc. when SMP is enabled, but generate
identical code for SMP and non-SMP systems.  For example, virtual machine
guests should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.

These are equivalent to their smp_mb() etc. counterparts in all other respects;
in particular, they do not control MMIO effects: to control MMIO effects, use
mandatory barriers.
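
For example, a guest publishing a producer index into a ring shared with the
host might do the following (a sketch with made-up names, loosely modelled on
virtio):

	/* Make the descriptor contents visible to the host before the
	 * index update, even in a kernel built without CONFIG_SMP. */
	ring[idx & RING_MASK] = desc;
	virt_wmb();
	WRITE_ONCE(shared->avail_idx, idx + 1);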


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
for a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
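
The core of the producer side looks like the following sketch (modelled on
that file; BUF_SIZE is assumed to be a power of two, and CIRC_SPACE() and
CIRC_CNT() come from linux/circ_buf.h):

	unsigned long head = buf->head;
	unsigned long tail = READ_ONCE(buf->tail);

	if (CIRC_SPACE(head, tail, BUF_SIZE) >= 1) {
		buf->data[head & (BUF_SIZE - 1)] = item;
		/* Publish the item before advancing the head index */
		smp_store_release(&buf->head, head + 1);
	}

and the consumer side mirrors it:

	unsigned long head = smp_load_acquire(&buf->head);
	unsigned long tail = buf->tail;

	if (CIRC_CNT(head, tail, BUF_SIZE) >= 1) {
		item = buf->data[tail & (BUF_SIZE - 1)];
		/* Finish reading the item before freeing the slot */
		smp_store_release(&buf->tail, tail + 1);
	}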


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access