			 ============================
			 LINUX KERNEL MEMORY BARRIERS
			 ============================

By: David Howells <dhowells@redhat.com>
    Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Contents:

 (*) Abstract memory access model.

     - Device operations.
     - Guarantees.

 (*) What are memory barriers?

     - Varieties of memory barrier.
     - What may not be assumed about memory barriers?
     - Data dependency barriers.
     - Control dependencies.
     - SMP barrier pairing.
     - Examples of memory barrier sequences.
     - Read memory barriers vs load speculation.
     - Transitivity

 (*) Explicit kernel barriers.

     - Compiler barrier.
     - CPU memory barriers.
     - MMIO write barrier.

 (*) Implicit kernel memory barriers.

     - Locking functions.
     - Interrupt disabling functions.
     - Sleep and wake-up functions.
     - Miscellaneous functions.

 (*) Inter-CPU locking barrier effects.

     - Locks vs memory accesses.
     - Locks vs I/O accesses.

 (*) Where are memory barriers needed?

     - Interprocessor interaction.
     - Atomic operations.
     - Accessing devices.
     - Interrupts.

 (*) Kernel I/O barrier effects.

 (*) Assumed minimum execution ordering model.

 (*) The effects of the cpu cache.

     - Cache coherency.
     - Cache coherency vs DMA.
     - Cache coherency vs MMIO.

 (*) The things CPUs get up to.

     - And then there's the Alpha.

 (*) Example uses.

     - Circular buffers.

 (*) References.


============================
ABSTRACT MEMORY ACCESS MODEL
============================

Consider the following abstract model of the system:

		            :                :
		            :                :
		            :                :
		+-------+   :   +--------+   :   +-------+
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		| CPU 1 |<----->| Memory |<----->| CPU 2 |
		|       |   :   |        |   :   |       |
		|       |   :   |        |   :   |       |
		+-------+   :   +--------+   :   +-------+
		    ^       :       ^        :       ^
		    |       :       |        :       |
		    |       :       |        :       |
		    |       :       v        :       |
		    |       :   +--------+   :       |
		    |       :   |        |   :       |
		    |       :   |        |   :       |
		    +---------->| Device |<----------+
		            :   |        |   :
		            :   |        |   :
		            :   +--------+   :
		            :                :

Each CPU executes a program that generates memory access operations.  In the
abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
perform the memory operations in any order it likes, provided program causality
appears to be maintained.  Similarly, the compiler may also arrange the
instructions it emits in any order it likes, provided it doesn't affect the
apparent operation of the program.

So in the above diagram, the effects of the memory operations performed by a
CPU are perceived by the rest of the system as the operations cross the
interface between the CPU and rest of the system (the dotted lines).


For example, consider the following sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1; B == 2 }
	A = 3;		x = B;
	B = 4;		y = A;

The set of accesses as seen by the memory system in the middle can be arranged
in 24 different combinations:

	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
	STORE B=4, ...
	...

and can thus result in four different combinations of values:

	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3


Furthermore, the stores committed by a CPU to the memory system may not be
perceived by the loads made by another CPU in the same order as the stores were
committed.


As a further example, consider this sequence of events:

	CPU 1		CPU 2
	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;

There is an obvious data dependency here, as the value loaded into D depends on
the address retrieved from P by CPU 2.  At the end of the sequence, any of the
following results are possible:

	(Q == &A) and (D == 1)
	(Q == &B) and (D == 2)
	(Q == &B) and (D == 4)
Note that CPU 2 will never try to load C into D because the CPU will load P
into Q before issuing the load of *Q.


DEVICE OPERATIONS
-----------------

Some devices present their control interfaces as collections of memory
locations, but the order in which the control registers are accessed is very
important.  For instance, imagine an ethernet card with a set of internal
registers that are accessed through an address port register (A) and a data
port register (D).  To read internal register 5, the following code might then
be used:

	*A = 5;
	x = *D;

but this might show up as either of the following two sequences:

	STORE *A = 5, x = LOAD *D
	x = LOAD *D, STORE *A = 5
the second of which will almost certainly result in a malfunction, since it sets
the address _after_ attempting to read the register.
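
In the Linux kernel such device registers would normally be reached through the
MMIO accessor functions, which are discussed in the "Kernel I/O barrier
effects" section below.  As a rough sketch (the register offsets and the
card_read_reg() helper are invented for this example), the read of internal
register 5 might look like:

	#define CARD_ADDR_PORT	0x00	/* hypothetical address port offset */
	#define CARD_DATA_PORT	0x04	/* hypothetical data port offset */

	static u32 card_read_reg(void __iomem *base, u32 reg)
	{
		writel(reg, base + CARD_ADDR_PORT);	/* set address port */
		return readl(base + CARD_DATA_PORT);	/* then read data port */
	}

On most architectures readl() and writel() to the same device are kept in
program order, but see the "Accessing devices" and "Kernel I/O barrier effects"
sections for the rules and exceptions.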


GUARANTEES
----------

There are some minimal guarantees that may be expected of a CPU:

 (*) On any given CPU, dependent memory accesses will be issued in order, with
     respect to itself.  This means that for:

	Q = ACCESS_ONCE(P); smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);

     the CPU will issue the following memory operations:

	Q = LOAD P, D = LOAD *Q

     and always in that order.  On most systems, smp_read_barrier_depends()
     does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
     is required to prevent compiler mischief.  Please note that you
     should normally use something like rcu_dereference() instead of
     open-coding smp_read_barrier_depends().

 (*) Overlapping loads and stores within a particular CPU will appear to be
     ordered within that CPU.  This means that for:

	a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;

     the CPU will only issue the following sequence of memory operations:

	a = LOAD *X, STORE *X = b

     And for:

	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);

     the CPU will only issue:

	STORE *X = c, d = LOAD *X

     (Loads and stores overlap if they are targeted at overlapping pieces of
     memory).

And there are a number of things that _must_ or _must_not_ be assumed:

 (*) It _must_not_ be assumed that the compiler will do what you want with
     memory references that are not protected by ACCESS_ONCE().  Without
     ACCESS_ONCE(), the compiler is within its rights to do all sorts
     of "creative" transformations, which are covered in the Compiler
     Barrier section.

 (*) It _must_not_ be assumed that independent loads and stores will be issued
     in the order given.  This means that for:

	X = *A; Y = *B; *D = Z;

     we may get any of the following sequences:

	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
	STORE *D = Z, Y = LOAD *B,  X = LOAD *A

 (*) It _must_ be assumed that overlapping memory accesses may be merged or
     discarded.  This means that for:

	X = *A; Y = *(A + 4);

     we may get any one of the following sequences:

	X = LOAD *A; Y = LOAD *(A + 4);
	Y = LOAD *(A + 4); X = LOAD *A;
	{X, Y} = LOAD {*A, *(A + 4) };

     And for:

	*A = X; *(A + 4) = Y;

     we may get any of:

	STORE *A = X; STORE *(A + 4) = Y;
	STORE *(A + 4) = Y; STORE *A = X;
	STORE {*A, *(A + 4) } = {X, Y};

And there are anti-guarantees:

 (*) These guarantees do not apply to bitfields, because compilers often
     generate code to modify these using non-atomic read-modify-write
     sequences.  Do not attempt to use bitfields to synchronize parallel
     algorithms.

 (*) Even in cases where bitfields are protected by locks, all fields
     in a given bitfield must be protected by one lock.  If two fields
     in a given bitfield are protected by different locks, the compiler's
     non-atomic read-modify-write sequences can cause an update to one
     field to corrupt the value of an adjacent field.

 (*) These guarantees apply only to properly aligned and sized scalar
     variables.  "Properly sized" currently means variables that are
     the same size as "char", "short", "int" and "long".  "Properly
     aligned" means the natural alignment, thus no constraints for
     "char", two-byte alignment for "short", four-byte alignment for
     "int", and either four-byte or eight-byte alignment for "long",
     on 32-bit and 64-bit systems, respectively.  Note that these
     guarantees were introduced into the C11 standard, so beware when
     using older pre-C11 compilers (for example, gcc 4.6).  The portion
     of the standard containing this guarantee is Section 3.14, which
     defines "memory location" as follows:

     	memory location
		either an object of scalar type, or a maximal sequence
		of adjacent bit-fields all having nonzero width

		NOTE 1: Two threads of execution can update and access
		separate memory locations without interfering with
		each other.

		NOTE 2: A bit-field and an adjacent non-bit-field member
		are in separate memory locations. The same applies
		to two bit-fields, if one is declared inside a nested
		structure declaration and the other is not, or if the two
		are separated by a zero-length bit-field declaration,
		or if they are separated by a non-bit-field member
		declaration. It is not safe to concurrently update two
		bit-fields in the same structure if all members declared
		between them are also bit-fields, no matter what the
		sizes of those intervening bit-fields happen to be.
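
For example, in the following sketch (the structures and lock names are
invented for illustration) the two bitfields of struct foo share one memory
location, so protecting them with different locks is unsafe, whereas the
intervening non-bit-field member in struct bar places its two bitfields in
separate memory locations:

	struct foo {
		int a : 4;	/* protected by foo_lock_a */
		int b : 4;	/* protected by foo_lock_b -- BROKEN: 'a' and
				 * 'b' share a memory location, so updating
				 * 'b' is a non-atomic read-modify-write of
				 * the location that also holds 'a' */
	};

	struct bar {
		int a : 4;	/* protected by bar_lock_a */
		int sep;	/* non-bit-field member... */
		int b : 4;	/* ...so 'b' is in a separate memory location
				 * and may safely be protected by bar_lock_b */
	};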


=========================
WHAT ARE MEMORY BARRIERS?
=========================

As can be seen above, independent memory operations are effectively performed
in random order, but this can be a problem for CPU-CPU interaction and for I/O.
What is required is some way of intervening to instruct the compiler and the
CPU to restrict the order.

Memory barriers are such interventions.  They impose a perceived partial
ordering over the memory operations on either side of the barrier.

Such enforcement is important because the CPUs and other devices in a system
can use a variety of tricks to improve performance, including reordering,
deferral and combination of memory operations; speculative loads; speculative
branch prediction and various types of caching.  Memory barriers are used to
override or suppress these tricks, allowing the code to sanely control the
interaction of multiple CPUs and/or devices.


VARIETIES OF MEMORY BARRIER
---------------------------

Memory barriers come in four basic varieties:

 (1) Write (or store) memory barriers.

     A write memory barrier gives a guarantee that all the STORE operations
     specified before the barrier will appear to happen before all the STORE
     operations specified after the barrier with respect to the other
     components of the system.

     A write barrier is a partial ordering on stores only; it is not required
     to have any effect on loads.

     A CPU can be viewed as committing a sequence of store operations to the
     memory system as time progresses.  All stores before a write barrier will
     occur in the sequence _before_ all the stores after the write barrier.

     [!] Note that write barriers should normally be paired with read or data
     dependency barriers; see the "SMP barrier pairing" subsection.


 (2) Data dependency barriers.

     A data dependency barrier is a weaker form of read barrier.  In the case
     where two loads are performed such that the second depends on the result
     of the first (eg: the first load retrieves the address to which the second
     load will be directed), a data dependency barrier would be required to
     make sure that the target of the second load is updated before the address
     obtained by the first load is accessed.

     A data dependency barrier is a partial ordering on interdependent loads
     only; it is not required to have any effect on stores, independent loads
     or overlapping loads.

     As mentioned in (1), the other CPUs in the system can be viewed as
     committing sequences of stores to the memory system that the CPU being
     considered can then perceive.  A data dependency barrier issued by the CPU
     under consideration guarantees that for any load preceding it, if that
     load touches one of a sequence of stores from another CPU, then by the
     time the barrier completes, the effects of all the stores prior to that
     touched by the load will be perceptible to any loads issued after the data
     dependency barrier.

     See the "Examples of memory barrier sequences" subsection for diagrams
     showing the ordering constraints.

     [!] Note that the first load really has to have a _data_ dependency and
     not a control dependency.  If the address for the second load is dependent
     on the first load, but the dependency is through a conditional rather than
     actually loading the address itself, then it's a _control_ dependency and
     a full read barrier or better is required.  See the "Control dependencies"
     subsection for more information.

     [!] Note that data dependency barriers should normally be paired with
     write barriers; see the "SMP barrier pairing" subsection.


 (3) Read (or load) memory barriers.

     A read barrier is a data dependency barrier plus a guarantee that all the
     LOAD operations specified before the barrier will appear to happen before
     all the LOAD operations specified after the barrier with respect to the
     other components of the system.

     A read barrier is a partial ordering on loads only; it is not required to
     have any effect on stores.

     Read memory barriers imply data dependency barriers, and so can substitute
     for them.

     [!] Note that read barriers should normally be paired with write barriers;
     see the "SMP barrier pairing" subsection.


 (4) General memory barriers.

     A general memory barrier gives a guarantee that all the LOAD and STORE
     operations specified before the barrier will appear to happen before all
     the LOAD and STORE operations specified after the barrier with respect to
     the other components of the system.

     A general memory barrier is a partial ordering over both loads and stores.

     General memory barriers imply both read and write memory barriers, and so
     can substitute for either.


And a couple of implicit varieties:

 (5) ACQUIRE operations.

     This acts as a one-way permeable barrier.  It guarantees that all memory
     operations after the ACQUIRE operation will appear to happen after the
     ACQUIRE operation with respect to the other components of the system.
     ACQUIRE operations include LOCK operations and smp_load_acquire()
     operations.

     Memory operations that occur before an ACQUIRE operation may appear to
     happen after it completes.

     An ACQUIRE operation should almost always be paired with a RELEASE
     operation.


 (6) RELEASE operations.

     This also acts as a one-way permeable barrier.  It guarantees that all
     memory operations before the RELEASE operation will appear to happen
     before the RELEASE operation with respect to the other components of the
     system.  RELEASE operations include UNLOCK operations and
     smp_store_release() operations.

     Memory operations that occur after a RELEASE operation may appear to
     happen before it completes.

     The use of ACQUIRE and RELEASE operations generally precludes the need
     for other sorts of memory barrier (but note the exceptions mentioned in
     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
     pair is -not- guaranteed to act as a full memory barrier.  However, after
     an ACQUIRE on a given variable, all memory accesses preceding any prior
     RELEASE on that same variable are guaranteed to be visible.  In other
     words, within a given variable's critical section, all accesses of all
     previous critical sections for that variable are guaranteed to have
     completed.

     This means that ACQUIRE acts as a minimal "acquire" operation and
     RELEASE acts as a minimal "release" operation.
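
As an illustration, here is a sketch of simple message passing using
smp_store_release() and smp_load_acquire() (the variable and function names
are invented for this example):

	int data;
	int flag;

	void producer(void)
	{
		data = 42;
		smp_store_release(&flag, 1);	/* RELEASE: the store to data
						 * appears to happen before
						 * the store to flag */
	}

	void consumer(void)
	{
		if (smp_load_acquire(&flag))	/* ACQUIRE: later accesses
						 * appear to happen after
						 * this load */
			do_something_with(data);  /* sees data == 42 */
	}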


Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device.  If it can be guaranteed that
there won't be any such interaction in any particular piece of code, then
memory barriers are unnecessary in that piece of code.


Note that these are the _minimum_ guarantees.  Different architectures may give
more substantial guarantees, but they may _not_ be relied upon outside of arch
specific code.


WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
----------------------------------------------

There are certain things that the Linux kernel memory barriers do not guarantee:

 (*) There is no guarantee that any of the memory accesses specified before a
     memory barrier will be _complete_ by the completion of a memory barrier
     instruction; the barrier can be considered to draw a line in that CPU's
     access queue that accesses of the appropriate type may not cross.

 (*) There is no guarantee that issuing a memory barrier on one CPU will have
     any direct effect on another CPU or any other hardware in the system.  The
     indirect effect will be the order in which the second CPU sees the effects
     of the first CPU's accesses occur, but see the next point:

 (*) There is no guarantee that a CPU will see the correct order of effects
     from a second CPU's accesses, even _if_ the second CPU uses a memory
     barrier, unless the first CPU _also_ uses a matching memory barrier (see
     the subsection on "SMP Barrier Pairing").

 (*) There is no guarantee that some intervening piece of off-the-CPU
     hardware[*] will not reorder the memory accesses.  CPU cache coherency
     mechanisms should propagate the indirect effects of a memory barrier
     between CPUs, but might not do so in order.

	[*] For information on bus mastering DMA and coherency please read:

	    Documentation/PCI/pci.txt
	    Documentation/DMA-API-HOWTO.txt
	    Documentation/DMA-API.txt


DATA DEPENDENCY BARRIERS
------------------------

The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed.  To illustrate, consider the
following sequence of events:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
			      Q = ACCESS_ONCE(P);
			      D = *Q;

There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:

	(Q == &A) implies (D == 1)
	(Q == &B) implies (D == 4)

But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
leading to the following situation:

	(Q == &B) and (D == 2) ????

Whilst this may seem like a failure of coherency or causality maintenance, it
isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
Alpha).

To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:

	CPU 1		      CPU 2
	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
			      Q = ACCESS_ONCE(P);
			      <data dependency barrier>
			      D = *Q;

This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.

[!] Note that this extremely counterintuitive situation arises most easily on
machines with split caches, so that, for example, one cache bank processes
even-numbered cache lines and the other bank processes odd-numbered cache
lines.  The pointer P might be stored in an odd-numbered cache line, and the
variable B might be stored in an even-numbered cache line.  Then, if the
even-numbered bank of the reading CPU's cache is extremely busy while the
odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).


Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:

	CPU 1		      CPU 2
	===============	      ===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	ACCESS_ONCE(P) = 1;
			      Q = ACCESS_ONCE(P);
			      <data dependency barrier>
			      D = M[Q];


The data dependency barrier is very important to the RCU system,
for example.  See rcu_assign_pointer() and rcu_dereference() in
include/linux/rcupdate.h.  This permits the current target of an RCU'd
pointer to be replaced with a new modified target, without the replacement
target appearing to be incompletely initialised.

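As a sketch of that pattern (structure, variable and function names invented
for this example, and error handling omitted):

	struct thing {
		int val;
	};
	struct thing __rcu *gp;		/* RCU-protected pointer */

	void publish(void)
	{
		struct thing *p = kmalloc(sizeof(*p), GFP_KERNEL);

		p->val = 1;
		rcu_assign_pointer(gp, p);	/* includes the needed
						 * write barrier */
	}

	void reader(void)
	{
		struct thing *q;

		rcu_read_lock();
		q = rcu_dereference(gp);	/* includes the needed data
						 * dependency barrier */
		if (q)
			do_something_with(q->val);
		rcu_read_unlock();
	}
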
See also the subsection on "Cache Coherency" for a more thorough example.


CONTROL DEPENDENCIES
--------------------

A load-load control dependency requires a full read memory barrier, not
simply a data dependency barrier to make it work correctly.  Consider the
following bit of code:

	q = ACCESS_ONCE(a);
	if (q) {
		<data dependency barrier>  /* BUG: No data dependency!!! */
		p = ACCESS_ONCE(b);
	}

This will not have the desired effect because there is no actual data
dependency, but rather a control dependency that the CPU may short-circuit
by attempting to predict the outcome in advance, so that other CPUs see
the load from b as having happened before the load from a.  In such a
case what's actually required is:

	q = ACCESS_ONCE(a);
	if (q) {
		<read barrier>
		p = ACCESS_ONCE(b);
	}

However, stores are not speculated.  This means that ordering -is- provided
for load-store control dependencies, as in the following example:

	q = READ_ONCE_CTRL(a);
	if (q) {
		ACCESS_ONCE(b) = p;
	}

Control dependencies pair normally with other types of barriers.  That
said, please note that READ_ONCE_CTRL() is not optional!  Without the
READ_ONCE_CTRL(), the compiler might combine the load from 'a' with
other loads from 'a', and the store to 'b' with other stores to 'b',
with possible highly counterintuitive effects on ordering.

Worse yet, if the compiler is able to prove (say) that the value of
variable 'a' is always non-zero, it would be well within its rights
to optimize the original example by eliminating the "if" statement
as follows:

	q = a;
	b = p;  /* BUG: Compiler and CPU can both reorder!!! */

Finally, the READ_ONCE_CTRL() includes an smp_read_barrier_depends()
that DEC Alpha needs in order to respect control dependencies.

So don't leave out the READ_ONCE_CTRL().

It is tempting to try to enforce ordering on identical stores on both
branches of the "if" statement as follows:

	q = READ_ONCE_CTRL(a);
	if (q) {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		barrier();
		ACCESS_ONCE(b) = p;
		do_something_else();
	}

Unfortunately, current compilers will transform this as follows at high
optimization levels:

	q = READ_ONCE_CTRL(a);
	barrier();
	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
	if (q) {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something();
	} else {
		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
		do_something_else();
	}

Now there is no conditional between the load from 'a' and the store to
'b', which means that the CPU is within its rights to reorder them:
The conditional is absolutely required, and must be present in the
assembly code even after all compiler optimizations have been applied.
Therefore, if you need ordering in this example, you need explicit
memory barriers, for example, smp_store_release():

	q = ACCESS_ONCE(a);
	if (q) {
		smp_store_release(&b, p);
		do_something();
	} else {
		smp_store_release(&b, p);
		do_something_else();
	}

In contrast, without explicit memory barriers, two-legged-if control
ordering is guaranteed only when the stores differ, for example:

	q = READ_ONCE_CTRL(a);
	if (q) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

The initial READ_ONCE_CTRL() is still required to prevent the compiler
from proving the value of 'a'.

In addition, you need to be careful what you do with the local variable 'q',
otherwise the compiler might be able to guess the value and again remove
the needed conditional.  For example:

	q = READ_ONCE_CTRL(a);
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

If MAX is defined to be 1, then the compiler knows that (q % MAX) is
equal to zero, in which case the compiler is within its rights to
transform the above code into the following:

	q = READ_ONCE_CTRL(a);
	ACCESS_ONCE(b) = p;
	do_something_else();

Given this transformation, the CPU is not required to respect the ordering
between the load from variable 'a' and the store to variable 'b'.  It is
tempting to add a barrier(), but this does not help.  The conditional
is gone, and the barrier won't bring it back.  Therefore, if you are
relying on this ordering, you should make sure that MAX is greater than
one, perhaps as follows:

	q = READ_ONCE_CTRL(a);
	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
	if (q % MAX) {
		ACCESS_ONCE(b) = p;
		do_something();
	} else {
		ACCESS_ONCE(b) = r;
		do_something_else();
	}

Please note once again that the stores to 'b' differ.  If they were
identical, as noted earlier, the compiler could pull this store outside
of the 'if' statement.

You must also be careful not to rely too much on boolean short-circuit
evaluation.  Consider this example:

	q = READ_ONCE_CTRL(a);
	if (q || 1 > 0)
		ACCESS_ONCE(b) = 1;

Because the first condition cannot fault and the second condition is
always true, the compiler can transform this example as follows,
defeating the control dependency:

	q = READ_ONCE_CTRL(a);
	ACCESS_ONCE(b) = 1;

This example underscores the need to ensure that the compiler cannot
out-guess your code.  More generally, although ACCESS_ONCE() does force
the compiler to actually emit code for a given load, it does not force
the compiler to use the results.

Finally, control dependencies do -not- provide transitivity.  This is
demonstrated by two related examples, with the initial values of
x and y both being zero:

	CPU 0                     CPU 1
	=======================   =======================
	r1 = READ_ONCE_CTRL(x);   r2 = READ_ONCE_CTRL(y);
	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;

	assert(!(r1 == 1 && r2 == 1));

The above two-CPU example will never trigger the assert().  However,
if control dependencies guaranteed transitivity (which they do not),
then adding the following CPU would guarantee a related assertion:

	CPU 2
	=====================
	ACCESS_ONCE(x) = 2;

	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */

But because control dependencies do -not- provide transitivity, the above
assertion can fail after the combined three-CPU example completes.  If you
need the three-CPU example to provide ordering, you will need smp_mb()
between the loads and stores in the CPU 0 and CPU 1 code fragments,
that is, just before or just after the "if" statements.  Furthermore,
the original two-CPU example is very fragile and should be avoided.

These two examples are the LB and WWC litmus tests from this paper:
http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.

In summary:

  (*) Control dependencies must be headed by READ_ONCE_CTRL().
      Or, as a much less preferable alternative, interpose
      smp_read_barrier_depends() between a READ_ONCE() or
      ACCESS_ONCE() read and the control-dependent write.

  (*) Control dependencies can order prior loads against later stores.
      However, they do -not- guarantee any other sort of ordering:
      Not prior loads against later loads, nor prior stores against
      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
      later loads, smp_mb().

  (*) If both legs of the "if" statement begin with identical stores
      to the same variable, a barrier() statement is required at the
      beginning of each leg of the "if" statement.

  (*) Control dependencies require at least one run-time conditional
      between the prior load and the subsequent store, and this
      conditional must involve the prior load.  If the compiler
      is able to optimize the conditional away, it will have also
      optimized away the ordering.  Careful use of ACCESS_ONCE() can
      help to preserve the needed conditional.

  (*) Control dependencies require that the compiler avoid reordering the
      dependency into nonexistence.  Careful use of ACCESS_ONCE() or
      barrier() can help to preserve your control dependency.  Please
      see the Compiler Barrier section for more information.

  (*) Control dependencies pair normally with other types of barriers.

  (*) Control dependencies do -not- provide transitivity.  If you
      need transitivity, use smp_mb().
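
For example, to make the three-CPU example above provide ordering, the control
dependencies in the CPU 0 and CPU 1 fragments would be supplemented with full
barriers, along these lines (a sketch based on the smp_mb() placement
described above):

	CPU 0                     CPU 1
	=======================   =======================
	r1 = READ_ONCE_CTRL(x);   r2 = READ_ONCE_CTRL(y);
	smp_mb();                 smp_mb();
	if (r1 > 0)               if (r2 > 0)
	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;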


SMP BARRIER PAIRING
-------------------

When dealing with CPU-CPU interactions, certain types of memory barrier should
always be paired.  A lack of appropriate pairing is almost certainly an error.

General barriers pair with each other, though they also pair with most
other types of barriers, albeit without transitivity.  An acquire barrier
pairs with a release barrier, but both may also pair with other barriers,
including of course general barriers.  A write barrier pairs with a data
dependency barrier, a control dependency, an acquire barrier, a release
barrier, a read barrier, or a general barrier.  Similarly a read barrier,
control dependency, or a data dependency barrier pairs with a write
barrier, an acquire barrier, a release barrier, or a general barrier:

	CPU 1		      CPU 2
	===============	      ===============
	ACCESS_ONCE(a) = 1;
	<write barrier>
	ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
			      <read barrier>
			      y = ACCESS_ONCE(a);

Or:

	CPU 1		      CPU 2
	===============	      ===============================
	a = 1;
	<write barrier>
	ACCESS_ONCE(b) = &a;  x = ACCESS_ONCE(b);
			      <data dependency barrier>
			      y = *x;

Or even:

	CPU 1		      CPU 2
	===============	      ===============================
	r1 = ACCESS_ONCE(y);
	<general barrier>
	ACCESS_ONCE(x) = 1;   if (r2 = ACCESS_ONCE(x)) {
			         <implicit control dependency>
			         ACCESS_ONCE(y) = 1;
			      }

	assert(r1 == 0 || r2 == 0);

Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.

[!] Note that the stores before the write barrier would normally be expected to
match the loads after the read barrier or the data dependency barrier, and vice
versa:

	CPU 1                               CPU 2
	===================                 ===================
	ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
	ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
	<write barrier>            \        <read barrier>
	ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
	ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);
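
Expressed as kernel code, the first pairing above might look like the
following sketch (function names invented for this example):

	int a, b;

	void cpu1(void)
	{
		ACCESS_ONCE(a) = 1;
		smp_wmb();		/* pairs with the smp_rmb() below */
		ACCESS_ONCE(b) = 2;
	}

	void cpu2(void)
	{
		int x, y;

		x = ACCESS_ONCE(b);
		smp_rmb();		/* pairs with the smp_wmb() above */
		y = ACCESS_ONCE(a);

		/* if x == 2, then y is guaranteed to be 1 */
	}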


EXAMPLES OF MEMORY BARRIER SEQUENCES
------------------------------------

Firstly, write barriers act as partial orderings on store operations.
Consider the following sequence of events:

	CPU 1
	=======================
	STORE A = 1
	STORE B = 2
	STORE C = 3
	<write barrier>
	STORE D = 4
	STORE E = 5

This sequence of events is committed to the memory coherence system in an order
that the rest of the system might perceive as the unordered set of { STORE A,
STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
}:

	+-------+       :      :
	|       |       +------+
	|       |------>| C=3  |     }     /\
	|       |  :    +------+     }-----  \  -----> Events perceptible to
	|       |  :    | A=1  |     }        \/       the rest of the system
	|       |  :    +------+     }
	| CPU 1 |  :    | B=2  |     }
	|       |       +------+     }
	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
	|       |       +------+     }        requires all stores prior to the
	|       |  :    | E=5  |     }        barrier to be committed before
	|       |  :    +------+     }        further stores may take place
	|       |------>| D=4  |     }
	|       |       +------+
	+-------+       :      :
	                   |
	                   | Sequence in which stores are committed to the
	                   | memory system by CPU 1
	                   V


Secondly, data dependency barriers act as partial orderings on data-dependent
loads.  Consider the following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				LOAD *C (reads B)

Without intervention, CPU 2 may perceive the events on CPU 1 in some
effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+  | Sequence of update
	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
	|       |  :    +------+     \          +-------+  | CPU 2
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	    Apparently incorrect --->  |        | B->7  |------>|       |
	    perception of B (!)        |        +-------+       |       |
	                               |        :       :       |       |
	                               |        +-------+       |       |
	    The load of X holds --->    \       | X->9  |------>|       |
	    up the maintenance           \      +-------+       |       |
	    of coherence of B             ----->| B->2  |       +-------+
	                                        +-------+
	                                        :       :


In the above example, CPU 2 perceives that B is 7, despite the load of *C
(which would be B) coming after the LOAD of C.

If, however, a data dependency barrier were to be placed between the load of C
and the load of *C (ie: B) on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ B = 7; X = 9; Y = 8; C = &Y }
	STORE A = 1
	STORE B = 2
	<write barrier>
	STORE C = &B		LOAD X
	STORE D = 4		LOAD C (gets &B)
				<data dependency barrier>
				LOAD *C (reads B)

then the following will occur:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| B=2  |-----       --->| Y->8  |
	|       |  :    +------+     \          +-------+
	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
	|       |       +------+       |        +-------+
	|       |   wwwwwwwwwwwwwwww   |        :       :
	|       |       +------+       |        :       :
	|       |  :    | C=&B |---    |        :       :       +-------+
	|       |  :    +------+   \   |        +-------+       |       |
	|       |------>| D=4  |    ----------->| C->&B |------>|       |
	|       |       +------+       |        +-------+       |       |
	+-------+       :      :       |        :       :       |       |
	                               |        :       :       |       |
	                               |        :       :       | CPU 2 |
	                               |        +-------+       |       |
	                               |        | X->9  |------>|       |
	                               |        +-------+       |       |
	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
	  prior to the store of C        \      +-------+       |       |
	  are perceptible to              ----->| B->2  |------>|       |
	  subsequent loads                      +-------+       |       |
	                                        :       :       +-------+


And thirdly, a read barrier acts as a partial order on loads.  Consider the
following sequence of events:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A

Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
some effectively random order, despite the write barrier issued by CPU 1:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       | A->0  |------>|       |
	                                |       +-------+       |       |
	                                |       :       :       +-------+
	                                 \      :       :
	                                  \     +-------+
	                                   ---->| A->1  |
	                                        +-------+
	                                        :       :


If, however, a read barrier were to be placed between the load of B and the
load of A on CPU 2:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				<read barrier>
				LOAD A

then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
2:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>|       |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


To illustrate this more completely, consider what could happen if the code
contained a load of A either side of the read barrier:

	CPU 1			CPU 2
	=======================	=======================
		{ A = 0, B = 9 }
	STORE A=1
	<write barrier>
	STORE B=2
				LOAD B
				LOAD A [first load of A]
				<read barrier>
				LOAD A [second load of A]

Even though the two loads of A both occur after the load of B, they may both
come up with different values:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                |       :       :       |       |
	                                |       +-------+       |       |
	                                |       | A->0  |------>| 1st   |
	                                |       +-------+       |       |
	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
	  barrier causes all effects      \     +-------+       |       |
	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
	  to be perceptible to CPU 2            +-------+       |       |
	                                        :       :       +-------+


But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
before the read barrier completes anyway:

	+-------+       :      :                :       :
	|       |       +------+                +-------+
	|       |------>| A=1  |------      --->| A->0  |
	|       |       +------+      \         +-------+
	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
	|       |       +------+        |       +-------+
	|       |------>| B=2  |---     |       :       :
	|       |       +------+   \    |       :       :       +-------+
	+-------+       :      :    \   |       +-------+       |       |
	                             ---------->| B->2  |------>|       |
	                                |       +-------+       | CPU 2 |
	                                |       :       :       |       |
	                                 \      :       :       |       |
	                                  \     +-------+       |       |
	                                   ---->| A->1  |------>| 1st   |
	                                        +-------+       |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	                                        | A->1  |------>| 2nd   |
	                                        +-------+       |       |
	                                        :       :       +-------+


The guarantee is that the second load will always come up with A == 1 if the
load of B came up with B == 2.  No such guarantee exists for the first load of
A; that may come up with either A == 0 or A == 1.


READ MEMORY BARRIERS VS LOAD SPECULATION
----------------------------------------

Many CPUs speculate with loads: that is they see that they will need to load an
item from memory, and they find a time where they're not using the bus for any
other loads, and so do the load in advance - even though they haven't actually
got to that point in the instruction execution flow yet.  This permits the
actual load instruction to potentially complete immediately because the CPU
already has the value to hand.

It may turn out that the CPU didn't actually need the value - perhaps because a
branch circumvented the load - in which case it can discard the value or just
cache it for later use.

Consider:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE		} Divide instructions generally
				DIVIDE		} take a long time to perform
				LOAD A

Which might appear as this:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	Once the divisions are complete -->     :       :   ~-->|       |
	the CPU can then perform the            :       :       |       |
	LOAD with immediate effect              :       :       +-------+


Placing a read barrier or a data dependency barrier just before the second
load:

	CPU 1			CPU 2
	=======================	=======================
				LOAD B
				DIVIDE
				DIVIDE
				<read barrier>
				LOAD A

will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used.  If there was no change made to the
speculated memory location, then the speculated value will just be used:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrr~   |       |
	                                        :       :   ~   |       |
	                                        :       :   ~-->|       |
	                                        :       :       |       |
	                                        :       :       +-------+


but if there was an update or an invalidation from another CPU pending, then
the speculation will be cancelled and the value reloaded:

	                                        :       :       +-------+
	                                        +-------+       |       |
	                                    --->| B->2  |------>|       |
	                                        +-------+       | CPU 2 |
	                                        :       :DIVIDE |       |
	                                        +-------+       |       |
	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
	division speculates on the              +-------+   ~   |       |
	LOAD of A                               :       :   ~   |       |
	                                        :       :DIVIDE |       |
	                                        :       :   ~   |       |
	                                        :       :   ~   |       |
	                                    rrrrrrrrrrrrrrrrr   |       |
	                                        +-------+       |       |
	The speculation is discarded --->   --->| A->1  |------>|       |
	and an updated value is                 +-------+       |       |
	retrieved                               :       :       +-------+
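
In kernel code, the reader above corresponds to something like the following
sketch; the smp_rmb() forces any speculatively loaded value of A to be
discarded and reloaded if A has been updated or invalidated in the meantime:

	b = ACCESS_ONCE(B);
	/* ... long-running work, during which the CPU may
	 * speculatively perform the load of A ... */
	smp_rmb();		/* revalidate speculated loads */
	a = ACCESS_ONCE(A);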


TRANSITIVITY
------------

Transitivity is a deeply intuitive notion about ordering that is not
always provided by real computer systems.  The following example
demonstrates transitivity (also called "cumulativity"):

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<general barrier>	<general barrier>
				LOAD Y			LOAD X

Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
This indicates that CPU 2's load from X in some sense follows CPU 1's
store to X and that CPU 2's load from Y in some sense preceded CPU 3's
store to Y.  The question is then "Can CPU 3's load from X return 0?"

Because CPU 2's load from X in some sense came after CPU 1's store, it
is natural to expect that CPU 3's load from X must therefore return 1.
This expectation is an example of transitivity: if a load executing on
CPU A follows a load from the same variable executing on CPU B, then
CPU A's load must either return the same value that CPU B's load did,
or must return some later value.

In the Linux kernel, use of general memory barriers guarantees
transitivity.  Therefore, in the above example, if CPU 2's load from X
returns 1 and its load from Y returns 0, then CPU 3's load from X must
also return 1.

However, transitivity is -not- guaranteed for read or write barriers.
For example, suppose that CPU 2's general barrier in the above example
is changed to a read barrier as shown below:

	CPU 1			CPU 2			CPU 3
	=======================	=======================	=======================
		{ X = 0, Y = 0 }
	STORE X=1		LOAD X			STORE Y=1
				<read barrier>		<general barrier>
				LOAD Y			LOAD X

This substitution destroys transitivity: in this example, it is perfectly
legal for CPU 2's load from X to return 1, its load from Y to return 0,
and CPU 3's load from X to return 0.

The key point is that although CPU 2's read barrier orders its pair
of loads, it does not guarantee to order CPU 1's store.  Therefore, if
this example runs on a system where CPUs 1 and 2 share a store buffer
or a level of cache, CPU 2 might have early access to CPU 1's writes.
General barriers are therefore required to ensure that all CPUs agree
on the combined order of CPU 1's and CPU 2's accesses.

To reiterate, if your code requires transitivity, use general barriers
throughout.
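
Expressed as kernel code, the transitive version of the example might look
like this sketch (variable and function names invented for this example):

	int x, y;
	int r1, r2, r3;

	void cpu1(void)
	{
		ACCESS_ONCE(x) = 1;
	}

	void cpu2(void)
	{
		r1 = ACCESS_ONCE(x);
		smp_mb();	/* general barrier: preserves transitivity */
		r2 = ACCESS_ONCE(y);
	}

	void cpu3(void)
	{
		ACCESS_ONCE(y) = 1;
		smp_mb();	/* general barrier */
		r3 = ACCESS_ONCE(x);
	}

	/* if r1 == 1 and r2 == 0, then r3 is guaranteed to be 1 */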


========================
EXPLICIT KERNEL BARRIERS
========================

The Linux kernel has a variety of different barriers that act at different
levels:

  (*) Compiler barrier.

  (*) CPU memory barriers.

  (*) MMIO write barrier.


COMPILER BARRIER
----------------

The Linux kernel has an explicit compiler barrier function that prevents the
compiler from moving the memory accesses either side of it to the other side:

	barrier();

This is a general barrier -- there are no read-read or write-write variants
of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
of barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().

The barrier() function has the following effects:

 (*) Prevents the compiler from reordering accesses following the
     barrier() to precede any accesses preceding the barrier().
     One example use for this property is to ease communication between
     interrupt-handler code and the code that was interrupted.

 (*) Within a loop, forces the compiler to load the variables used
     in that loop's conditional on each pass through that loop.
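
For example, a loop that waits for an interrupt handler to set a flag might
use barrier() to force the flag to be reloaded on each pass (a sketch, with
invented names; ACCESS_ONCE() on the flag would be an alternative):

	static int done;	/* set to 1 by the interrupt handler */

	void wait_for_done(void)
	{
		while (!done)
			barrier();	/* forces 'done' to be reloaded from
					 * memory on each pass; without it the
					 * compiler may hoist the load out of
					 * the loop */
	}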

The ACCESS_ONCE() function can prevent any number of optimizations that,
while perfectly safe in single-threaded code, can be fatal in concurrent
code.  Here are some examples of these sorts of optimizations:

 (*) The compiler is within its rights to reorder loads and stores
     to the same variable, and in some cases, the CPU is within its
     rights to reorder loads to the same variable.  This means that
     the following code:

	a[0] = x;
	a[1] = x;

     Might result in an older value of x stored in a[1] than in a[0].
     Prevent both the compiler and the CPU from doing this as follows:

	a[0] = ACCESS_ONCE(x);
	a[1] = ACCESS_ONCE(x);

     In short, ACCESS_ONCE() provides cache coherence for accesses from
     multiple CPUs to a single variable.

 (*) The compiler is within its rights to merge successive loads from
     the same variable.  Such merging can cause the compiler to "optimize"
     the following code:

	while (tmp = a)
		do_something_with(tmp);

     into the following code, which, although in some sense legitimate
     for single-threaded code, is almost certainly not what the developer
     intended:

	if (tmp = a)
		for (;;)
			do_something_with(tmp);

     Use ACCESS_ONCE() to prevent the compiler from doing this to you:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

 (*) The compiler is within its rights to reload a variable, for example,
     in cases where high register pressure prevents the compiler from
     keeping all data of interest in registers.  The compiler might
     therefore optimize the variable 'tmp' out of our previous example:

	while (tmp = a)
		do_something_with(tmp);

     This could result in the following code, which is perfectly safe in
     single-threaded code, but can be fatal in concurrent code:

	while (a)
		do_something_with(a);

     For example, the optimized version of this code could result in
     passing a zero to do_something_with() in the case where the variable
     a was modified by some other CPU between the "while" statement and
     the call to do_something_with().

     Again, use ACCESS_ONCE() to prevent the compiler from doing this:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     Note that if the compiler runs short of registers, it might save
     tmp onto the stack.  The overhead of this saving and later restoring
     is why compilers reload variables.  Doing so is perfectly safe for
     single-threaded code, so you need to tell the compiler about cases
     where it is not safe.

 (*) The compiler is within its rights to omit a load entirely if it knows
     what the value will be.  For example, if the compiler can prove that
     the value of variable 'a' is always zero, it can optimize this code:

	while (tmp = a)
		do_something_with(tmp);

     Into this:

	do { } while (0);

     This transformation is a win for single-threaded code because it gets
     rid of a load and a branch.  The problem is that the compiler will
     carry out its proof assuming that the current CPU is the only one
     updating variable 'a'.  If variable 'a' is shared, then the compiler's
     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
     that it doesn't know as much as it thinks it does:

	while (tmp = ACCESS_ONCE(a))
		do_something_with(tmp);

     But please note that the compiler is also closely watching what you
     do with the value after the ACCESS_ONCE().  For example, suppose you
     do the following and MAX is a preprocessor macro with the value 1:

	while ((tmp = ACCESS_ONCE(a)) % MAX)
		do_something_with(tmp);

     Then the compiler knows that the result of the "%" operator applied
     to MAX will always be zero, again allowing the compiler to optimize
     the code into near-nonexistence.  (It will still load from the
     variable 'a'.)

 (*) Similarly, the compiler is within its rights to omit a store entirely
     if it knows that the variable already has the value being stored.
     Again, the compiler assumes that the current CPU is the only one
1465     storing into the variable, which can cause the compiler to do the
1466     wrong thing for shared variables.  For example, suppose you have
1467     the following:
1468
1469	a = 0;
1470	/* Code that does not store to variable a. */
1471	a = 0;
1472
1473     The compiler sees that the value of variable 'a' is already zero, so
1474     it might well omit the second store.  This would come as a fatal
1475     surprise if some other CPU might have stored to variable 'a' in the
1476     meantime.
1477
1478     Use ACCESS_ONCE() to prevent the compiler from making this sort of
1479     wrong guess:
1480
1481	ACCESS_ONCE(a) = 0;
1482	/* Code that does not store to variable a. */
1483	ACCESS_ONCE(a) = 0;
1484
1485 (*) The compiler is within its rights to reorder memory accesses unless
1486     you tell it not to.  For example, consider the following interaction
1487     between process-level code and an interrupt handler:
1488
1489	void process_level(void)
1490	{
1491		msg = get_message();
1492		flag = true;
1493	}
1494
1495	void interrupt_handler(void)
1496	{
1497		if (flag)
1498			process_message(msg);
1499	}
1500
1501     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
1503     win for single-threaded code:
1504
1505	void process_level(void)
1506	{
1507		flag = true;
1508		msg = get_message();
1509	}
1510
     If the interrupt occurs between these two statements, then
1512     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
1513     to prevent this as follows:
1514
1515	void process_level(void)
1516	{
1517		ACCESS_ONCE(msg) = get_message();
1518		ACCESS_ONCE(flag) = true;
1519	}
1520
1521	void interrupt_handler(void)
1522	{
1523		if (ACCESS_ONCE(flag))
1524			process_message(ACCESS_ONCE(msg));
1525	}
1526
1527     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
1528     are needed if this interrupt handler can itself be interrupted
1529     by something that also accesses 'flag' and 'msg', for example,
1530     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
1531     needed in interrupt_handler() other than for documentation purposes.
     (Note also that nested interrupts do not typically occur in modern
     Linux kernels; in fact, if an interrupt handler returns with
     interrupts enabled, you will get a WARN_ONCE() splat.)
1535
1536     You should assume that the compiler can move ACCESS_ONCE() past
1537     code not containing ACCESS_ONCE(), barrier(), or similar primitives.
1538
1539     This effect could also be achieved using barrier(), but ACCESS_ONCE()
1540     is more selective:  With ACCESS_ONCE(), the compiler need only forget
1541     the contents of the indicated memory locations, while with barrier()
1542     the compiler must discard the value of all memory locations that
     it has currently cached in any machine registers.  Of course,
1544     the compiler must also respect the order in which the ACCESS_ONCE()s
1545     occur, though the CPU of course need not do so.
1546
1547 (*) The compiler is within its rights to invent stores to a variable,
1548     as in the following example:
1549
1550	if (a)
1551		b = a;
1552	else
1553		b = 42;
1554
1555     The compiler might save a branch by optimizing this as follows:
1556
1557	b = 42;
1558	if (a)
1559		b = a;
1560
1561     In single-threaded code, this is not only safe, but also saves
1562     a branch.  Unfortunately, in concurrent code, this optimization
1563     could cause some other CPU to see a spurious value of 42 -- even
1564     if variable 'a' was never zero -- when loading variable 'b'.
1565     Use ACCESS_ONCE() to prevent this as follows:
1566
1567	if (a)
1568		ACCESS_ONCE(b) = a;
1569	else
1570		ACCESS_ONCE(b) = 42;
1571
1572     The compiler can also invent loads.  These are usually less
1573     damaging, but they can result in cache-line bouncing and thus in
1574     poor performance and scalability.  Use ACCESS_ONCE() to prevent
1575     invented loads.
1576
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, ACCESS_ONCE() prevents
     "load tearing" and "store tearing," in which a single large access
     is replaced by multiple smaller accesses.  For example, given an
     architecture having
1581     16-bit store instructions with 7-bit immediate fields, the compiler
1582     might be tempted to use two 16-bit store-immediate instructions to
1583     implement the following 32-bit store:
1584
1585	p = 0x00010002;
1586
1587     Please note that GCC really does use this sort of optimization,
1588     which is not surprising given that it would likely take more
1589     than two instructions to build the constant and then store it.
1590     This optimization can therefore be a win in single-threaded code.
1591     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1592     this optimization in a volatile store.  In the absence of such bugs,
1593     use of ACCESS_ONCE() prevents store tearing in the following example:
1594
1595	ACCESS_ONCE(p) = 0x00010002;
1596
1597     Use of packed structures can also result in load and store tearing,
1598     as in this example:
1599
1600	struct __attribute__((__packed__)) foo {
1601		short a;
1602		int b;
1603		short c;
1604	};
1605	struct foo foo1, foo2;
1606	...
1607
1608	foo2.a = foo1.a;
1609	foo2.b = foo1.b;
1610	foo2.c = foo1.c;
1611
1612     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
1613     the compiler would be well within its rights to implement these three
1614     assignment statements as a pair of 32-bit loads followed by a pair
1615     of 32-bit stores.  This would result in load tearing on 'foo1.b'
1616     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
1617     in this example:
1618
1619	foo2.a = foo1.a;
1620	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
1621	foo2.c = foo1.c;
1622
1623All that aside, it is never necessary to use ACCESS_ONCE() on a variable
1624that has been marked volatile.  For example, because 'jiffies' is marked
1625volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
1626for this is that ACCESS_ONCE() is implemented as a volatile cast, which
1627has no effect when its argument is already marked volatile.
1628
1629Please note that these compiler barriers have no direct effect on the CPU,
1630which may then reorder things however it wishes.
1631
1632
1633CPU MEMORY BARRIERS
1634-------------------
1635
1636The Linux kernel has eight basic CPU memory barriers:
1637
1638	TYPE		MANDATORY		SMP CONDITIONAL
1639	===============	=======================	===========================
1640	GENERAL		mb()			smp_mb()
1641	WRITE		wmb()			smp_wmb()
1642	READ		rmb()			smp_rmb()
1643	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
1644
1645
1646All memory barriers except the data dependency barriers imply a compiler
1647barrier. Data dependencies do not impose any additional compiler ordering.
1648
1649Aside: In the case of data dependencies, the compiler would be expected to
1650issue the loads in the correct order (eg. `a[b]` would have to load the value
1651of b before loading a[b]), however there is no guarantee in the C specification
1652that the compiler may not speculate the value of b (eg. is equal to 1) and load
1653a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ). There is also the
1654problem of a compiler reloading b after having loaded a[b], thus having a newer
1655copy of b than a[b]. A consensus has not yet been reached about these problems,
1656however the ACCESS_ONCE macro is a good place to start looking.
1657
1658SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1659systems because it is assumed that a CPU will appear to be self-consistent,
1660and will order overlapping accesses correctly with respect to itself.
1661
1662[!] Note that SMP memory barriers _must_ be used to control the ordering of
1663references to shared memory on SMP systems, though the use of locking instead
1664is sufficient.
1665
1666Mandatory barriers should not be used to control SMP effects, since mandatory
1667barriers unnecessarily impose overhead on UP systems. They may, however, be
1668used to control MMIO effects on accesses through relaxed memory I/O windows.
1669These are required even on non-SMP systems as they affect the order in which
1670memory operations appear to a device by prohibiting both the compiler and the
1671CPU from reordering them.
1672
1673
1674There are some more advanced barrier functions:
1675
1676 (*) smp_store_mb(var, value)
1677
     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.
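
     For example (a sketch; 'flag' and 'other' are invented shared
     variables), the following:

	smp_store_mb(flag, 1);
	val = other;

     behaves like a plain store followed by smp_mb():

	flag = 1;
	smp_mb();	/* may degrade to barrier() on UP */
	val = other;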
1681
1682
1683 (*) smp_mb__before_atomic();
1684 (*) smp_mb__after_atomic();
1685
1686     These are for use with atomic (such as add, subtract, increment and
1687     decrement) functions that don't return a value, especially when used for
1688     reference counting.  These functions do not imply memory barriers.
1689
1690     These are also used for atomic bitop functions that do not return a
1691     value (such as set_bit and clear_bit).
1692
1693     As an example, consider a piece of code that marks an object as being dead
1694     and then decrements the object's reference count:
1695
1696	obj->dead = 1;
1697	smp_mb__before_atomic();
1698	atomic_dec(&obj->ref_count);
1699
1700     This makes sure that the death mark on the object is perceived to be set
1701     *before* the reference counter is decremented.
1702
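     The converse pattern also arises.  For example (a sketch; the bit
     number and the waiter on the other side are invented), making a
     bit-clear visible before waking anyone waiting on it:

	clear_bit(IN_USE_BIT, &obj->flags);
	smp_mb__after_atomic();
	wake_up_bit(&obj->flags, IN_USE_BIT);
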
1703     See Documentation/atomic_ops.txt for more information.  See the "Atomic
1704     operations" subsection for information on where to use these.
1705
1706
1707 (*) dma_wmb();
1708 (*) dma_rmb();
1709
1710     These are for use with consistent memory to guarantee the ordering
1711     of writes or reads of shared memory accessible to both the CPU and a
1712     DMA capable device.
1713
1714     For example, consider a device driver that shares memory with a device
1715     and uses a descriptor status value to indicate if the descriptor belongs
1716     to the device or the CPU, and a doorbell to notify it when new
1717     descriptors are available:
1718
1719	if (desc->status != DEVICE_OWN) {
1720		/* do not read data until we own descriptor */
1721		dma_rmb();
1722
1723		/* read/modify data */
1724		read_data = desc->data;
1725		desc->data = write_data;
1726
1727		/* flush modifications before status update */
1728		dma_wmb();
1729
1730		/* assign ownership */
1731		desc->status = DEVICE_OWN;
1732
1733		/* force memory to sync before notifying device via MMIO */
1734		wmb();
1735
1736		/* notify device of new descriptors */
1737		writel(DESC_NOTIFY, doorbell);
1738	}
1739
     The dma_rmb() allows us to guarantee the device has released ownership
1741     before we read the data from the descriptor, and the dma_wmb() allows
1742     us to guarantee the data is written to the descriptor before the device
1743     can see it now has ownership.  The wmb() is needed to guarantee that the
1744     cache coherent memory writes have completed before attempting a write to
1745     the cache incoherent MMIO region.
1746
1747     See Documentation/DMA-API.txt for more information on consistent memory.
1748
1749MMIO WRITE BARRIER
1750------------------
1751
1752The Linux kernel also has a special barrier for use with memory-mapped I/O
1753writes:
1754
1755	mmiowb();
1756
1757This is a variation on the mandatory write barrier that causes writes to weakly
1758ordered I/O regions to be partially ordered.  Its effects may go beyond the
1759CPU->Hardware interface and actually affect the hardware at some level.
1760
1761See the subsection "Locks vs I/O accesses" for more information.
1762
1763
1764===============================
1765IMPLICIT KERNEL MEMORY BARRIERS
1766===============================
1767
Some of the other functions in the Linux kernel imply memory barriers, amongst
1769which are locking and scheduling functions.
1770
1771This specification is a _minimum_ guarantee; any particular architecture may
1772provide more substantial guarantees, but these may not be relied upon outside
1773of arch specific code.
1774
1775
1776ACQUIRING FUNCTIONS
1777-------------------
1778
1779The Linux kernel has a number of locking constructs:
1780
1781 (*) spin locks
1782 (*) R/W spin locks
1783 (*) mutexes
1784 (*) semaphores
1785 (*) R/W semaphores
1786 (*) RCU
1787
1788In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1789for each construct.  These operations all imply certain barriers:
1790
1791 (1) ACQUIRE operation implication:
1792
1793     Memory operations issued after the ACQUIRE will be completed after the
1794     ACQUIRE operation has completed.
1795
1796     Memory operations issued before the ACQUIRE may be completed after
1797     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
1798     combined with a following ACQUIRE, orders prior stores against
1799     subsequent loads and stores. Note that this is weaker than smp_mb()!
1800     The smp_mb__before_spinlock() primitive is free on many architectures.
1801
1802 (2) RELEASE operation implication:
1803
1804     Memory operations issued before the RELEASE will be completed before the
1805     RELEASE operation has completed.
1806
1807     Memory operations issued after the RELEASE may be completed before the
1808     RELEASE operation has completed.
1809
1810 (3) ACQUIRE vs ACQUIRE implication:
1811
1812     All ACQUIRE operations issued before another ACQUIRE operation will be
1813     completed before that ACQUIRE operation.
1814
1815 (4) ACQUIRE vs RELEASE implication:
1816
1817     All ACQUIRE operations issued before a RELEASE operation will be
1818     completed before the RELEASE operation.
1819
1820 (5) Failed conditional ACQUIRE implication:
1821
1822     Certain locking variants of the ACQUIRE operation may fail, either due to
1823     being unable to get the lock immediately, or due to receiving an unblocked
1824     signal whilst asleep waiting for the lock to become available.  Failed
1825     locks do not imply any sort of barrier.
1826
1827[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
1828one-way barriers is that the effects of instructions outside of a critical
1829section may seep into the inside of the critical section.
1830
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory barrier
1832because it is possible for an access preceding the ACQUIRE to happen after the
1833ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
1834the two accesses can themselves then cross:
1835
1836	*A = a;
1837	ACQUIRE M
1838	RELEASE M
1839	*B = b;
1840
1841may occur as:
1842
1843	ACQUIRE M, STORE *B, STORE *A, RELEASE M
1844
1845When the ACQUIRE and RELEASE are a lock acquisition and release,
1846respectively, this same reordering can occur if the lock's ACQUIRE and
1847RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
1850
1851Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
1852imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
1853pair to produce a full barrier, the ACQUIRE can be followed by an
1854smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
1855if either (a) the RELEASE and the ACQUIRE are executed by the same
1856CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable.
1857The smp_mb__after_unlock_lock() primitive is free on many architectures.
1858Without smp_mb__after_unlock_lock(), the CPU's execution of the critical
1859sections corresponding to the RELEASE and the ACQUIRE can cross, so that:
1860
1861	*A = a;
1862	RELEASE M
1863	ACQUIRE N
1864	*B = b;
1865
1866could occur as:
1867
1868	ACQUIRE N, STORE *B, STORE *A, RELEASE M
1869
1870It might appear that this reordering could introduce a deadlock.
1871However, this cannot happen because if such a deadlock threatened,
1872the RELEASE would simply complete, thereby avoiding the deadlock.
1873
1874	Why does this work?
1875
1876	One key point is that we are only talking about the CPU doing
1877	the reordering, not the compiler.  If the compiler (or, for
1878	that matter, the developer) switched the operations, deadlock
1879	-could- occur.
1880
1881	But suppose the CPU reordered the operations.  In this case,
1882	the unlock precedes the lock in the assembly code.  The CPU
1883	simply elected to try executing the later lock operation first.
1884	If there is a deadlock, this lock operation will simply spin (or
	try to sleep, but more on that later).  The CPU will eventually
1886	execute the unlock operation (which preceded the lock operation
1887	in the assembly code), which will unravel the potential deadlock,
1888	allowing the lock operation to succeed.
1889
1890	But what if the lock is a sleeplock?  In that case, the code will
1891	try to enter the scheduler, where it will eventually encounter
1892	a memory barrier, which will force the earlier unlock operation
1893	to complete, again unraveling the deadlock.  There might be
1894	a sleep-unlock race, but the locking primitive needs to resolve
1895	such races properly in any case.
1896
1897With smp_mb__after_unlock_lock(), the two critical sections cannot overlap.
1898For example, with the following code, the store to *A will always be
1899seen by other CPUs before the store to *B:
1900
1901	*A = a;
1902	RELEASE M
1903	ACQUIRE N
1904	smp_mb__after_unlock_lock();
1905	*B = b;
1906
1907The operations will always occur in one of the following orders:
1908
1909	STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B
1910	STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1911	ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1912
1913If the RELEASE and ACQUIRE were instead both operating on the same lock
1914variable, only the first of these alternatives can occur.  In addition,
1915the more strongly ordered systems may rule out some of the above orders.
1916But in any case, as noted earlier, the smp_mb__after_unlock_lock()
1917ensures that the store to *A will always be seen as happening before
1918the store to *B.
1919
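In terms of real lock operations, the guarantee looks like this (a
sketch; the locks and variables are invented for illustration):

	owner = me;
	spin_unlock(&this_lock);	/* RELEASE M */
	spin_lock(&that_lock);		/* ACQUIRE N */
	smp_mb__after_unlock_lock();
	ready = 1;	/* all CPUs see the store to 'owner' first */
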
1920Locks and semaphores may not provide any guarantee of ordering on UP compiled
1921systems, and so cannot be counted on in such a situation to actually achieve
1922anything at all - especially with respect to I/O accesses - unless combined
1923with interrupt disabling operations.
1924
1925See also the section on "Inter-CPU locking barrier effects".
1926
1927
1928As an example, consider the following:
1929
1930	*A = a;
1931	*B = b;
1932	ACQUIRE
1933	*C = c;
1934	*D = d;
1935	RELEASE
1936	*E = e;
1937	*F = f;
1938
1939The following sequence of events is acceptable:
1940
1941	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
1942
1943	[+] Note that {*F,*A} indicates a combined access.
1944
1945But none of the following are:
1946
1947	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
1948	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
1949	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
1950	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
1951
1952
1953
1954INTERRUPT DISABLING FUNCTIONS
1955-----------------------------
1956
1957Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
1958(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.
1961
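For example (a sketch with invented shared variables), ordering against
other CPUs must still be enforced explicitly inside an interrupt-disabled
section:

	local_irq_save(flags);
	shared_data = val;
	smp_wmb();	/* required: local_irq_save()/restore() imply
			 * only compiler barriers */
	shared_flag = 1;
	local_irq_restore(flags);
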
1962
1963SLEEP AND WAKE-UP FUNCTIONS
1964---------------------------
1965
1966Sleeping and waking on an event flagged in global data can be viewed as an
1967interaction between two pieces of data: the task state of the task waiting for
1968the event and the global data used to indicate the event.  To make sure that
1969these appear to happen in the right order, the primitives to begin the process
1970of going to sleep, and the primitives to initiate a wake up imply certain
1971barriers.
1972
1973Firstly, the sleeper normally follows something like this sequence of events:
1974
1975	for (;;) {
1976		set_current_state(TASK_UNINTERRUPTIBLE);
1977		if (event_indicated)
1978			break;
1979		schedule();
1980	}
1981
1982A general memory barrier is interpolated automatically by set_current_state()
1983after it has altered the task state:
1984
1985	CPU 1
1986	===============================
1987	set_current_state();
1988	  smp_store_mb();
1989	    STORE current->state
1990	    <general barrier>
1991	LOAD event_indicated
1992
1993set_current_state() may be wrapped by:
1994
1995	prepare_to_wait();
1996	prepare_to_wait_exclusive();
1997
1998which therefore also imply a general memory barrier after setting the state.
1999The whole sequence above is available in various canned forms, all of which
2000interpolate the memory barrier in the right place:
2001
2002	wait_event();
2003	wait_event_interruptible();
2004	wait_event_interruptible_exclusive();
2005	wait_event_interruptible_timeout();
2006	wait_event_killable();
2007	wait_event_timeout();
2008	wait_on_bit();
2009	wait_on_bit_lock();
2010
2011
2012Secondly, code that performs a wake up normally follows something like this:
2013
2014	event_indicated = 1;
2015	wake_up(&event_wait_queue);
2016
2017or:
2018
2019	event_indicated = 1;
2020	wake_up_process(event_daemon);
2021
2022A write memory barrier is implied by wake_up() and co. if and only if they wake
2023something up.  The barrier occurs before the task state is cleared, and so sits
2024between the STORE to indicate the event and the STORE to set TASK_RUNNING:
2025
2026	CPU 1				CPU 2
2027	===============================	===============================
2028	set_current_state();		STORE event_indicated
2029	  smp_store_mb();		wake_up();
2030	    STORE current->state	  <write barrier>
2031	    <general barrier>		  STORE current->state
2032	LOAD event_indicated
2033
2034To repeat, this write memory barrier is present if and only if something
2035is actually awakened.  To see this, consider the following sequence of
2036events, where X and Y are both initially zero:
2037
2038	CPU 1				CPU 2
2039	===============================	===============================
2040	X = 1;				STORE event_indicated
2041	smp_mb();			wake_up();
2042	Y = 1;				wait_event(wq, Y == 1);
2043	wake_up();			  load from Y sees 1, no memory barrier
2044					load from X might see 0
2045
2046In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
2047to see 1.
2048
2049The available waker functions include:
2050
2051	complete();
2052	wake_up();
2053	wake_up_all();
2054	wake_up_bit();
2055	wake_up_interruptible();
2056	wake_up_interruptible_all();
2057	wake_up_interruptible_nr();
2058	wake_up_interruptible_poll();
2059	wake_up_interruptible_sync();
2060	wake_up_interruptible_sync_poll();
2061	wake_up_locked();
2062	wake_up_locked_poll();
2063	wake_up_nr();
2064	wake_up_poll();
2065	wake_up_process();
2066
2067
2068[!] Note that the memory barriers implied by the sleeper and the waker do _not_
2069order multiple stores before the wake-up with respect to loads of those stored
2070values after the sleeper has called set_current_state().  For instance, if the
2071sleeper does:
2072
2073	set_current_state(TASK_INTERRUPTIBLE);
2074	if (event_indicated)
2075		break;
2076	__set_current_state(TASK_RUNNING);
2077	do_something(my_data);
2078
2079and the waker does:
2080
2081	my_data = value;
2082	event_indicated = 1;
2083	wake_up(&event_wait_queue);
2084
2085there's no guarantee that the change to event_indicated will be perceived by
2086the sleeper as coming after the change to my_data.  In such a circumstance, the
2087code on both sides must interpolate its own memory barriers between the
2088separate data accesses.  Thus the above sleeper ought to do:
2089
2090	set_current_state(TASK_INTERRUPTIBLE);
2091	if (event_indicated) {
2092		smp_rmb();
2093		do_something(my_data);
2094	}
2095
2096and the waker should do:
2097
2098	my_data = value;
2099	smp_wmb();
2100	event_indicated = 1;
2101	wake_up(&event_wait_queue);
2102
2103
2104MISCELLANEOUS FUNCTIONS
2105-----------------------
2106
2107Other functions that imply barriers:
2108
2109 (*) schedule() and similar imply full memory barriers.
2110
2111
2112===================================
2113INTER-CPU ACQUIRING BARRIER EFFECTS
2114===================================
2115
2116On SMP systems locking primitives give a more substantial form of barrier: one
2117that does affect memory access ordering on other CPUs, within the context of
2118conflict on any particular lock.
2119
2120
2121ACQUIRES VS MEMORY ACCESSES
2122---------------------------
2123
2124Consider the following: the system has a pair of spinlocks (M) and (Q), and
2125three CPUs; then should the following sequence of events occur:
2126
2127	CPU 1				CPU 2
2128	===============================	===============================
2129	ACCESS_ONCE(*A) = a;		ACCESS_ONCE(*E) = e;
2130	ACQUIRE M			ACQUIRE Q
2131	ACCESS_ONCE(*B) = b;		ACCESS_ONCE(*F) = f;
2132	ACCESS_ONCE(*C) = c;		ACCESS_ONCE(*G) = g;
2133	RELEASE M			RELEASE Q
2134	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*H) = h;
2135
2136Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2137through *H occur in, other than the constraints imposed by the separate locks
2138on the separate CPUs. It might, for example, see:
2139
2140	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2141
2142But it won't see any of:
2143
2144	*B, *C or *D preceding ACQUIRE M
2145	*A, *B or *C following RELEASE M
2146	*F, *G or *H preceding ACQUIRE Q
2147	*E, *F or *G following RELEASE Q
2148
2149
2150However, if the following occurs:
2151
2152	CPU 1				CPU 2
2153	===============================	===============================
2154	ACCESS_ONCE(*A) = a;
2155	ACQUIRE M		     [1]
2156	ACCESS_ONCE(*B) = b;
2157	ACCESS_ONCE(*C) = c;
2158	RELEASE M	     [1]
2159	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
2160					ACQUIRE M		     [2]
2161					smp_mb__after_unlock_lock();
2162					ACCESS_ONCE(*F) = f;
2163					ACCESS_ONCE(*G) = g;
2164					RELEASE M	     [2]
2165					ACCESS_ONCE(*H) = h;
2166
2167CPU 3 might see:
2168
2169	*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
2170		ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D
2171
2172But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
2173
2174	*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
2175	*A, *B or *C following RELEASE M [1]
2176	*F, *G or *H preceding ACQUIRE M [2]
2177	*A, *B, *C, *E, *F or *G following RELEASE M [2]
2178
2179Note that the smp_mb__after_unlock_lock() is critically important
2180here: Without it CPU 3 might see some of the above orderings.
2181Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
2182to be seen in order unless CPU 3 holds lock M.
2183
2184
2185ACQUIRES VS I/O ACCESSES
2186------------------------
2187
2188Under certain circumstances (especially involving NUMA), I/O accesses within
2189two spinlocked sections on two different CPUs may be seen as interleaved by the
2190PCI bridge, because the PCI bridge does not necessarily participate in the
2191cache-coherence protocol, and is therefore incapable of issuing the required
2192read memory barriers.
2193
2194For example:
2195
2196	CPU 1				CPU 2
2197	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2200	writel(1, DATA);
2201	spin_unlock(Q);
2202					spin_lock(Q);
2203					writel(4, ADDR);
2204					writel(5, DATA);
2205					spin_unlock(Q);
2206
2207may be seen by the PCI bridge as follows:
2208
2209	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2210
2211which would probably cause the hardware to malfunction.
2212
2213
2214What is necessary here is to intervene with an mmiowb() before dropping the
2215spinlock, for example:
2216
2217	CPU 1				CPU 2
2218	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2221	writel(1, DATA);
2222	mmiowb();
2223	spin_unlock(Q);
2224					spin_lock(Q);
2225					writel(4, ADDR);
2226					writel(5, DATA);
2227					mmiowb();
2228					spin_unlock(Q);
2229
2230this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2231before either of the stores issued on CPU 2.
2232
2233
2234Furthermore, following a store by a load from the same device obviates the need
2235for the mmiowb(), because the load forces the store to complete before the load
2236is performed:
2237
2238	CPU 1				CPU 2
2239	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2242	a = readl(DATA);
2243	spin_unlock(Q);
2244					spin_lock(Q);
2245					writel(4, ADDR);
2246					b = readl(DATA);
2247					spin_unlock(Q);
2248
2249
2250See Documentation/DocBook/deviceiobook.tmpl for more information.
2251
2252
2253=================================
2254WHERE ARE MEMORY BARRIERS NEEDED?
2255=================================
2256
2257Under normal operation, memory operation reordering is generally not going to
2258be a problem as a single-threaded linear piece of code will still appear to
2259work correctly, even if it's in an SMP kernel.  There are, however, four
2260circumstances in which reordering definitely _could_ be a problem:
2261
2262 (*) Interprocessor interaction.
2263
2264 (*) Atomic operations.
2265
2266 (*) Accessing devices.
2267
2268 (*) Interrupts.
2269
2270
2271INTERPROCESSOR INTERACTION
2272--------------------------
2273
2274When there's a system with more than one processor, more than one CPU in the
2275system may be working on the same data set at the same time.  This can cause
2276synchronisation problems, and the usual way of dealing with them is to use
2277locks.  Locks, however, are quite expensive, and so it may be preferable to
2278operate without the use of a lock if at all possible.  In such a case
2279operations that affect both CPUs may have to be carefully ordered to prevent
2280a malfunction.
2281
2282Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2283queued on the semaphore, by virtue of it having a piece of its stack linked to
2284the semaphore's list of waiting processes:
2285
2286	struct rw_semaphore {
2287		...
2288		spinlock_t lock;
2289		struct list_head waiters;
2290	};
2291
2292	struct rwsem_waiter {
2293		struct list_head list;
2294		struct task_struct *task;
2295	};
2296
2297To wake up a particular waiter, the up_read() or up_write() functions have to:
2298
 (1) read the next pointer from this waiter's record to know where the
     next waiter record is;
2301
2302 (2) read the pointer to the waiter's task structure;
2303
2304 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2305
2306 (4) call wake_up_process() on the task; and
2307
2308 (5) release the reference held on the waiter's task struct.
2309
2310In other words, it has to perform this sequence of events:
2311
2312	LOAD waiter->list.next;
2313	LOAD waiter->task;
2314	STORE waiter->task;
2315	CALL wakeup
2316	RELEASE task
2317
2318and if any of these steps occur out of order, then the whole thing may
2319malfunction.
2320
2321Once it has queued itself and dropped the semaphore lock, the waiter does not
2322get the lock again; it instead just waits for its task pointer to be cleared
2323before proceeding.  Since the record is on the waiter's stack, this means that
2324if the task pointer is cleared _before_ the next pointer in the list is read,
2325another CPU might start processing the waiter and might clobber the waiter's
2326stack before the up*() function has a chance to read the next pointer.
2327
2328Consider then what might happen to the above sequence of events:
2329
2330	CPU 1				CPU 2
2331	===============================	===============================
2332					down_xxx()
2333					Queue waiter
2334					Sleep
2335	up_yyy()
2336	LOAD waiter->task;
2337	STORE waiter->task;
2338					Woken up by other event
2339	<preempt>
2340					Resume processing
2341					down_xxx() returns
2342					call foo()
2343					foo() clobbers *waiter
2344	</preempt>
2345	LOAD waiter->list.next;
2346	--- OOPS ---
2347
2348This could be dealt with using the semaphore lock, but then the down_xxx()
2349function has to needlessly get the spinlock again after being woken up.
2350
2351The way to deal with this is to insert a general SMP memory barrier:
2352
2353	LOAD waiter->list.next;
2354	LOAD waiter->task;
2355	smp_mb();
2356	STORE waiter->task;
2357	CALL wakeup
2358	RELEASE task
2359
2360In this case, the barrier makes a guarantee that all memory accesses before the
2361barrier will appear to happen before all the memory accesses after the barrier
2362with respect to the other CPUs on the system.  It does _not_ guarantee that all
2363the memory accesses before the barrier will be complete by the time the barrier
2364instruction itself is complete.
2365
2366On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2367compiler barrier, thus making sure the compiler emits the instructions in the
2368right order without actually intervening in the CPU.  Since there's only one
2369CPU, that CPU's dependency ordering logic will take care of everything else.
2370
2371
2372ATOMIC OPERATIONS
2373-----------------
2374
2375Whilst they are technically interprocessor interaction considerations, atomic
2376operations are noted specially as some of them imply full memory barriers and
2377some don't, but they're very heavily relied on as a group throughout the
2378kernel.
2379
2380Any atomic operation that modifies some state in memory and returns information
2381about the state (old or new) implies an SMP-conditional general memory barrier
2382(smp_mb()) on each side of the actual operation (with the exception of
2383explicit lock operations, described later).  These include:
2384
2385	xchg();
2386	cmpxchg();
2387	atomic_xchg();			atomic_long_xchg();
2388	atomic_cmpxchg();		atomic_long_cmpxchg();
2389	atomic_inc_return();		atomic_long_inc_return();
2390	atomic_dec_return();		atomic_long_dec_return();
2391	atomic_add_return();		atomic_long_add_return();
2392	atomic_sub_return();		atomic_long_sub_return();
2393	atomic_inc_and_test();		atomic_long_inc_and_test();
2394	atomic_dec_and_test();		atomic_long_dec_and_test();
2395	atomic_sub_and_test();		atomic_long_sub_and_test();
2396	atomic_add_negative();		atomic_long_add_negative();
2397	test_and_set_bit();
2398	test_and_clear_bit();
2399	test_and_change_bit();
2400
2401	/* when succeeds (returns 1) */
2402	atomic_add_unless();		atomic_long_add_unless();
2403
2404These are used for such things as implementing ACQUIRE-class and RELEASE-class
2405operations and adjusting reference counters towards object destruction, and as
2406such the implicit memory barrier effects are necessary.
2407
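For example, the implied barriers are what make lock-free reference
release safe (a sketch; release_object() is an invented teardown
helper):

	if (atomic_dec_and_test(&obj->ref_count))
		release_object(obj);	/* all of this CPU's prior accesses
					 * to the object are ordered before
					 * the final decrement */
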
2408
2409The following operations are potential problems as they do _not_ imply memory
2410barriers, but might be used for implementing such things as RELEASE-class
2411operations:
2412
2413	atomic_set();
2414	set_bit();
2415	clear_bit();
2416	change_bit();
2417
2418With these the appropriate explicit memory barrier should be used if necessary
2419(smp_mb__before_atomic() for instance).
2420
2421
2422The following also do _not_ imply memory barriers, and so may require explicit
2423memory barriers under some circumstances (smp_mb__before_atomic() for
2424instance):
2425
2426	atomic_add();
2427	atomic_sub();
2428	atomic_inc();
2429	atomic_dec();
2430
2431If they're used for statistics generation, then they probably don't need memory
2432barriers, unless there's a coupling between statistical data.
2433
2434If they're used for reference counting on an object to control its lifetime,
2435they probably don't need memory barriers because either the reference count
will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.
2438
2439If they're used for constructing a lock of some description, then they probably
2440do need memory barriers as a lock primitive generally has to do things in a
2441specific order.
2442
2443Basically, each usage case has to be carefully considered as to whether memory
2444barriers are needed or not.
2445
2446The following operations are special locking primitives:
2447
2448	test_and_set_bit_lock();
2449	clear_bit_unlock();
2450	__clear_bit_unlock();
2451
These implement ACQUIRE-class and RELEASE-class operations.  These should be
used in preference to other operations when implementing locking primitives,
because
2454their implementations can be optimised on many architectures.
2455
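For example, a minimal bit-based lock might look like this (a sketch;
'word' is an invented flags word):

	while (test_and_set_bit_lock(0, &word))
		cpu_relax();		/* ACQUIRE semantics on success */
	/* ... critical section ... */
	clear_bit_unlock(0, &word);	/* RELEASE semantics */
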
2456[!] Note that special memory barrier primitives are available for these
2457situations because on some CPUs the atomic instructions used imply full memory
2458barriers, and so barrier instructions are superfluous in conjunction with them,
2459and in such cases the special barrier primitives will be no-ops.
2460
2461See Documentation/atomic_ops.txt for more information.
2462
2463
2464ACCESSING DEVICES
2465-----------------
2466
2467Many devices can be memory mapped, and so appear to the CPU as if they're just
2468a set of memory locations.  To control such a device, the driver usually has to
2469make the right memory accesses in exactly the right order.
2470
2471However, having a clever CPU or a clever compiler creates a potential problem
2472in that the carefully sequenced accesses in the driver code won't reach the
2473device in the requisite order if the CPU or the compiler thinks it is more
2474efficient to reorder, combine or merge accesses - something that would cause
2475the device to malfunction.
2476
2477Inside of the Linux kernel, I/O should be done through the appropriate accessor
2478routines - such as inb() or writel() - which know how to make such accesses
2479appropriately sequential.  Whilst this, for the most part, renders the explicit
2480use of memory barriers unnecessary, there are a couple of situations where they
2481might be needed:
2482
2483 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2484     so for _all_ general drivers locks should be used and mmiowb() must be
2485     issued prior to unlocking the critical section.
2486
2487 (2) If the accessor functions are used to refer to an I/O memory window with
2488     relaxed memory access properties, then _mandatory_ memory barriers are
2489     required to enforce ordering.
2490
2491See Documentation/DocBook/deviceiobook.tmpl for more information.
2492
2493
2494INTERRUPTS
2495----------
2496
2497A driver may be interrupted by its own interrupt service routine, and thus the
2498two parts of the driver may interfere with each other's attempts to control or
2499access the device.
2500
2501This may be alleviated - at least in part - by disabling local interrupts (a
2502form of locking), such that the critical operations are all contained within
2503the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2504routine is executing, the driver's core may not run on the same CPU, and its
2505interrupt is not permitted to happen again until the current interrupt has been
handled; thus the interrupt handler does not need to lock against that.
2507
2508However, consider a driver that was talking to an ethernet card that sports an
2509address register and a data register.  If that driver's core talks to the card
2510under interrupt-disablement and then the driver's interrupt handler is invoked:
2511
2512	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
2515	LOCAL IRQ ENABLE
2516	<interrupt>
	writew(4, ADDR);
2518	q = readw(DATA);
2519	</interrupt>
2520
2521The store to the data register might happen after the second store to the
2522address register if ordering rules are sufficiently relaxed:
2523
2524	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2525
2526
2527If ordering rules are relaxed, it must be assumed that accesses done inside an
2528interrupt disabled section may leak outside of it and may interleave with
2529accesses performed in an interrupt - and vice versa - unless implicit or
2530explicit barriers are used.
2531
2532Normally this won't be a problem because the I/O accesses done inside such
2533sections will include synchronous load operations on strictly ordered I/O
2534registers that form implicit I/O barriers. If this isn't sufficient then an
2535mmiowb() may need to be used explicitly.
2536
2537
2538A similar situation may occur between an interrupt routine and two routines
2539running on separate CPUs that communicate with each other. If such a case is
2540likely, then interrupt-disabling locks should be used to guarantee ordering.
2541
2542
2543==========================
2544KERNEL I/O BARRIER EFFECTS
2545==========================
2546
2547When accessing I/O memory, drivers should use the appropriate accessor
2548functions:
2549
2550 (*) inX(), outX():
2551
2552     These are intended to talk to I/O space rather than memory space, but
2553     that's primarily a CPU-specific concept. The i386 and x86_64 processors do
2554     indeed have special I/O space access cycles and instructions, but many
2555     CPUs don't have such a concept.
2556
2557     The PCI bus, amongst others, defines an I/O space concept which - on such
2558     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2559     space.  However, it may also be mapped as a virtual I/O space in the CPU's
2560     memory map, particularly on those CPUs that don't support alternate I/O
2561     spaces.
2562
2563     Accesses to this space may be fully synchronous (as on i386), but
2564     intermediary bridges (such as the PCI host bridge) may not fully honour
2565     that.
2566
2567     They are guaranteed to be fully ordered with respect to each other.
2568
2569     They are not guaranteed to be fully ordered with respect to other types of
2570     memory and I/O operation.
2571
2572 (*) readX(), writeX():
2573
2574     Whether these are guaranteed to be fully ordered and uncombined with
2575     respect to each other on the issuing CPU depends on the characteristics
2576     defined for the memory window through which they're accessing. On later
2577     i386 architecture machines, for example, this is controlled by way of the
2578     MTRR registers.
2579
2580     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2581     provided they're not accessing a prefetchable device.
2582
2583     However, intermediary hardware (such as a PCI bridge) may indulge in
2584     deferral if it so wishes; to flush a store, a load from the same location
2585     is preferred[*], but a load from the same device or from configuration
2586     space should suffice for PCI.
2587
2588     [*] NOTE! attempting to load from the same location as was written to may
2589	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
2590	 example.
2591
2592     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2593     force stores to be ordered.
2594
2595     Please refer to the PCI specification for more information on interactions
2596     between PCI transactions.
2597
2598 (*) readX_relaxed(), writeX_relaxed()
2599
2600     These are similar to readX() and writeX(), but provide weaker memory
2601     ordering guarantees. Specifically, they do not guarantee ordering with
2602     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2603     ordering with respect to LOCK or UNLOCK operations. If the latter is
2604     required, an mmiowb() barrier can be used. Note that relaxed accesses to
2605     the same peripheral are guaranteed to be ordered with respect to each
2606     other.
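
     For example (a sketch; the base address and register offsets are
     invented), several relaxed writes to one device can be followed by
     a single fully ordered doorbell write:

	writel_relaxed(lo, base + DESC_LO);
	writel_relaxed(hi, base + DESC_HI);
	writel(1, base + DOORBELL);	/* ordered doorbell kick */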
2607
2608 (*) ioreadX(), iowriteX()
2609
2610     These will perform appropriately for the type of access they're actually
2611     doing, be it inX()/outX() or readX()/writeX().
2612
2613
2614========================================
2615ASSUMED MINIMUM EXECUTION ORDERING MODEL
2616========================================
2617
2618It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2619maintain the appearance of program causality with respect to itself.  Some CPUs
2620(such as i386 or x86_64) are more constrained than others (such as powerpc or
2621frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2622of arch-specific code.
2623
2624This means that it must be considered that the CPU will execute its instruction
2625stream in any order it feels like - or even in parallel - provided that if an
2626instruction in the stream depends on an earlier instruction, then that
2627earlier instruction must be sufficiently complete[*] before the later
2628instruction may proceed; in other words: provided that the appearance of
2629causality is maintained.
2630
2631 [*] Some instructions have more than one effect - such as changing the
2632     condition codes, changing registers or changing memory - and different
2633     instructions may depend on different effects.
2634
2635A CPU may also discard any instruction sequence that winds up having no
2636ultimate effect.  For example, if two adjacent instructions both load an
2637immediate value into the same register, the first may be discarded.
2638
2639
Similarly, it has to be assumed that the compiler might reorder the instruction
2641stream in any way it sees fit, again provided the appearance of causality is
2642maintained.
2643
2644
2645============================
2646THE EFFECTS OF THE CPU CACHE
2647============================
2648
2649The way cached memory operations are perceived across the system is affected to
2650a certain extent by the caches that lie between CPUs and memory, and by the
2651memory coherence system that maintains the consistency of state in the system.
2652
2653As far as the way a CPU interacts with another part of the system through the
2654caches goes, the memory system has to include the CPU's caches, and memory
2655barriers for the most part act at the interface between the CPU and its cache
2656(memory barriers logically act on the dotted line in the following diagram):
2657
2658	    <--- CPU --->         :       <----------- Memory ----------->
2659	                          :
2660	+--------+    +--------+  :   +--------+    +-----------+
2661	|        |    |        |  :   |        |    |           |    +--------+
2662	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
2663	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2664	|        |    | Queue  |  :   |        |    |           |--->| Memory |
2665	|        |    |        |  :   |        |    |           |    |        |
2666	+--------+    +--------+  :   +--------+    |           |    |        |
2667	                          :                 | Cache     |    +--------+
2668	                          :                 | Coherency |
2669	                          :                 | Mechanism |    +--------+
	+--------+    +--------+  :   +--------+    |           |    |        |
2671	|        |    |        |  :   |        |    |           |    |        |
2672	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
2673	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2674	|        |    | Queue  |  :   |        |    |           |    |        |
2675	|        |    |        |  :   |        |    |           |    +--------+
2676	+--------+    +--------+  :   +--------+    +-----------+
2677	                          :
2678	                          :
2679
2680Although any particular load or store may not actually appear outside of the
2681CPU that issued it since it may have been satisfied within the CPU's own cache,
2682it will still appear as if the full memory access had taken place as far as the
2683other CPUs are concerned since the cache coherency mechanisms will migrate the
2684cacheline over to the accessing CPU and propagate the effects upon conflict.
2685
2686The CPU core may execute instructions in any order it deems fit, provided the
2687expected program causality appears to be maintained.  Some of the instructions
2688generate load and store operations which then go into the queue of memory
2689accesses to be performed.  The core may place these in the queue in any order
2690it wishes, and continue execution until it is forced to wait for an instruction
2691to complete.
2692
2693What memory barriers are concerned with is controlling the order in which
2694accesses cross from the CPU side of things to the memory side of things, and
2695the order in which the effects are perceived to happen by the other observers
2696in the system.
2697
2698[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2699their own loads and stores as if they had happened in program order.
2700
2701[!] MMIO or other device accesses may bypass the cache system.  This depends on
2702the properties of the memory window through which devices are accessed and/or
2703the use of any special device communication instructions the CPU may have.
2704
2705
2706CACHE COHERENCY
2707---------------
2708
2709Life isn't quite as simple as it may appear above, however: for while the
2710caches are expected to be coherent, there's no guarantee that that coherency
2711will be ordered.  This means that whilst changes made on one CPU will
2712eventually become visible on all CPUs, there's no guarantee that they will
2713become apparent in the same order on those other CPUs.
2714
2715
2716Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2717has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2718
2719	            :
2720	            :                          +--------+
2721	            :      +---------+         |        |
2722	+--------+  : +--->| Cache A |<------->|        |
2723	|        |  : |    +---------+         |        |
2724	|  CPU 1 |<---+                        |        |
2725	|        |  : |    +---------+         |        |
2726	+--------+  : +--->| Cache B |<------->|        |
2727	            :      +---------+         |        |
2728	            :                          | Memory |
2729	            :      +---------+         | System |
2730	+--------+  : +--->| Cache C |<------->|        |
2731	|        |  : |    +---------+         |        |
2732	|  CPU 2 |<---+                        |        |
2733	|        |  : |    +---------+         |        |
2734	+--------+  : +--->| Cache D |<------->|        |
2735	            :      +---------+         |        |
2736	            :                          +--------+
2737	            :
2738
2739Imagine the system has the following properties:
2740
2741 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2742     resident in memory;
2743
2744 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2745     resident in memory;
2746
2747 (*) whilst the CPU core is interrogating one cache, the other cache may be
2748     making use of the bus to access the rest of the system - perhaps to
2749     displace a dirty cacheline or to do a speculative load;
2750
2751 (*) each cache has a queue of operations that need to be applied to that cache
2752     to maintain coherency with the rest of the system;
2753
2754 (*) the coherency queue is not flushed by normal loads to lines already
2755     present in the cache, even though the contents of the queue may
2756     potentially affect those loads.
2757
2758Imagine, then, that two writes are made on the first CPU, with a write barrier
2759between them to guarantee that they will appear to reach that CPU's caches in
2760the requisite order:
2761
2762	CPU 1		CPU 2		COMMENT
2763	===============	===============	=======================================
2764					u == 0, v == 1 and p == &u, q == &u
2765	v = 2;
2766	smp_wmb();			Make sure change to v is visible before
2767					 change to p
2768	<A:modify v=2>			v is now in cache A exclusively
2769	p = &v;
2770	<B:modify p=&v>			p is now in cache B exclusively
2771
2772The write memory barrier forces the other CPUs in the system to perceive that
2773the local CPU's caches have apparently been updated in the correct order.  But
2774now imagine that the second CPU wants to read those values:
2775
	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
	...
			q = p;
			x = *q;

The above pair of reads may then fail to happen in the expected order, as the
cacheline holding p may get updated in one of the second CPU's caches whilst
the update to the cacheline holding v is delayed in the other of the second
CPU's caches by some other cache event:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			x = *q;
			<C:read *q>	Reads from v before v updated in cache
			<C:unbusy>
			<C:commit v=2>

Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
no guarantee that, without intervention, the order of update will be the same
as that committed on CPU 1.


To intervene, we need to interpolate a data dependency barrier or a read
barrier between the loads.  This will force the cache to commit its coherency
queue before processing any further requests:

	CPU 1		CPU 2		COMMENT
	===============	===============	=======================================
					u == 0, v == 1 and p == &u, q == &u
	v = 2;
	smp_wmb();
	<A:modify v=2>	<C:busy>
			<C:queue v=2>
	p = &v;		q = p;
			<D:request p>
	<B:modify p=&v>	<D:commit p=&v>
			<D:read p>
			smp_read_barrier_depends()
			<C:unbusy>
			<C:commit v=2>
			x = *q;
			<C:read *q>	Reads from v after v updated in cache


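Expressed as C code, the sequences above are the usual pointer-publication
idiom.  The following minimal sketch uses invented structure and variable
names; in real kernel code the reader side would normally be written with
rcu_dereference(), which contains this barrier:

	struct foo { int a; };

	struct foo  my_foo;
	struct foo *global_p;

	void writer(void)		/* CPU 1 */
	{
		my_foo.a = 2;		/* "v = 2" */
		smp_wmb();		/* order the data before the pointer */
		global_p = &my_foo;	/* "p = &v" */
	}

	void reader(void)		/* CPU 2 */
	{
		struct foo *q;
		int x;

		q = global_p;			/* "q = p" */
		smp_read_barrier_depends();	/* commit the coherency queue */
		if (q)
			x = q->a;		/* "x = *q": sees a == 2 */
	}
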
This sort of problem can be encountered on DEC Alpha processors as they have a
split cache that improves performance by making better use of the data bus.
Whilst most CPUs do imply a data dependency barrier on the read when a memory
access depends on a read, not all do, so it may not be relied on.

Other CPUs may also have split caches, but must coordinate between the various
cachelets for normal memory accesses.  The semantics of the Alpha removes the
need for coordination in the absence of memory barriers.


CACHE COHERENCY VS DMA
----------------------

Not all systems maintain cache coherency with respect to devices doing DMA.  In
such cases, a device attempting DMA may obtain stale data from RAM because
dirty cache lines may be resident in the caches of various CPUs, and may not
have been written back to RAM yet.  To deal with this, the appropriate part of
the kernel must flush the overlapping bits of cache on each CPU (and maybe
invalidate them as well).

In addition, the data DMA'd to RAM by a device may be overwritten by dirty
cache lines being written back to RAM from a CPU's cache after the device has
installed its own data, or cache lines present in the CPU's cache may simply
obscure the fact that RAM has been updated, until such time as the cacheline
is discarded from the CPU's cache and reloaded.  To deal with this, the
appropriate part of the kernel must invalidate the overlapping bits of the
cache on each CPU.

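In practice a driver rarely performs these flushes and invalidations by hand;
the streaming DMA mapping API does whatever the architecture requires for the
direction of transfer.  The following is a minimal sketch of the usual
pattern, assuming a device "dev" and a kmalloc'd buffer "buf" of "len" bytes
(see Documentation/DMA-API.txt for the full rules):

	/* CPU -> device: flushes any dirty cachelines covering buf */
	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

	/* ... point the device at "handle" and start the transfer ... */

	dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);

	/* device -> CPU: invalidates cachelines so stale data isn't seen */
	handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	/* ... start the transfer and wait for it to complete ... */

	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
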
See Documentation/cachetlb.txt for more information on cache management.


CACHE COHERENCY VS MMIO
-----------------------

Memory mapped I/O usually takes place through memory locations that are part of
a window in the CPU's memory space that has different properties assigned than
the usual RAM directed window.

Amongst these properties is usually the fact that such accesses bypass the
caching entirely and go directly to the device buses.  This means MMIO accesses
may, in effect, overtake accesses to cached memory that were emitted earlier.
A memory barrier isn't sufficient in such a case, but rather the cache must be
flushed between the cached memory write and the MMIO access if the two are in
any way dependent.

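For this reason, drivers normally keep structures that the device will read
via DMA in memory obtained from dma_alloc_coherent() - taking the CPU cache
out of the picture - and still interpose a mandatory write barrier before the
MMIO access that starts the device.  A minimal sketch, in which the
descriptor layout and the DOORBELL register offset are invented for
illustration:

	/* "desc" lies in a dma_alloc_coherent() buffer; "iobase" came from
	 * ioremap() */
	desc->addr = cpu_to_le32(buf_dma);
	desc->len  = cpu_to_le32(len);

	wmb();				/* commit the descriptor ... */

	writel(1, iobase + DOORBELL);	/* ... before the device looks at it */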

=========================
THE THINGS CPUS GET UP TO
=========================

A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:

	a = ACCESS_ONCE(*A);
	ACCESS_ONCE(*B) = b;
	c = ACCESS_ONCE(*C);
	d = ACCESS_ONCE(*D);
	ACCESS_ONCE(*E) = e;

they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
operations as seen by external observers in the system:

	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.


Reality is, of course, much messier.  With many CPUs and compilers, the above
assumption doesn't hold because:

 (*) loads are more likely to need to be completed immediately to permit
     execution progress, whereas stores can often be deferred without a
     problem;

 (*) loads may be done speculatively, and the result discarded should it prove
     to have been unnecessary;

 (*) loads may be done speculatively, leading to the result having been fetched
     at the wrong time in the expected sequence of events;

 (*) the order of the memory accesses may be rearranged to promote better use
     of the CPU buses and caches;

 (*) loads and stores may be combined to improve performance when talking to
     memory or I/O hardware that can do batched accesses of adjacent locations,
     thus cutting down on transaction setup costs (memory and PCI devices may
     both be able to do this); and

 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
     mechanisms may alleviate this - once the store has actually hit the cache
     - there's no guarantee that the coherency management will be propagated in
     order to other CPUs.

So what another CPU, say, might actually observe from the above piece of code
is:

	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B

	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;
	X = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.  Note that ACCESS_ONCE() is -not- optional
in the above example, as there are architectures where a given CPU might
reorder successive loads to the same location.  On such architectures,
ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions that prevent such reordering.

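For reference, ACCESS_ONCE() is defined in include/linux/compiler.h as
nothing more than a volatile cast:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

The volatile access obliges the compiler to emit exactly one load or store
per use, and on Itanium it is this volatile qualification that causes GCC to
select the ld.acq and st.rel instruction forms.
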
The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.

For instance:

	*A = V;
	*A = W;

may be reduced to:

	*A = W;

since, without either a write barrier or an ACCESS_ONCE(), it can be
assumed that the effect of the store of V to *A is lost.  Similarly:

	*A = Y;
	Z = *A;

may, without a memory barrier or an ACCESS_ONCE(), be reduced to:

	*A = Y;
	Z = Y;

and the LOAD operation never appears outside of the CPU.

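Both transformations are forbidden if the accesses are wrapped in
ACCESS_ONCE().  For example:

	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;

obliges the compiler to emit both stores, and:

	*A = Y;
	Z = ACCESS_ONCE(*A);

obliges it to perform the load rather than reusing the value Y.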

AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary as this synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes vs new data occur in the right order.

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
for a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.

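As a taster, here is a minimal sketch of the technique described there, using
the CIRC_SPACE() and CIRC_CNT() helpers from linux/circ_buf.h.  The buffer
layout is invented for the example, locking and wakeups are omitted, and
smp_rmb() is used on the consumer side because the ordering needed there is
between two independent loads:

	/* producer */
	unsigned long head = b->head;
	unsigned long tail = ACCESS_ONCE(b->tail);

	if (CIRC_SPACE(head, tail, b->size) >= 1) {
		b->buf[head] = item;	/* write the item first... */
		smp_wmb();		/* ...then publish the new head */
		b->head = (head + 1) & (b->size - 1);
	}

	/* consumer */
	unsigned long head = ACCESS_ONCE(b->head);
	unsigned long tail = b->tail;

	if (CIRC_CNT(head, tail, b->size) >= 1) {
		smp_rmb();		/* read the index before the item */
		item = b->buf[tail];
		smp_mb();		/* finish with the item before... */
		b->tail = (tail + 1) & (b->size - 1);	/* ...freeing the slot */
	}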

==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access
