1			 ============================
2			 LINUX KERNEL MEMORY BARRIERS
3			 ============================
4
5By: David Howells <dhowells@redhat.com>
6    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7
8Contents:
9
10 (*) Abstract memory access model.
11
12     - Device operations.
13     - Guarantees.
14
15 (*) What are memory barriers?
16
17     - Varieties of memory barrier.
18     - What may not be assumed about memory barriers?
19     - Data dependency barriers.
20     - Control dependencies.
21     - SMP barrier pairing.
22     - Examples of memory barrier sequences.
23     - Read memory barriers vs load speculation.
     - Transitivity.
25
26 (*) Explicit kernel barriers.
27
28     - Compiler barrier.
29     - CPU memory barriers.
30     - MMIO write barrier.
31
32 (*) Implicit kernel memory barriers.
33
34     - Locking functions.
35     - Interrupt disabling functions.
36     - Sleep and wake-up functions.
37     - Miscellaneous functions.
38
39 (*) Inter-CPU locking barrier effects.
40
41     - Locks vs memory accesses.
42     - Locks vs I/O accesses.
43
44 (*) Where are memory barriers needed?
45
46     - Interprocessor interaction.
47     - Atomic operations.
48     - Accessing devices.
49     - Interrupts.
50
51 (*) Kernel I/O barrier effects.
52
53 (*) Assumed minimum execution ordering model.
54
 (*) The effects of the CPU cache.
56
57     - Cache coherency.
58     - Cache coherency vs DMA.
59     - Cache coherency vs MMIO.
60
61 (*) The things CPUs get up to.
62
63     - And then there's the Alpha.
64
65 (*) Example uses.
66
67     - Circular buffers.
68
69 (*) References.
70
71
72============================
73ABSTRACT MEMORY ACCESS MODEL
74============================
75
76Consider the following abstract model of the system:
77
78		            :                :
79		            :                :
80		            :                :
81		+-------+   :   +--------+   :   +-------+
82		|       |   :   |        |   :   |       |
83		|       |   :   |        |   :   |       |
84		| CPU 1 |<----->| Memory |<----->| CPU 2 |
85		|       |   :   |        |   :   |       |
86		|       |   :   |        |   :   |       |
87		+-------+   :   +--------+   :   +-------+
88		    ^       :       ^        :       ^
89		    |       :       |        :       |
90		    |       :       |        :       |
91		    |       :       v        :       |
92		    |       :   +--------+   :       |
93		    |       :   |        |   :       |
94		    |       :   |        |   :       |
95		    +---------->| Device |<----------+
96		            :   |        |   :
97		            :   |        |   :
98		            :   +--------+   :
99		            :                :
100
101Each CPU executes a program that generates memory access operations.  In the
102abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
103perform the memory operations in any order it likes, provided program causality
104appears to be maintained.  Similarly, the compiler may also arrange the
105instructions it emits in any order it likes, provided it doesn't affect the
106apparent operation of the program.
107
108So in the above diagram, the effects of the memory operations performed by a
109CPU are perceived by the rest of the system as the operations cross the
110interface between the CPU and rest of the system (the dotted lines).
111
112
113For example, consider the following sequence of events:
114
115	CPU 1		CPU 2
116	===============	===============
117	{ A == 1; B == 2 }
118	A = 3;		x = B;
119	B = 4;		y = A;
120
121The set of accesses as seen by the memory system in the middle can be arranged
122in 24 different combinations:
123
	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
130	STORE B=4,	STORE A=3,	x=LOAD A->3,	y=LOAD B->4
131	STORE B=4, ...
132	...
133
134and can thus result in four different combinations of values:
135
	x == 2, y == 1
	x == 2, y == 3
	x == 4, y == 1
	x == 4, y == 3
140
141
142Furthermore, the stores committed by a CPU to the memory system may not be
143perceived by the loads made by another CPU in the same order as the stores were
144committed.
145
146
147As a further example, consider this sequence of events:
148
149	CPU 1		CPU 2
150	===============	===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;		Q = P;
	P = &B;		D = *Q;
154
155There is an obvious data dependency here, as the value loaded into D depends on
156the address retrieved from P by CPU 2.  At the end of the sequence, any of the
157following results are possible:
158
159	(Q == &A) and (D == 1)
160	(Q == &B) and (D == 2)
161	(Q == &B) and (D == 4)
162
Note that CPU 2 will never try to load C into D because the CPU will load P
164into Q before issuing the load of *Q.
165
166
167DEVICE OPERATIONS
168-----------------
169
170Some devices present their control interfaces as collections of memory
171locations, but the order in which the control registers are accessed is very
172important.  For instance, imagine an ethernet card with a set of internal
173registers that are accessed through an address port register (A) and a data
174port register (D).  To read internal register 5, the following code might then
175be used:
176
177	*A = 5;
178	x = *D;
179
180but this might show up as either of the following two sequences:
181
182	STORE *A = 5, x = LOAD *D
183	x = LOAD *D, STORE *A = 5
184
the second of which will almost certainly result in a malfunction, since it
sets the address _after_ attempting to read the register.
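
In the Linux kernel, accesses like these would normally go through MMIO
accessor functions such as writel() and readl().  As a minimal sketch
(assuming the card's address and data ports have been ioremap()ed to the
hypothetical pointers addr_port and data_port):

	void __iomem *addr_port;	/* assumed mapping of register A */
	void __iomem *data_port;	/* assumed mapping of register D */
	u32 x;

	writel(5, addr_port);		/* select internal register 5 */
	x = readl(data_port);		/* then read its value */

The ordering properties of readl() and writel() themselves are discussed
in the "Kernel I/O barrier effects" section.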
187
188
189GUARANTEES
190----------
191
192There are some minimal guarantees that may be expected of a CPU:
193
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
195     respect to itself.  This means that for:
196
	Q = ACCESS_ONCE(P); smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
198
199     the CPU will issue the following memory operations:
200
201	Q = LOAD P, D = LOAD *Q
202
203     and always in that order.  On most systems, smp_read_barrier_depends()
204     does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
205     is required to prevent compiler mischief.  Please note that you
206     should normally use something like rcu_dereference() instead of
207     open-coding smp_read_barrier_depends().
208
209 (*) Overlapping loads and stores within a particular CPU will appear to be
210     ordered within that CPU.  This means that for:
211
212	a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
213
214     the CPU will only issue the following sequence of memory operations:
215
216	a = LOAD *X, STORE *X = b
217
218     And for:
219
220	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
221
222     the CPU will only issue:
223
224	STORE *X = c, d = LOAD *X
225
226     (Loads and stores overlap if they are targeted at overlapping pieces of
227     memory).
228
229And there are a number of things that _must_ or _must_not_ be assumed:
230
231 (*) It _must_not_ be assumed that the compiler will do what you want with
232     memory references that are not protected by ACCESS_ONCE().  Without
233     ACCESS_ONCE(), the compiler is within its rights to do all sorts
234     of "creative" transformations, which are covered in the Compiler
235     Barrier section.
236
237 (*) It _must_not_ be assumed that independent loads and stores will be issued
238     in the order given.  This means that for:
239
240	X = *A; Y = *B; *D = Z;
241
242     we may get any of the following sequences:
243
244	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
245	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
246	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
247	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
248	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
249	STORE *D = Z, Y = LOAD *B,  X = LOAD *A
250
251 (*) It _must_ be assumed that overlapping memory accesses may be merged or
252     discarded.  This means that for:
253
254	X = *A; Y = *(A + 4);
255
256     we may get any one of the following sequences:
257
258	X = LOAD *A; Y = LOAD *(A + 4);
259	Y = LOAD *(A + 4); X = LOAD *A;
260	{X, Y} = LOAD {*A, *(A + 4) };
261
262     And for:
263
264	*A = X; *(A + 4) = Y;
265
266     we may get any of:
267
268	STORE *A = X; STORE *(A + 4) = Y;
269	STORE *(A + 4) = Y; STORE *A = X;
270	STORE {*A, *(A + 4) } = {X, Y};
271
272
273=========================
274WHAT ARE MEMORY BARRIERS?
275=========================
276
277As can be seen above, independent memory operations are effectively performed
278in random order, but this can be a problem for CPU-CPU interaction and for I/O.
279What is required is some way of intervening to instruct the compiler and the
280CPU to restrict the order.
281
282Memory barriers are such interventions.  They impose a perceived partial
283ordering over the memory operations on either side of the barrier.
284
285Such enforcement is important because the CPUs and other devices in a system
286can use a variety of tricks to improve performance, including reordering,
287deferral and combination of memory operations; speculative loads; speculative
288branch prediction and various types of caching.  Memory barriers are used to
289override or suppress these tricks, allowing the code to sanely control the
290interaction of multiple CPUs and/or devices.
291
292
293VARIETIES OF MEMORY BARRIER
294---------------------------
295
296Memory barriers come in four basic varieties:
297
298 (1) Write (or store) memory barriers.
299
300     A write memory barrier gives a guarantee that all the STORE operations
301     specified before the barrier will appear to happen before all the STORE
302     operations specified after the barrier with respect to the other
303     components of the system.
304
305     A write barrier is a partial ordering on stores only; it is not required
306     to have any effect on loads.
307
308     A CPU can be viewed as committing a sequence of store operations to the
309     memory system as time progresses.  All stores before a write barrier will
310     occur in the sequence _before_ all the stores after the write barrier.
311
312     [!] Note that write barriers should normally be paired with read or data
313     dependency barriers; see the "SMP barrier pairing" subsection.
314
315
316 (2) Data dependency barriers.
317
318     A data dependency barrier is a weaker form of read barrier.  In the case
319     where two loads are performed such that the second depends on the result
320     of the first (eg: the first load retrieves the address to which the second
321     load will be directed), a data dependency barrier would be required to
322     make sure that the target of the second load is updated before the address
323     obtained by the first load is accessed.
324
325     A data dependency barrier is a partial ordering on interdependent loads
326     only; it is not required to have any effect on stores, independent loads
327     or overlapping loads.
328
329     As mentioned in (1), the other CPUs in the system can be viewed as
330     committing sequences of stores to the memory system that the CPU being
331     considered can then perceive.  A data dependency barrier issued by the CPU
332     under consideration guarantees that for any load preceding it, if that
333     load touches one of a sequence of stores from another CPU, then by the
334     time the barrier completes, the effects of all the stores prior to that
335     touched by the load will be perceptible to any loads issued after the data
336     dependency barrier.
337
338     See the "Examples of memory barrier sequences" subsection for diagrams
339     showing the ordering constraints.
340
341     [!] Note that the first load really has to have a _data_ dependency and
342     not a control dependency.  If the address for the second load is dependent
343     on the first load, but the dependency is through a conditional rather than
344     actually loading the address itself, then it's a _control_ dependency and
345     a full read barrier or better is required.  See the "Control dependencies"
346     subsection for more information.
347
348     [!] Note that data dependency barriers should normally be paired with
349     write barriers; see the "SMP barrier pairing" subsection.
350
351
352 (3) Read (or load) memory barriers.
353
354     A read barrier is a data dependency barrier plus a guarantee that all the
355     LOAD operations specified before the barrier will appear to happen before
356     all the LOAD operations specified after the barrier with respect to the
357     other components of the system.
358
359     A read barrier is a partial ordering on loads only; it is not required to
360     have any effect on stores.
361
362     Read memory barriers imply data dependency barriers, and so can substitute
363     for them.
364
365     [!] Note that read barriers should normally be paired with write barriers;
366     see the "SMP barrier pairing" subsection.
367
368
369 (4) General memory barriers.
370
371     A general memory barrier gives a guarantee that all the LOAD and STORE
372     operations specified before the barrier will appear to happen before all
373     the LOAD and STORE operations specified after the barrier with respect to
374     the other components of the system.
375
376     A general memory barrier is a partial ordering over both loads and stores.
377
378     General memory barriers imply both read and write memory barriers, and so
379     can substitute for either.
380
381
382And a couple of implicit varieties:
383
384 (5) ACQUIRE operations.
385
386     This acts as a one-way permeable barrier.  It guarantees that all memory
387     operations after the ACQUIRE operation will appear to happen after the
388     ACQUIRE operation with respect to the other components of the system.
389     ACQUIRE operations include LOCK operations and smp_load_acquire()
390     operations.
391
392     Memory operations that occur before an ACQUIRE operation may appear to
393     happen after it completes.
394
395     An ACQUIRE operation should almost always be paired with a RELEASE
396     operation.
397
398
399 (6) RELEASE operations.
400
401     This also acts as a one-way permeable barrier.  It guarantees that all
402     memory operations before the RELEASE operation will appear to happen
403     before the RELEASE operation with respect to the other components of the
404     system. RELEASE operations include UNLOCK operations and
405     smp_store_release() operations.
406
407     Memory operations that occur after a RELEASE operation may appear to
408     happen before it completes.
409
410     The use of ACQUIRE and RELEASE operations generally precludes the need
411     for other sorts of memory barrier (but note the exceptions mentioned in
412     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
413     pair is -not- guaranteed to act as a full memory barrier.  However, after
414     an ACQUIRE on a given variable, all memory accesses preceding any prior
415     RELEASE on that same variable are guaranteed to be visible.  In other
416     words, within a given variable's critical section, all accesses of all
417     previous critical sections for that variable are guaranteed to have
418     completed.
419
420     This means that ACQUIRE acts as a minimal "acquire" operation and
421     RELEASE acts as a minimal "release" operation.
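
As a minimal sketch of such a pairing (assuming shared variables 'msg' and
'ready', both initially zero):

	/* CPU 1 (producer) */
	ACCESS_ONCE(msg) = 42;
	smp_store_release(&ready, 1);	/* RELEASE: keeps the store to
					 * msg before the store to ready */

	/* CPU 2 (consumer) */
	if (smp_load_acquire(&ready))	/* ACQUIRE: keeps the load of
					 * msg after the load of ready */
		BUG_ON(ACCESS_ONCE(msg) != 42);

If CPU 2 sees 'ready' as 1, it is guaranteed to see 'msg' as 42.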
422
423
424Memory barriers are only required where there's a possibility of interaction
425between two CPUs or between a CPU and a device.  If it can be guaranteed that
426there won't be any such interaction in any particular piece of code, then
427memory barriers are unnecessary in that piece of code.
428
429
430Note that these are the _minimum_ guarantees.  Different architectures may give
431more substantial guarantees, but they may _not_ be relied upon outside of arch
432specific code.
433
434
435WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
436----------------------------------------------
437
438There are certain things that the Linux kernel memory barriers do not guarantee:
439
440 (*) There is no guarantee that any of the memory accesses specified before a
441     memory barrier will be _complete_ by the completion of a memory barrier
442     instruction; the barrier can be considered to draw a line in that CPU's
443     access queue that accesses of the appropriate type may not cross.
444
445 (*) There is no guarantee that issuing a memory barrier on one CPU will have
446     any direct effect on another CPU or any other hardware in the system.  The
447     indirect effect will be the order in which the second CPU sees the effects
448     of the first CPU's accesses occur, but see the next point:
449
450 (*) There is no guarantee that a CPU will see the correct order of effects
451     from a second CPU's accesses, even _if_ the second CPU uses a memory
452     barrier, unless the first CPU _also_ uses a matching memory barrier (see
453     the subsection on "SMP Barrier Pairing").
454
455 (*) There is no guarantee that some intervening piece of off-the-CPU
456     hardware[*] will not reorder the memory accesses.  CPU cache coherency
457     mechanisms should propagate the indirect effects of a memory barrier
458     between CPUs, but might not do so in order.
459
460	[*] For information on bus mastering DMA and coherency please read:
461
462	    Documentation/PCI/pci.txt
463	    Documentation/DMA-API-HOWTO.txt
464	    Documentation/DMA-API.txt
465
466
467DATA DEPENDENCY BARRIERS
468------------------------
469
470The usage requirements of data dependency barriers are a little subtle, and
471it's not always obvious that they're needed.  To illustrate, consider the
472following sequence of events:
473
474	CPU 1		      CPU 2
475	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
480			      Q = ACCESS_ONCE(P);
481			      D = *Q;
482
483There's a clear data dependency here, and it would seem that by the end of the
484sequence, Q must be either &A or &B, and that:
485
486	(Q == &A) implies (D == 1)
487	(Q == &B) implies (D == 4)
488
489But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
490leading to the following situation:
491
492	(Q == &B) and (D == 2) ????
493
494Whilst this may seem like a failure of coherency or causality maintenance, it
495isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
496Alpha).
497
498To deal with this, a data dependency barrier or better must be inserted
499between the address load and the data load:
500
501	CPU 1		      CPU 2
502	===============	      ===============
	{ A == 1, B == 2, C == 3, P == &A, Q == &C }
	B = 4;
	<write barrier>
	ACCESS_ONCE(P) = &B;
507			      Q = ACCESS_ONCE(P);
508			      <data dependency barrier>
509			      D = *Q;
510
511This enforces the occurrence of one of the two implications, and prevents the
512third possibility from arising.
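
In concrete kernel terms, a minimal sketch of the above with the barriers
written out might look like this:

	/* CPU 1 */
	B = 4;
	smp_wmb();			/* <write barrier> */
	ACCESS_ONCE(P) = &B;

	/* CPU 2 */
	Q = ACCESS_ONCE(P);
	smp_read_barrier_depends();	/* <data dependency barrier> */
	D = *Q;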
513
514[!] Note that this extremely counterintuitive situation arises most easily on
515machines with split caches, so that, for example, one cache bank processes
516even-numbered cache lines and the other bank processes odd-numbered cache
517lines.  The pointer P might be stored in an odd-numbered cache line, and the
518variable B might be stored in an even-numbered cache line.  Then, if the
519even-numbered bank of the reading CPU's cache is extremely busy while the
520odd-numbered bank is idle, one can see the new value of the pointer P (&B),
521but the old value of the variable B (2).
522
523
524Another example of where data dependency barriers might be required is where a
525number is read from memory and then used to calculate the index for an array
526access:
527
528	CPU 1		      CPU 2
529	===============	      ===============
	{ M[0] == 1, M[1] == 2, M[3] == 3, P == 0, Q == 3 }
	M[1] = 4;
	<write barrier>
	ACCESS_ONCE(P) = 1;
534			      Q = ACCESS_ONCE(P);
535			      <data dependency barrier>
536			      D = M[Q];
537
538
539The data dependency barrier is very important to the RCU system,
540for example.  See rcu_assign_pointer() and rcu_dereference() in
541include/linux/rcupdate.h.  This permits the current target of an RCU'd
542pointer to be replaced with a new modified target, without the replacement
543target appearing to be incompletely initialised.
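
As a minimal sketch of that idiom (assuming a hypothetical RCU-protected
global pointer 'gp' and a reader running inside
rcu_read_lock()/rcu_read_unlock()):

	/* Updater: initialise the new structure, then publish it;
	 * rcu_assign_pointer() supplies the needed write barrier. */
	p->a = 1;
	rcu_assign_pointer(gp, p);

	/* Reader: rcu_dereference() supplies the data dependency
	 * barrier protecting the subsequent dereference. */
	q = rcu_dereference(gp);
	if (q)
		do_something_with(q->a);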
544
545See also the subsection on "Cache Coherency" for a more thorough example.
546
547
548CONTROL DEPENDENCIES
549--------------------
550
A control dependency requires a full read memory barrier, not simply a data
dependency barrier, to make it work correctly.  Consider the following bit of
553code:
554
555	q = ACCESS_ONCE(a);
556	if (q) {
557		<data dependency barrier>  /* BUG: No data dependency!!! */
558		p = ACCESS_ONCE(b);
559	}
560
561This will not have the desired effect because there is no actual data
562dependency, but rather a control dependency that the CPU may short-circuit
563by attempting to predict the outcome in advance, so that other CPUs see
564the load from b as having happened before the load from a.  In such a
565case what's actually required is:
566
567	q = ACCESS_ONCE(a);
568	if (q) {
569		<read barrier>
570		p = ACCESS_ONCE(b);
571	}
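
In concrete kernel terms, the <read barrier> above would typically be
smp_rmb():

	q = ACCESS_ONCE(a);
	if (q) {
		smp_rmb();
		p = ACCESS_ONCE(b);
	}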
572
573However, stores are not speculated.  This means that ordering -is- provided
574in the following example:
575
576	q = ACCESS_ONCE(a);
577	if (q) {
578		ACCESS_ONCE(b) = p;
579	}
580
Please note that ACCESS_ONCE() is not optional!  Without the
ACCESS_ONCE(), the compiler might combine the load from 'a' with other
loads from 'a', and the store to 'b' with other stores to 'b', with
possible highly counterintuitive effects on ordering.
585
586Worse yet, if the compiler is able to prove (say) that the value of
587variable 'a' is always non-zero, it would be well within its rights
588to optimize the original example by eliminating the "if" statement
589as follows:
590
591	q = a;
592	b = p;  /* BUG: Compiler and CPU can both reorder!!! */
593
594So don't leave out the ACCESS_ONCE().
595
596It is tempting to try to enforce ordering on identical stores on both
597branches of the "if" statement as follows:
598
599	q = ACCESS_ONCE(a);
600	if (q) {
601		barrier();
602		ACCESS_ONCE(b) = p;
603		do_something();
604	} else {
605		barrier();
606		ACCESS_ONCE(b) = p;
607		do_something_else();
608	}
609
610Unfortunately, current compilers will transform this as follows at high
611optimization levels:
612
613	q = ACCESS_ONCE(a);
614	barrier();
615	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
616	if (q) {
617		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
618		do_something();
619	} else {
620		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
621		do_something_else();
622	}
623
624Now there is no conditional between the load from 'a' and the store to
625'b', which means that the CPU is within its rights to reorder them:
626The conditional is absolutely required, and must be present in the
627assembly code even after all compiler optimizations have been applied.
628Therefore, if you need ordering in this example, you need explicit
629memory barriers, for example, smp_store_release():
630
631	q = ACCESS_ONCE(a);
632	if (q) {
633		smp_store_release(&b, p);
634		do_something();
635	} else {
636		smp_store_release(&b, p);
637		do_something_else();
638	}
639
640In contrast, without explicit memory barriers, two-legged-if control
641ordering is guaranteed only when the stores differ, for example:
642
643	q = ACCESS_ONCE(a);
644	if (q) {
645		ACCESS_ONCE(b) = p;
646		do_something();
647	} else {
648		ACCESS_ONCE(b) = r;
649		do_something_else();
650	}
651
652The initial ACCESS_ONCE() is still required to prevent the compiler from
653proving the value of 'a'.
654
655In addition, you need to be careful what you do with the local variable 'q',
656otherwise the compiler might be able to guess the value and again remove
657the needed conditional.  For example:
658
659	q = ACCESS_ONCE(a);
660	if (q % MAX) {
661		ACCESS_ONCE(b) = p;
662		do_something();
663	} else {
664		ACCESS_ONCE(b) = r;
665		do_something_else();
666	}
667
668If MAX is defined to be 1, then the compiler knows that (q % MAX) is
669equal to zero, in which case the compiler is within its rights to
670transform the above code into the following:
671
672	q = ACCESS_ONCE(a);
673	ACCESS_ONCE(b) = p;
674	do_something_else();
675
676Given this transformation, the CPU is not required to respect the ordering
677between the load from variable 'a' and the store to variable 'b'.  It is
678tempting to add a barrier(), but this does not help.  The conditional
679is gone, and the barrier won't bring it back.  Therefore, if you are
680relying on this ordering, you should make sure that MAX is greater than
681one, perhaps as follows:
682
683	q = ACCESS_ONCE(a);
684	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
685	if (q % MAX) {
686		ACCESS_ONCE(b) = p;
687		do_something();
688	} else {
689		ACCESS_ONCE(b) = r;
690		do_something_else();
691	}
692
693Please note once again that the stores to 'b' differ.  If they were
694identical, as noted earlier, the compiler could pull this store outside
695of the 'if' statement.
696
697Finally, control dependencies do -not- provide transitivity.  This is
698demonstrated by two related examples, with the initial values of
699x and y both being zero:
700
701	CPU 0                     CPU 1
702	=====================     =====================
703	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
704	if (r1 > 0)               if (r2 > 0)
705	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;
706
707	assert(!(r1 == 1 && r2 == 1));
708
709The above two-CPU example will never trigger the assert().  However,
710if control dependencies guaranteed transitivity (which they do not),
711then adding the following CPU would guarantee a related assertion:
712
713	CPU 2
714	=====================
715	ACCESS_ONCE(x) = 2;
716
717	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
718
719But because control dependencies do -not- provide transitivity, the above
720assertion can fail after the combined three-CPU example completes.  If you
721need the three-CPU example to provide ordering, you will need smp_mb()
722between the loads and stores in the CPU 0 and CPU 1 code fragments,
723that is, just before or just after the "if" statements.
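
As a sketch, CPU 0's fragment might then read:

	r1 = ACCESS_ONCE(x);
	smp_mb();	/* provides the transitivity that the control
			 * dependency alone does not */
	if (r1 > 0)
		ACCESS_ONCE(y) = 1;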
724
725These two examples are the LB and WWC litmus tests from this paper:
726http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
727site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
728
729In summary:
730
731  (*) Control dependencies can order prior loads against later stores.
732      However, they do -not- guarantee any other sort of ordering:
733      Not prior loads against later loads, nor prior stores against
734      later anything.  If you need these other forms of ordering,
      use smp_rmb(), smp_wmb(), or, in the case of prior stores and
736      later loads, smp_mb().
737
738  (*) If both legs of the "if" statement begin with identical stores
739      to the same variable, a barrier() statement is required at the
740      beginning of each leg of the "if" statement.
741
742  (*) Control dependencies require at least one run-time conditional
743      between the prior load and the subsequent store, and this
744      conditional must involve the prior load.  If the compiler
745      is able to optimize the conditional away, it will have also
746      optimized away the ordering.  Careful use of ACCESS_ONCE() can
747      help to preserve the needed conditional.
748
749  (*) Control dependencies require that the compiler avoid reordering the
750      dependency into nonexistence.  Careful use of ACCESS_ONCE() or
751      barrier() can help to preserve your control dependency.  Please
752      see the Compiler Barrier section for more information.
753
754  (*) Control dependencies do -not- provide transitivity.  If you
755      need transitivity, use smp_mb().
756
757
758SMP BARRIER PAIRING
759-------------------
760
761When dealing with CPU-CPU interactions, certain types of memory barrier should
762always be paired.  A lack of appropriate pairing is almost certainly an error.
763
764General barriers pair with each other, though they also pair with
765most other types of barriers, albeit without transitivity.  An acquire
766barrier pairs with a release barrier, but both may also pair with other
767barriers, including of course general barriers.  A write barrier pairs
768with a data dependency barrier, an acquire barrier, a release barrier,
769a read barrier, or a general barrier.  Similarly a read barrier or a
770data dependency barrier pairs with a write barrier, an acquire barrier,
771a release barrier, or a general barrier:
772
773	CPU 1		      CPU 2
774	===============	      ===============
775	ACCESS_ONCE(a) = 1;
776	<write barrier>
777	ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
778			      <read barrier>
779			      y = ACCESS_ONCE(a);
780
781Or:
782
783	CPU 1		      CPU 2
784	===============	      ===============================
785	a = 1;
786	<write barrier>
787	ACCESS_ONCE(b) = &a;  x = ACCESS_ONCE(b);
788			      <data dependency barrier>
789			      y = *x;
790
791Basically, the read barrier always has to be there, even though it can be of
792the "weaker" type.
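
In concrete kernel terms, a minimal sketch of the first pairing above:

	/* CPU 1 */
	ACCESS_ONCE(a) = 1;
	smp_wmb();		/* pairs with CPU 2's smp_rmb() */
	ACCESS_ONCE(b) = 2;

	/* CPU 2 */
	x = ACCESS_ONCE(b);
	smp_rmb();		/* pairs with CPU 1's smp_wmb() */
	y = ACCESS_ONCE(a);

If x turns out to be 2, y is then guaranteed to be 1.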
793
794[!] Note that the stores before the write barrier would normally be expected to
795match the loads after the read barrier or the data dependency barrier, and vice
796versa:
797
798	CPU 1                               CPU 2
799	===================                 ===================
800	ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
801	ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
802	<write barrier>            \        <read barrier>
803	ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
804	ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);
805
806
807EXAMPLES OF MEMORY BARRIER SEQUENCES
808------------------------------------
809
810Firstly, write barriers act as partial orderings on store operations.
811Consider the following sequence of events:
812
813	CPU 1
814	=======================
815	STORE A = 1
816	STORE B = 2
817	STORE C = 3
818	<write barrier>
819	STORE D = 4
820	STORE E = 5
821
822This sequence of events is committed to the memory coherence system in an order
823that the rest of the system might perceive as the unordered set of { STORE A,
824STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
825}:
826
827	+-------+       :      :
828	|       |       +------+
829	|       |------>| C=3  |     }     /\
830	|       |  :    +------+     }-----  \  -----> Events perceptible to
831	|       |  :    | A=1  |     }        \/       the rest of the system
832	|       |  :    +------+     }
833	| CPU 1 |  :    | B=2  |     }
834	|       |       +------+     }
835	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
836	|       |       +------+     }        requires all stores prior to the
837	|       |  :    | E=5  |     }        barrier to be committed before
838	|       |  :    +------+     }        further stores may take place
839	|       |------>| D=4  |     }
840	|       |       +------+
841	+-------+       :      :
842	                   |
843	                   | Sequence in which stores are committed to the
844	                   | memory system by CPU 1
845	                   V
846
847
848Secondly, data dependency barriers act as partial orderings on data-dependent
849loads.  Consider the following sequence of events:
850
851	CPU 1			CPU 2
852	=======================	=======================
853		{ B = 7; X = 9; Y = 8; C = &Y }
854	STORE A = 1
855	STORE B = 2
856	<write barrier>
857	STORE C = &B		LOAD X
858	STORE D = 4		LOAD C (gets &B)
859				LOAD *C (reads B)
860
861Without intervention, CPU 2 may perceive the events on CPU 1 in some
862effectively random order, despite the write barrier issued by CPU 1:
863
864	+-------+       :      :                :       :
865	|       |       +------+                +-------+  | Sequence of update
866	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
867	|       |  :    +------+     \          +-------+  | CPU 2
868	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
869	|       |       +------+       |        +-------+
870	|       |   wwwwwwwwwwwwwwww   |        :       :
871	|       |       +------+       |        :       :
872	|       |  :    | C=&B |---    |        :       :       +-------+
873	|       |  :    +------+   \   |        +-------+       |       |
874	|       |------>| D=4  |    ----------->| C->&B |------>|       |
875	|       |       +------+       |        +-------+       |       |
876	+-------+       :      :       |        :       :       |       |
877	                               |        :       :       |       |
878	                               |        :       :       | CPU 2 |
879	                               |        +-------+       |       |
880	    Apparently incorrect --->  |        | B->7  |------>|       |
881	    perception of B (!)        |        +-------+       |       |
882	                               |        :       :       |       |
883	                               |        +-------+       |       |
884	    The load of X holds --->    \       | X->9  |------>|       |
885	    up the maintenance           \      +-------+       |       |
886	    of coherence of B             ----->| B->2  |       +-------+
887	                                        +-------+
888	                                        :       :
889
890
891In the above example, CPU 2 perceives that B is 7, despite the load of *C
892(which would be B) coming after the LOAD of C.
893
894If, however, a data dependency barrier were to be placed between the load of C
895and the load of *C (ie: B) on CPU 2:
896
897	CPU 1			CPU 2
898	=======================	=======================
899		{ B = 7; X = 9; Y = 8; C = &Y }
900	STORE A = 1
901	STORE B = 2
902	<write barrier>
903	STORE C = &B		LOAD X
904	STORE D = 4		LOAD C (gets &B)
905				<data dependency barrier>
906				LOAD *C (reads B)
907
908then the following will occur:
909
910	+-------+       :      :                :       :
911	|       |       +------+                +-------+
912	|       |------>| B=2  |-----       --->| Y->8  |
913	|       |  :    +------+     \          +-------+
914	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
915	|       |       +------+       |        +-------+
916	|       |   wwwwwwwwwwwwwwww   |        :       :
917	|       |       +------+       |        :       :
918	|       |  :    | C=&B |---    |        :       :       +-------+
919	|       |  :    +------+   \   |        +-------+       |       |
920	|       |------>| D=4  |    ----------->| C->&B |------>|       |
921	|       |       +------+       |        +-------+       |       |
922	+-------+       :      :       |        :       :       |       |
923	                               |        :       :       |       |
924	                               |        :       :       | CPU 2 |
925	                               |        +-------+       |       |
926	                               |        | X->9  |------>|       |
927	                               |        +-------+       |       |
928	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
929	  prior to the store of C        \      +-------+       |       |
930	  are perceptible to              ----->| B->2  |------>|       |
931	  subsequent loads                      +-------+       |       |
932	                                        :       :       +-------+
933
934
935And thirdly, a read barrier acts as a partial order on loads.  Consider the
936following sequence of events:
937
938	CPU 1			CPU 2
939	=======================	=======================
940		{ A = 0, B = 9 }
941	STORE A=1
942	<write barrier>
943	STORE B=2
944				LOAD B
945				LOAD A
946
947Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
948some effectively random order, despite the write barrier issued by CPU 1:
949
950	+-------+       :      :                :       :
951	|       |       +------+                +-------+
952	|       |------>| A=1  |------      --->| A->0  |
953	|       |       +------+      \         +-------+
954	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
955	|       |       +------+        |       +-------+
956	|       |------>| B=2  |---     |       :       :
957	|       |       +------+   \    |       :       :       +-------+
958	+-------+       :      :    \   |       +-------+       |       |
959	                             ---------->| B->2  |------>|       |
960	                                |       +-------+       | CPU 2 |
961	                                |       | A->0  |------>|       |
962	                                |       +-------+       |       |
963	                                |       :       :       +-------+
964	                                 \      :       :
965	                                  \     +-------+
966	                                   ---->| A->1  |
967	                                        +-------+
968	                                        :       :
969
970
971If, however, a read barrier were to be placed between the load of B and the
972load of A on CPU 2:
973
974	CPU 1			CPU 2
975	=======================	=======================
976		{ A = 0, B = 9 }
977	STORE A=1
978	<write barrier>
979	STORE B=2
980				LOAD B
981				<read barrier>
982				LOAD A
983
984then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
9852:
986
987	+-------+       :      :                :       :
988	|       |       +------+                +-------+
989	|       |------>| A=1  |------      --->| A->0  |
990	|       |       +------+      \         +-------+
991	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
992	|       |       +------+        |       +-------+
993	|       |------>| B=2  |---     |       :       :
994	|       |       +------+   \    |       :       :       +-------+
995	+-------+       :      :    \   |       +-------+       |       |
996	                             ---------->| B->2  |------>|       |
997	                                |       +-------+       | CPU 2 |
998	                                |       :       :       |       |
999	                                |       :       :       |       |
1000	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1001	  barrier causes all effects      \     +-------+       |       |
1002	  prior to the storage of B        ---->| A->1  |------>|       |
1003	  to be perceptible to CPU 2            +-------+       |       |
1004	                                        :       :       +-------+
1005
1006
1007To illustrate this more completely, consider what could happen if the code
1008contained a load of A either side of the read barrier:
1009
1010	CPU 1			CPU 2
1011	=======================	=======================
1012		{ A = 0, B = 9 }
1013	STORE A=1
1014	<write barrier>
1015	STORE B=2
1016				LOAD B
1017				LOAD A [first load of A]
1018				<read barrier>
1019				LOAD A [second load of A]
1020
Even though the two loads of A both occur after the load of B, they may
come up with different values:
1023
1024	+-------+       :      :                :       :
1025	|       |       +------+                +-------+
1026	|       |------>| A=1  |------      --->| A->0  |
1027	|       |       +------+      \         +-------+
1028	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1029	|       |       +------+        |       +-------+
1030	|       |------>| B=2  |---     |       :       :
1031	|       |       +------+   \    |       :       :       +-------+
1032	+-------+       :      :    \   |       +-------+       |       |
1033	                             ---------->| B->2  |------>|       |
1034	                                |       +-------+       | CPU 2 |
1035	                                |       :       :       |       |
1036	                                |       :       :       |       |
1037	                                |       +-------+       |       |
1038	                                |       | A->0  |------>| 1st   |
1039	                                |       +-------+       |       |
1040	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1041	  barrier causes all effects      \     +-------+       |       |
1042	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
1043	  to be perceptible to CPU 2            +-------+       |       |
1044	                                        :       :       +-------+
1045
1046
1047But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1048before the read barrier completes anyway:
1049
1050	+-------+       :      :                :       :
1051	|       |       +------+                +-------+
1052	|       |------>| A=1  |------      --->| A->0  |
1053	|       |       +------+      \         +-------+
1054	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1055	|       |       +------+        |       +-------+
1056	|       |------>| B=2  |---     |       :       :
1057	|       |       +------+   \    |       :       :       +-------+
1058	+-------+       :      :    \   |       +-------+       |       |
1059	                             ---------->| B->2  |------>|       |
1060	                                |       +-------+       | CPU 2 |
1061	                                |       :       :       |       |
1062	                                 \      :       :       |       |
1063	                                  \     +-------+       |       |
1064	                                   ---->| A->1  |------>| 1st   |
1065	                                        +-------+       |       |
1066	                                    rrrrrrrrrrrrrrrrr   |       |
1067	                                        +-------+       |       |
1068	                                        | A->1  |------>| 2nd   |
1069	                                        +-------+       |       |
1070	                                        :       :       +-------+
1071
1072
1073The guarantee is that the second load will always come up with A == 1 if the
1074load of B came up with B == 2.  No such guarantee exists for the first load of
1075A; that may come up with either A == 0 or A == 1.
1076
1077
1078READ MEMORY BARRIERS VS LOAD SPECULATION
1079----------------------------------------
1080
Many CPUs speculate with loads: that is, they see that they will need to load
an item from memory, and they find a time when they're not using the bus for
any other loads, and so do the load in advance - even though they haven't actually
1084got to that point in the instruction execution flow yet.  This permits the
1085actual load instruction to potentially complete immediately because the CPU
1086already has the value to hand.
1087
1088It may turn out that the CPU didn't actually need the value - perhaps because a
1089branch circumvented the load - in which case it can discard the value or just
1090cache it for later use.
1091
1092Consider:
1093
1094	CPU 1			CPU 2
1095	=======================	=======================
1096				LOAD B
1097				DIVIDE		} Divide instructions generally
1098				DIVIDE		} take a long time to perform
1099				LOAD A
1100
1101Which might appear as this:
1102
1103	                                        :       :       +-------+
1104	                                        +-------+       |       |
1105	                                    --->| B->2  |------>|       |
1106	                                        +-------+       | CPU 2 |
1107	                                        :       :DIVIDE |       |
1108	                                        +-------+       |       |
1109	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1110	division speculates on the              +-------+   ~   |       |
1111	LOAD of A                               :       :   ~   |       |
1112	                                        :       :DIVIDE |       |
1113	                                        :       :   ~   |       |
1114	Once the divisions are complete -->     :       :   ~-->|       |
1115	the CPU can then perform the            :       :       |       |
1116	LOAD with immediate effect              :       :       +-------+
1117
1118
1119Placing a read barrier or a data dependency barrier just before the second
1120load:
1121
1122	CPU 1			CPU 2
1123	=======================	=======================
1124				LOAD B
1125				DIVIDE
1126				DIVIDE
1127				<read barrier>
1128				LOAD A
1129
1130will force any value speculatively obtained to be reconsidered to an extent
1131dependent on the type of barrier used.  If there was no change made to the
1132speculated memory location, then the speculated value will just be used:
1133
1134	                                        :       :       +-------+
1135	                                        +-------+       |       |
1136	                                    --->| B->2  |------>|       |
1137	                                        +-------+       | CPU 2 |
1138	                                        :       :DIVIDE |       |
1139	                                        +-------+       |       |
1140	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1141	division speculates on the              +-------+   ~   |       |
1142	LOAD of A                               :       :   ~   |       |
1143	                                        :       :DIVIDE |       |
1144	                                        :       :   ~   |       |
1145	                                        :       :   ~   |       |
1146	                                    rrrrrrrrrrrrrrrr~   |       |
1147	                                        :       :   ~   |       |
1148	                                        :       :   ~-->|       |
1149	                                        :       :       |       |
1150	                                        :       :       +-------+
1151
1152
1153but if there was an update or an invalidation from another CPU pending, then
1154the speculation will be cancelled and the value reloaded:
1155
1156	                                        :       :       +-------+
1157	                                        +-------+       |       |
1158	                                    --->| B->2  |------>|       |
1159	                                        +-------+       | CPU 2 |
1160	                                        :       :DIVIDE |       |
1161	                                        +-------+       |       |
1162	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1163	division speculates on the              +-------+   ~   |       |
1164	LOAD of A                               :       :   ~   |       |
1165	                                        :       :DIVIDE |       |
1166	                                        :       :   ~   |       |
1167	                                        :       :   ~   |       |
1168	                                    rrrrrrrrrrrrrrrrr   |       |
1169	                                        +-------+       |       |
1170	The speculation is discarded --->   --->| A->1  |------>|       |
1171	and an updated value is                 +-------+       |       |
1172	retrieved                               :       :       +-------+
1173
1174
1175TRANSITIVITY
1176------------
1177
1178Transitivity is a deeply intuitive notion about ordering that is not
1179always provided by real computer systems.  The following example
1180demonstrates transitivity (also called "cumulativity"):
1181
1182	CPU 1			CPU 2			CPU 3
1183	=======================	=======================	=======================
1184		{ X = 0, Y = 0 }
1185	STORE X=1		LOAD X			STORE Y=1
1186				<general barrier>	<general barrier>
1187				LOAD Y			LOAD X
1188
1189Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1190This indicates that CPU 2's load from X in some sense follows CPU 1's
1191store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1192store to Y.  The question is then "Can CPU 3's load from X return 0?"
1193
1194Because CPU 2's load from X in some sense came after CPU 1's store, it
1195is natural to expect that CPU 3's load from X must therefore return 1.
1196This expectation is an example of transitivity: if a load executing on
1197CPU A follows a load from the same variable executing on CPU B, then
1198CPU A's load must either return the same value that CPU B's load did,
1199or must return some later value.
1200
1201In the Linux kernel, use of general memory barriers guarantees
1202transitivity.  Therefore, in the above example, if CPU 2's load from X
1203returns 1 and its load from Y returns 0, then CPU 3's load from X must
1204also return 1.
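
As a sketch, with the general barriers written out as smp_mb() (r1, r2
and r3 being per-CPU result variables):

	/* CPU 1 */
	ACCESS_ONCE(X) = 1;

	/* CPU 2 */
	r1 = ACCESS_ONCE(X);
	smp_mb();
	r2 = ACCESS_ONCE(Y);

	/* CPU 3 */
	ACCESS_ONCE(Y) = 1;
	smp_mb();
	r3 = ACCESS_ONCE(X);

Here (r1 == 1 && r2 == 0) implies r3 == 1, as described above.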
1205
1206However, transitivity is -not- guaranteed for read or write barriers.
1207For example, suppose that CPU 2's general barrier in the above example
1208is changed to a read barrier as shown below:
1209
1210	CPU 1			CPU 2			CPU 3
1211	=======================	=======================	=======================
1212		{ X = 0, Y = 0 }
1213	STORE X=1		LOAD X			STORE Y=1
1214				<read barrier>		<general barrier>
1215				LOAD Y			LOAD X
1216
1217This substitution destroys transitivity: in this example, it is perfectly
1218legal for CPU 2's load from X to return 1, its load from Y to return 0,
1219and CPU 3's load from X to return 0.
1220
1221The key point is that although CPU 2's read barrier orders its pair
1222of loads, it does not guarantee to order CPU 1's store.  Therefore, if
1223this example runs on a system where CPUs 1 and 2 share a store buffer
1224or a level of cache, CPU 2 might have early access to CPU 1's writes.
1225General barriers are therefore required to ensure that all CPUs agree
1226on the combined order of CPU 1's and CPU 2's accesses.
1227
1228To reiterate, if your code requires transitivity, use general barriers
1229throughout.
1230
1231
1232========================
1233EXPLICIT KERNEL BARRIERS
1234========================
1235
1236The Linux kernel has a variety of different barriers that act at different
1237levels:
1238
1239  (*) Compiler barrier.
1240
1241  (*) CPU memory barriers.
1242
1243  (*) MMIO write barrier.
1244
1245
1246COMPILER BARRIER
1247----------------
1248
1249The Linux kernel has an explicit compiler barrier function that prevents the
1250compiler from moving the memory accesses either side of it to the other side:
1251
1252	barrier();
1253
This is a general barrier -- there are no read-read or write-write variants
of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
of barrier() that affects only the specific accesses flagged by the
ACCESS_ONCE().
1258
1259The barrier() function has the following effects:
1260
1261 (*) Prevents the compiler from reordering accesses following the
1262     barrier() to precede any accesses preceding the barrier().
1263     One example use for this property is to ease communication between
1264     interrupt-handler code and the code that was interrupted.
1265
1266 (*) Within a loop, forces the compiler to load the variables used
1267     in that loop's conditional on each pass through that loop.
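
As a minimal sketch of the second property (assuming a shared variable
'stop' that some other CPU will eventually set):

	while (!stop)
		barrier();	/* forces 'stop' to be re-loaded from
				 * memory on each pass; without it the
				 * compiler could hoist the load out of
				 * the loop */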
1268
1269The ACCESS_ONCE() function can prevent any number of optimizations that,
1270while perfectly safe in single-threaded code, can be fatal in concurrent
1271code.  Here are some examples of these sorts of optimizations:
1272
1273 (*) The compiler is within its rights to reorder loads and stores
1274     to the same variable, and in some cases, the CPU is within its
1275     rights to reorder loads to the same variable.  This means that
1276     the following code:
1277
1278	a[0] = x;
1279	a[1] = x;
1280
1281     Might result in an older value of x stored in a[1] than in a[0].
1282     Prevent both the compiler and the CPU from doing this as follows:
1283
1284	a[0] = ACCESS_ONCE(x);
1285	a[1] = ACCESS_ONCE(x);
1286
1287     In short, ACCESS_ONCE() provides cache coherence for accesses from
1288     multiple CPUs to a single variable.
1289
1290 (*) The compiler is within its rights to merge successive loads from
1291     the same variable.  Such merging can cause the compiler to "optimize"
1292     the following code:
1293
1294	while (tmp = a)
1295		do_something_with(tmp);
1296
1297     into the following code, which, although in some sense legitimate
1298     for single-threaded code, is almost certainly not what the developer
1299     intended:
1300
1301	if (tmp = a)
1302		for (;;)
1303			do_something_with(tmp);
1304
1305     Use ACCESS_ONCE() to prevent the compiler from doing this to you:
1306
1307	while (tmp = ACCESS_ONCE(a))
1308		do_something_with(tmp);
1309
1310 (*) The compiler is within its rights to reload a variable, for example,
1311     in cases where high register pressure prevents the compiler from
1312     keeping all data of interest in registers.  The compiler might
1313     therefore optimize the variable 'tmp' out of our previous example:
1314
1315	while (tmp = a)
1316		do_something_with(tmp);
1317
1318     This could result in the following code, which is perfectly safe in
1319     single-threaded code, but can be fatal in concurrent code:
1320
1321	while (a)
1322		do_something_with(a);
1323
1324     For example, the optimized version of this code could result in
1325     passing a zero to do_something_with() in the case where the variable
1326     a was modified by some other CPU between the "while" statement and
1327     the call to do_something_with().
1328
1329     Again, use ACCESS_ONCE() to prevent the compiler from doing this:
1330
1331	while (tmp = ACCESS_ONCE(a))
1332		do_something_with(tmp);
1333
1334     Note that if the compiler runs short of registers, it might save
1335     tmp onto the stack.  The overhead of this saving and later restoring
1336     is why compilers reload variables.  Doing so is perfectly safe for
1337     single-threaded code, so you need to tell the compiler about cases
1338     where it is not safe.
1339
1340 (*) The compiler is within its rights to omit a load entirely if it knows
1341     what the value will be.  For example, if the compiler can prove that
1342     the value of variable 'a' is always zero, it can optimize this code:
1343
1344	while (tmp = a)
1345		do_something_with(tmp);
1346
1347     Into this:
1348
1349	do { } while (0);
1350
1351     This transformation is a win for single-threaded code because it gets
1352     rid of a load and a branch.  The problem is that the compiler will
1353     carry out its proof assuming that the current CPU is the only one
1354     updating variable 'a'.  If variable 'a' is shared, then the compiler's
1355     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
1356     that it doesn't know as much as it thinks it does:
1357
1358	while (tmp = ACCESS_ONCE(a))
1359		do_something_with(tmp);
1360
1361     But please note that the compiler is also closely watching what you
1362     do with the value after the ACCESS_ONCE().  For example, suppose you
1363     do the following and MAX is a preprocessor macro with the value 1:
1364
1365	while ((tmp = ACCESS_ONCE(a)) % MAX)
1366		do_something_with(tmp);
1367
1368     Then the compiler knows that the result of the "%" operator applied
1369     to MAX will always be zero, again allowing the compiler to optimize
1370     the code into near-nonexistence.  (It will still load from the
1371     variable 'a'.)
1372
1373 (*) Similarly, the compiler is within its rights to omit a store entirely
1374     if it knows that the variable already has the value being stored.
1375     Again, the compiler assumes that the current CPU is the only one
1376     storing into the variable, which can cause the compiler to do the
1377     wrong thing for shared variables.  For example, suppose you have
1378     the following:
1379
1380	a = 0;
1381	/* Code that does not store to variable a. */
1382	a = 0;
1383
1384     The compiler sees that the value of variable 'a' is already zero, so
1385     it might well omit the second store.  This would come as a fatal
1386     surprise if some other CPU might have stored to variable 'a' in the
1387     meantime.
1388
1389     Use ACCESS_ONCE() to prevent the compiler from making this sort of
1390     wrong guess:
1391
1392	ACCESS_ONCE(a) = 0;
1393	/* Code that does not store to variable a. */
1394	ACCESS_ONCE(a) = 0;
1395
1396 (*) The compiler is within its rights to reorder memory accesses unless
1397     you tell it not to.  For example, consider the following interaction
1398     between process-level code and an interrupt handler:
1399
1400	void process_level(void)
1401	{
1402		msg = get_message();
1403		flag = true;
1404	}
1405
1406	void interrupt_handler(void)
1407	{
1408		if (flag)
1409			process_message(msg);
1410	}
1411
     There is nothing to prevent the compiler from transforming
     process_level() to the following; in fact, this might well be a
     win for single-threaded code:
1415
1416	void process_level(void)
1417	{
1418		flag = true;
1419		msg = get_message();
1420	}
1421
     If the interrupt occurs between these two statements, then
     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
1424     to prevent this as follows:
1425
1426	void process_level(void)
1427	{
1428		ACCESS_ONCE(msg) = get_message();
1429		ACCESS_ONCE(flag) = true;
1430	}
1431
1432	void interrupt_handler(void)
1433	{
1434		if (ACCESS_ONCE(flag))
1435			process_message(ACCESS_ONCE(msg));
1436	}
1437
1438     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
1439     are needed if this interrupt handler can itself be interrupted
1440     by something that also accesses 'flag' and 'msg', for example,
1441     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
1442     needed in interrupt_handler() other than for documentation purposes.
     (Note also that nested interrupts do not typically occur in modern
     Linux kernels; in fact, if an interrupt handler returns with
     interrupts enabled, you will get a WARN_ONCE() splat.)
1446
1447     You should assume that the compiler can move ACCESS_ONCE() past
1448     code not containing ACCESS_ONCE(), barrier(), or similar primitives.
1449
1450     This effect could also be achieved using barrier(), but ACCESS_ONCE()
1451     is more selective:  With ACCESS_ONCE(), the compiler need only forget
1452     the contents of the indicated memory locations, while with barrier()
     the compiler must discard the value of all memory locations that
     it has currently cached in any machine registers.  Of course,
1455     the compiler must also respect the order in which the ACCESS_ONCE()s
1456     occur, though the CPU of course need not do so.
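
     For example, a hedged sketch of the barrier()-based alternative to
     the earlier process_level() example:

	void process_level(void)
	{
		msg = get_message();
		barrier();	/* the compiler may move neither access
				 * across this point; the CPU still might */
		flag = true;
	}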
1457
1458 (*) The compiler is within its rights to invent stores to a variable,
1459     as in the following example:
1460
1461	if (a)
1462		b = a;
1463	else
1464		b = 42;
1465
1466     The compiler might save a branch by optimizing this as follows:
1467
1468	b = 42;
1469	if (a)
1470		b = a;
1471
1472     In single-threaded code, this is not only safe, but also saves
1473     a branch.  Unfortunately, in concurrent code, this optimization
1474     could cause some other CPU to see a spurious value of 42 -- even
1475     if variable 'a' was never zero -- when loading variable 'b'.
1476     Use ACCESS_ONCE() to prevent this as follows:
1477
1478	if (a)
1479		ACCESS_ONCE(b) = a;
1480	else
1481		ACCESS_ONCE(b) = 42;
1482
1483     The compiler can also invent loads.  These are usually less
1484     damaging, but they can result in cache-line bouncing and thus in
1485     poor performance and scalability.  Use ACCESS_ONCE() to prevent
1486     invented loads.
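
     For example (a hedged sketch, using 'a' and 'tmp' as before), the
     compiler might transform:

	tmp = a;
	if (tmp > 0 && tmp < 10)
		do_something_with(tmp);

     into the following, which loads from 'a' three times:

	if (a > 0 && a < 10)
		do_something_with(a);

     If some other CPU updates 'a' concurrently, the range check might
     pass with one value while do_something_with() receives another.
     Writing "tmp = ACCESS_ONCE(a);" forces a single load.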
1487
 (*) For aligned memory locations whose size allows them to be accessed
     with a single memory-reference instruction, ACCESS_ONCE() prevents
     "load tearing" and "store tearing," in which a single large access is
     replaced by multiple smaller accesses.  For example, given an
     architecture having
1492     16-bit store instructions with 7-bit immediate fields, the compiler
1493     might be tempted to use two 16-bit store-immediate instructions to
1494     implement the following 32-bit store:
1495
1496	p = 0x00010002;
1497
1498     Please note that GCC really does use this sort of optimization,
1499     which is not surprising given that it would likely take more
1500     than two instructions to build the constant and then store it.
1501     This optimization can therefore be a win in single-threaded code.
1502     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1503     this optimization in a volatile store.  In the absence of such bugs,
1504     use of ACCESS_ONCE() prevents store tearing in the following example:
1505
1506	ACCESS_ONCE(p) = 0x00010002;
1507
1508     Use of packed structures can also result in load and store tearing,
1509     as in this example:
1510
1511	struct __attribute__((__packed__)) foo {
1512		short a;
1513		int b;
1514		short c;
1515	};
1516	struct foo foo1, foo2;
1517	...
1518
1519	foo2.a = foo1.a;
1520	foo2.b = foo1.b;
1521	foo2.c = foo1.c;
1522
1523     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
1524     the compiler would be well within its rights to implement these three
1525     assignment statements as a pair of 32-bit loads followed by a pair
1526     of 32-bit stores.  This would result in load tearing on 'foo1.b'
1527     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
1528     in this example:
1529
1530	foo2.a = foo1.a;
1531	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
1532	foo2.c = foo1.c;
1533
1534All that aside, it is never necessary to use ACCESS_ONCE() on a variable
1535that has been marked volatile.  For example, because 'jiffies' is marked
1536volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
1537for this is that ACCESS_ONCE() is implemented as a volatile cast, which
1538has no effect when its argument is already marked volatile.
1539
1540Please note that these compiler barriers have no direct effect on the CPU,
1541which may then reorder things however it wishes.
1542
1543
1544CPU MEMORY BARRIERS
1545-------------------
1546
1547The Linux kernel has eight basic CPU memory barriers:
1548
1549	TYPE		MANDATORY		SMP CONDITIONAL
1550	===============	=======================	===========================
1551	GENERAL		mb()			smp_mb()
1552	WRITE		wmb()			smp_wmb()
1553	READ		rmb()			smp_rmb()
1554	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
1555
1556
1557All memory barriers except the data dependency barriers imply a compiler
1558barrier. Data dependencies do not impose any additional compiler ordering.
1559
Aside: In the case of data dependencies, the compiler would be expected to
issue the loads in the correct order (eg. `a[b]` would have to load the value
of b before loading a[b]); however, the C specification offers no guarantee
that the compiler will not speculate the value of b (eg. guess that it is
equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).
There is also the problem of the compiler reloading b after having loaded
a[b], thus ending up with a newer copy of b than of a[b].  A consensus has
not yet been reached about these problems; however, the ACCESS_ONCE() macro
is a good place to start looking.
1568
1569SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1570systems because it is assumed that a CPU will appear to be self-consistent,
1571and will order overlapping accesses correctly with respect to itself.
1572
1573[!] Note that SMP memory barriers _must_ be used to control the ordering of
1574references to shared memory on SMP systems, though the use of locking instead
1575is sufficient.
1576
1577Mandatory barriers should not be used to control SMP effects, since mandatory
1578barriers unnecessarily impose overhead on UP systems. They may, however, be
1579used to control MMIO effects on accesses through relaxed memory I/O windows.
1580These are required even on non-SMP systems as they affect the order in which
1581memory operations appear to a device by prohibiting both the compiler and the
1582CPU from reordering them.
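
For example (a hedged sketch; DESC_LO, DESC_HI and DOORBELL are hypothetical
registers of a device mapped through a relaxed I/O window):

	iowrite32(desc_lo, base + DESC_LO);
	iowrite32(desc_hi, base + DESC_HI);
	wmb();		/* mandatory barrier: descriptor must reach the
			 * device before the doorbell write */
	iowrite32(1, base + DOORBELL);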
1583
1584
1585There are some more advanced barrier functions:
1586
1587 (*) set_mb(var, value)
1588
     This assigns the value to the variable and then inserts a full memory
     barrier after it.  It isn't guaranteed to insert anything more than a
     compiler barrier in a UP compilation.
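
     For example (a hedged sketch), the sleep paths discussed later use
     this form to publish the task state before testing the event:

	set_mb(current->state, TASK_UNINTERRUPTIBLE);
	if (!event_indicated)
		schedule();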
1592
1593
1594 (*) smp_mb__before_atomic();
1595 (*) smp_mb__after_atomic();
1596
1597     These are for use with atomic (such as add, subtract, increment and
1598     decrement) functions that don't return a value, especially when used for
1599     reference counting.  These functions do not imply memory barriers.
1600
1601     These are also used for atomic bitop functions that do not return a
1602     value (such as set_bit and clear_bit).
1603
1604     As an example, consider a piece of code that marks an object as being dead
1605     and then decrements the object's reference count:
1606
1607	obj->dead = 1;
1608	smp_mb__before_atomic();
1609	atomic_dec(&obj->ref_count);
1610
1611     This makes sure that the death mark on the object is perceived to be set
1612     *before* the reference counter is decremented.
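
     Similarly (a hedged sketch; 'obj->flags' and IN_PROGRESS are
     hypothetical), smp_mb__after_atomic() can be used when the atomic
     operation must be perceived before subsequent accesses, such as
     before waking up a waiter sleeping on a bit:

	clear_bit(IN_PROGRESS, &obj->flags);
	smp_mb__after_atomic();
	wake_up_bit(&obj->flags, IN_PROGRESS);

     This makes sure the bit is perceived to be clear *before* the wakeup
     checks for waiters.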
1613
1614     See Documentation/atomic_ops.txt for more information.  See the "Atomic
1615     operations" subsection for information on where to use these.
1616
1617
1618MMIO WRITE BARRIER
1619------------------
1620
1621The Linux kernel also has a special barrier for use with memory-mapped I/O
1622writes:
1623
1624	mmiowb();
1625
1626This is a variation on the mandatory write barrier that causes writes to weakly
1627ordered I/O regions to be partially ordered.  Its effects may go beyond the
1628CPU->Hardware interface and actually affect the hardware at some level.
1629
1630See the subsection "Locks vs I/O accesses" for more information.
1631
1632
1633===============================
1634IMPLICIT KERNEL MEMORY BARRIERS
1635===============================
1636
Some of the other functions in the Linux kernel imply memory barriers, amongst
1638which are locking and scheduling functions.
1639
1640This specification is a _minimum_ guarantee; any particular architecture may
1641provide more substantial guarantees, but these may not be relied upon outside
1642of arch specific code.
1643
1644
1645ACQUIRING FUNCTIONS
1646-------------------
1647
1648The Linux kernel has a number of locking constructs:
1649
1650 (*) spin locks
1651 (*) R/W spin locks
1652 (*) mutexes
1653 (*) semaphores
1654 (*) R/W semaphores
1655 (*) RCU
1656
1657In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1658for each construct.  These operations all imply certain barriers:
1659
1660 (1) ACQUIRE operation implication:
1661
1662     Memory operations issued after the ACQUIRE will be completed after the
1663     ACQUIRE operation has completed.
1664
1665     Memory operations issued before the ACQUIRE may be completed after
1666     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
     combined with a following ACQUIRE, orders prior loads against
     subsequent loads and stores and also orders prior stores against
     subsequent stores (see the sketch following this list).  Note that
     this is weaker than smp_mb()!  The smp_mb__before_spinlock()
     primitive is free on many architectures.
1671
1672 (2) RELEASE operation implication:
1673
1674     Memory operations issued before the RELEASE will be completed before the
1675     RELEASE operation has completed.
1676
1677     Memory operations issued after the RELEASE may be completed before the
1678     RELEASE operation has completed.
1679
1680 (3) ACQUIRE vs ACQUIRE implication:
1681
1682     All ACQUIRE operations issued before another ACQUIRE operation will be
1683     completed before that ACQUIRE operation.
1684
1685 (4) ACQUIRE vs RELEASE implication:
1686
1687     All ACQUIRE operations issued before a RELEASE operation will be
1688     completed before the RELEASE operation.
1689
1690 (5) Failed conditional ACQUIRE implication:
1691
1692     Certain locking variants of the ACQUIRE operation may fail, either due to
1693     being unable to get the lock immediately, or due to receiving an unblocked
1694     signal whilst asleep waiting for the lock to become available.  Failed
1695     locks do not imply any sort of barrier.
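
As a hedged sketch of the smp_mb__before_spinlock() case in (1) above
('a', 'b' and 'mylock' are hypothetical):

	ACCESS_ONCE(a) = 1;		/* prior store */
	smp_mb__before_spinlock();
	spin_lock(&mylock);		/* ACQUIRE */
	ACCESS_ONCE(b) = 1;		/* ordered after the store to 'a' */
	spin_unlock(&mylock);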
1696
1697[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
1698one-way barriers is that the effects of instructions outside of a critical
1699section may seep into the inside of the critical section.
1700
An ACQUIRE followed by a RELEASE may not be assumed to be a full memory
barrier because it is possible for an access preceding the ACQUIRE to
happen after the ACQUIRE, and an access following the RELEASE to happen
before the RELEASE, and the two accesses can themselves then cross:
1705
1706	*A = a;
1707	ACQUIRE M
1708	RELEASE M
1709	*B = b;
1710
1711may occur as:
1712
1713	ACQUIRE M, STORE *B, STORE *A, RELEASE M
1714
1715When the ACQUIRE and RELEASE are a lock acquisition and release,
1716respectively, this same reordering can occur if the lock's ACQUIRE and
1717RELEASE are to the same lock variable, but only from the perspective of
another CPU not holding that lock.  In short, an ACQUIRE followed by a
RELEASE may -not- be assumed to be a full memory barrier.
1720
1721Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
1722imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
1723pair to produce a full barrier, the ACQUIRE can be followed by an
1724smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
1725if either (a) the RELEASE and the ACQUIRE are executed by the same
1726CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable.
1727The smp_mb__after_unlock_lock() primitive is free on many architectures.
1728Without smp_mb__after_unlock_lock(), the CPU's execution of the critical
1729sections corresponding to the RELEASE and the ACQUIRE can cross, so that:
1730
1731	*A = a;
1732	RELEASE M
1733	ACQUIRE N
1734	*B = b;
1735
1736could occur as:
1737
1738	ACQUIRE N, STORE *B, STORE *A, RELEASE M
1739
1740It might appear that this reordering could introduce a deadlock.
1741However, this cannot happen because if such a deadlock threatened,
1742the RELEASE would simply complete, thereby avoiding the deadlock.
1743
1744	Why does this work?
1745
1746	One key point is that we are only talking about the CPU doing
1747	the reordering, not the compiler.  If the compiler (or, for
1748	that matter, the developer) switched the operations, deadlock
1749	-could- occur.
1750
1751	But suppose the CPU reordered the operations.  In this case,
1752	the unlock precedes the lock in the assembly code.  The CPU
1753	simply elected to try executing the later lock operation first.
1754	If there is a deadlock, this lock operation will simply spin (or
1755	try to sleep, but more on that later).	The CPU will eventually
1756	execute the unlock operation (which preceded the lock operation
1757	in the assembly code), which will unravel the potential deadlock,
1758	allowing the lock operation to succeed.
1759
1760	But what if the lock is a sleeplock?  In that case, the code will
1761	try to enter the scheduler, where it will eventually encounter
1762	a memory barrier, which will force the earlier unlock operation
1763	to complete, again unraveling the deadlock.  There might be
1764	a sleep-unlock race, but the locking primitive needs to resolve
1765	such races properly in any case.
1766
1767With smp_mb__after_unlock_lock(), the two critical sections cannot overlap.
1768For example, with the following code, the store to *A will always be
1769seen by other CPUs before the store to *B:
1770
1771	*A = a;
1772	RELEASE M
1773	ACQUIRE N
1774	smp_mb__after_unlock_lock();
1775	*B = b;
1776
1777The operations will always occur in one of the following orders:
1778
1779	STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B
1780	STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1781	ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1782
1783If the RELEASE and ACQUIRE were instead both operating on the same lock
1784variable, only the first of these alternatives can occur.  In addition,
1785the more strongly ordered systems may rule out some of the above orders.
1786But in any case, as noted earlier, the smp_mb__after_unlock_lock()
1787ensures that the store to *A will always be seen as happening before
1788the store to *B.
1789
1790Locks and semaphores may not provide any guarantee of ordering on UP compiled
1791systems, and so cannot be counted on in such a situation to actually achieve
1792anything at all - especially with respect to I/O accesses - unless combined
1793with interrupt disabling operations.
1794
1795See also the section on "Inter-CPU locking barrier effects".
1796
1797
1798As an example, consider the following:
1799
1800	*A = a;
1801	*B = b;
1802	ACQUIRE
1803	*C = c;
1804	*D = d;
1805	RELEASE
1806	*E = e;
1807	*F = f;
1808
1809The following sequence of events is acceptable:
1810
1811	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
1812
1813	[+] Note that {*F,*A} indicates a combined access.
1814
1815But none of the following are:
1816
1817	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
1818	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
1819	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
1820	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
1821
1822
1823
1824INTERRUPT DISABLING FUNCTIONS
1825-----------------------------
1826
1827Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
1828(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
barriers are required in such a situation, they must be provided by some
other means.
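
For example (a hedged sketch; 'shared_data' and 'shared_flag' are
hypothetical variables observed by another CPU):

	local_irq_save(flags);
	ACCESS_ONCE(shared_data) = 1;
	smp_wmb();	/* interrupt disabling implies no memory barrier,
			 * so one must be supplied explicitly */
	ACCESS_ONCE(shared_flag) = 1;
	local_irq_restore(flags);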
1831
1832
1833SLEEP AND WAKE-UP FUNCTIONS
1834---------------------------
1835
1836Sleeping and waking on an event flagged in global data can be viewed as an
1837interaction between two pieces of data: the task state of the task waiting for
1838the event and the global data used to indicate the event.  To make sure that
1839these appear to happen in the right order, the primitives to begin the process
1840of going to sleep, and the primitives to initiate a wake up imply certain
1841barriers.
1842
1843Firstly, the sleeper normally follows something like this sequence of events:
1844
1845	for (;;) {
1846		set_current_state(TASK_UNINTERRUPTIBLE);
1847		if (event_indicated)
1848			break;
1849		schedule();
1850	}
1851
1852A general memory barrier is interpolated automatically by set_current_state()
1853after it has altered the task state:
1854
1855	CPU 1
1856	===============================
1857	set_current_state();
1858	  set_mb();
1859	    STORE current->state
1860	    <general barrier>
1861	LOAD event_indicated
1862
1863set_current_state() may be wrapped by:
1864
1865	prepare_to_wait();
1866	prepare_to_wait_exclusive();
1867
1868which therefore also imply a general memory barrier after setting the state.
1869The whole sequence above is available in various canned forms, all of which
1870interpolate the memory barrier in the right place:
1871
1872	wait_event();
1873	wait_event_interruptible();
1874	wait_event_interruptible_exclusive();
1875	wait_event_interruptible_timeout();
1876	wait_event_killable();
1877	wait_event_timeout();
1878	wait_on_bit();
1879	wait_on_bit_lock();
1880
1881
1882Secondly, code that performs a wake up normally follows something like this:
1883
1884	event_indicated = 1;
1885	wake_up(&event_wait_queue);
1886
1887or:
1888
1889	event_indicated = 1;
1890	wake_up_process(event_daemon);
1891
1892A write memory barrier is implied by wake_up() and co. if and only if they wake
1893something up.  The barrier occurs before the task state is cleared, and so sits
1894between the STORE to indicate the event and the STORE to set TASK_RUNNING:
1895
1896	CPU 1				CPU 2
1897	===============================	===============================
1898	set_current_state();		STORE event_indicated
1899	  set_mb();			wake_up();
1900	    STORE current->state	  <write barrier>
1901	    <general barrier>		  STORE current->state
1902	LOAD event_indicated
1903
1904To repeat, this write memory barrier is present if and only if something
1905is actually awakened.  To see this, consider the following sequence of
1906events, where X and Y are both initially zero:
1907
1908	CPU 1				CPU 2
1909	===============================	===============================
1910	X = 1;				STORE event_indicated
1911	smp_mb();			wake_up();
1912	Y = 1;				wait_event(wq, Y == 1);
1913	wake_up();			  load from Y sees 1, no memory barrier
1914					load from X might see 0
1915
1916In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
1917to see 1.
1918
1919The available waker functions include:
1920
1921	complete();
1922	wake_up();
1923	wake_up_all();
1924	wake_up_bit();
1925	wake_up_interruptible();
1926	wake_up_interruptible_all();
1927	wake_up_interruptible_nr();
1928	wake_up_interruptible_poll();
1929	wake_up_interruptible_sync();
1930	wake_up_interruptible_sync_poll();
1931	wake_up_locked();
1932	wake_up_locked_poll();
1933	wake_up_nr();
1934	wake_up_poll();
1935	wake_up_process();
1936
1937
1938[!] Note that the memory barriers implied by the sleeper and the waker do _not_
1939order multiple stores before the wake-up with respect to loads of those stored
1940values after the sleeper has called set_current_state().  For instance, if the
1941sleeper does:
1942
1943	set_current_state(TASK_INTERRUPTIBLE);
1944	if (event_indicated)
1945		break;
1946	__set_current_state(TASK_RUNNING);
1947	do_something(my_data);
1948
1949and the waker does:
1950
1951	my_data = value;
1952	event_indicated = 1;
1953	wake_up(&event_wait_queue);
1954
1955there's no guarantee that the change to event_indicated will be perceived by
1956the sleeper as coming after the change to my_data.  In such a circumstance, the
1957code on both sides must interpolate its own memory barriers between the
1958separate data accesses.  Thus the above sleeper ought to do:
1959
1960	set_current_state(TASK_INTERRUPTIBLE);
1961	if (event_indicated) {
1962		smp_rmb();
1963		do_something(my_data);
1964	}
1965
1966and the waker should do:
1967
1968	my_data = value;
1969	smp_wmb();
1970	event_indicated = 1;
1971	wake_up(&event_wait_queue);
1972
1973
1974MISCELLANEOUS FUNCTIONS
1975-----------------------
1976
1977Other functions that imply barriers:
1978
1979 (*) schedule() and similar imply full memory barriers.
1980
1981
1982===================================
1983INTER-CPU ACQUIRING BARRIER EFFECTS
1984===================================
1985
1986On SMP systems locking primitives give a more substantial form of barrier: one
1987that does affect memory access ordering on other CPUs, within the context of
1988conflict on any particular lock.
1989
1990
1991ACQUIRES VS MEMORY ACCESSES
1992---------------------------
1993
1994Consider the following: the system has a pair of spinlocks (M) and (Q), and
1995three CPUs; then should the following sequence of events occur:
1996
1997	CPU 1				CPU 2
1998	===============================	===============================
1999	ACCESS_ONCE(*A) = a;		ACCESS_ONCE(*E) = e;
2000	ACQUIRE M			ACQUIRE Q
2001	ACCESS_ONCE(*B) = b;		ACCESS_ONCE(*F) = f;
2002	ACCESS_ONCE(*C) = c;		ACCESS_ONCE(*G) = g;
2003	RELEASE M			RELEASE Q
2004	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*H) = h;
2005
2006Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2007through *H occur in, other than the constraints imposed by the separate locks
2008on the separate CPUs. It might, for example, see:
2009
2010	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2011
2012But it won't see any of:
2013
2014	*B, *C or *D preceding ACQUIRE M
2015	*A, *B or *C following RELEASE M
2016	*F, *G or *H preceding ACQUIRE Q
2017	*E, *F or *G following RELEASE Q
2018
2019
2020However, if the following occurs:
2021
2022	CPU 1				CPU 2
2023	===============================	===============================
2024	ACCESS_ONCE(*A) = a;
2025	ACQUIRE M		     [1]
2026	ACCESS_ONCE(*B) = b;
2027	ACCESS_ONCE(*C) = c;
2028	RELEASE M	     [1]
2029	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
2030					ACQUIRE M		     [2]
2031					smp_mb__after_unlock_lock();
2032					ACCESS_ONCE(*F) = f;
2033					ACCESS_ONCE(*G) = g;
2034					RELEASE M	     [2]
2035					ACCESS_ONCE(*H) = h;
2036
2037CPU 3 might see:
2038
2039	*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
2040		ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D
2041
2042But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
2043
2044	*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
2045	*A, *B or *C following RELEASE M [1]
2046	*F, *G or *H preceding ACQUIRE M [2]
2047	*A, *B, *C, *E, *F or *G following RELEASE M [2]
2048
Note that the smp_mb__after_unlock_lock() is critically important here:
without it, CPU 3 might see some of the above orderings, and the accesses
are not guaranteed to be seen in order unless CPU 3 holds lock M.
2053
2054
2055ACQUIRES VS I/O ACCESSES
2056------------------------
2057
2058Under certain circumstances (especially involving NUMA), I/O accesses within
2059two spinlocked sections on two different CPUs may be seen as interleaved by the
2060PCI bridge, because the PCI bridge does not necessarily participate in the
2061cache-coherence protocol, and is therefore incapable of issuing the required
2062read memory barriers.
2063
2064For example:
2065
2066	CPU 1				CPU 2
2067	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2070	writel(1, DATA);
2071	spin_unlock(Q);
2072					spin_lock(Q);
2073					writel(4, ADDR);
2074					writel(5, DATA);
2075					spin_unlock(Q);
2076
2077may be seen by the PCI bridge as follows:
2078
2079	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2080
2081which would probably cause the hardware to malfunction.
2082
2083
2084What is necessary here is to intervene with an mmiowb() before dropping the
2085spinlock, for example:
2086
2087	CPU 1				CPU 2
2088	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2091	writel(1, DATA);
2092	mmiowb();
2093	spin_unlock(Q);
2094					spin_lock(Q);
2095					writel(4, ADDR);
2096					writel(5, DATA);
2097					mmiowb();
2098					spin_unlock(Q);
2099
2100this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2101before either of the stores issued on CPU 2.
2102
2103
2104Furthermore, following a store by a load from the same device obviates the need
2105for the mmiowb(), because the load forces the store to complete before the load
2106is performed:
2107
2108	CPU 1				CPU 2
2109	===============================	===============================
	spin_lock(Q);
	writel(0, ADDR);
2112	a = readl(DATA);
2113	spin_unlock(Q);
2114					spin_lock(Q);
2115					writel(4, ADDR);
2116					b = readl(DATA);
2117					spin_unlock(Q);
2118
2119
2120See Documentation/DocBook/deviceiobook.tmpl for more information.
2121
2122
2123=================================
2124WHERE ARE MEMORY BARRIERS NEEDED?
2125=================================
2126
2127Under normal operation, memory operation reordering is generally not going to
2128be a problem as a single-threaded linear piece of code will still appear to
2129work correctly, even if it's in an SMP kernel.  There are, however, four
2130circumstances in which reordering definitely _could_ be a problem:
2131
2132 (*) Interprocessor interaction.
2133
2134 (*) Atomic operations.
2135
2136 (*) Accessing devices.
2137
2138 (*) Interrupts.
2139
2140
2141INTERPROCESSOR INTERACTION
2142--------------------------
2143
2144When there's a system with more than one processor, more than one CPU in the
2145system may be working on the same data set at the same time.  This can cause
2146synchronisation problems, and the usual way of dealing with them is to use
2147locks.  Locks, however, are quite expensive, and so it may be preferable to
2148operate without the use of a lock if at all possible.  In such a case
2149operations that affect both CPUs may have to be carefully ordered to prevent
2150a malfunction.
2151
2152Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2153queued on the semaphore, by virtue of it having a piece of its stack linked to
2154the semaphore's list of waiting processes:
2155
2156	struct rw_semaphore {
2157		...
2158		spinlock_t lock;
2159		struct list_head waiters;
2160	};
2161
2162	struct rwsem_waiter {
2163		struct list_head list;
2164		struct task_struct *task;
2165	};
2166
2167To wake up a particular waiter, the up_read() or up_write() functions have to:
2168
 (1) read the next pointer from this waiter's record to know where the next
     waiter record is;
2171
2172 (2) read the pointer to the waiter's task structure;
2173
2174 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2175
2176 (4) call wake_up_process() on the task; and
2177
2178 (5) release the reference held on the waiter's task struct.
2179
2180In other words, it has to perform this sequence of events:
2181
2182	LOAD waiter->list.next;
2183	LOAD waiter->task;
2184	STORE waiter->task;
2185	CALL wakeup
2186	RELEASE task
2187
2188and if any of these steps occur out of order, then the whole thing may
2189malfunction.
2190
2191Once it has queued itself and dropped the semaphore lock, the waiter does not
2192get the lock again; it instead just waits for its task pointer to be cleared
2193before proceeding.  Since the record is on the waiter's stack, this means that
2194if the task pointer is cleared _before_ the next pointer in the list is read,
2195another CPU might start processing the waiter and might clobber the waiter's
2196stack before the up*() function has a chance to read the next pointer.
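
The waiter side therefore looks something like the following simplified
sketch (not the actual rwsem code):

	/* Queued on the semaphore's list; semaphore lock now dropped. */
	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (!ACCESS_ONCE(waiter->task))	/* cleared by up_*() */
			break;
		schedule();
	}
	__set_current_state(TASK_RUNNING);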
2197
2198Consider then what might happen to the above sequence of events:
2199
2200	CPU 1				CPU 2
2201	===============================	===============================
2202					down_xxx()
2203					Queue waiter
2204					Sleep
2205	up_yyy()
2206	LOAD waiter->task;
2207	STORE waiter->task;
2208					Woken up by other event
2209	<preempt>
2210					Resume processing
2211					down_xxx() returns
2212					call foo()
2213					foo() clobbers *waiter
2214	</preempt>
2215	LOAD waiter->list.next;
2216	--- OOPS ---
2217
2218This could be dealt with using the semaphore lock, but then the down_xxx()
2219function has to needlessly get the spinlock again after being woken up.
2220
2221The way to deal with this is to insert a general SMP memory barrier:
2222
2223	LOAD waiter->list.next;
2224	LOAD waiter->task;
2225	smp_mb();
2226	STORE waiter->task;
2227	CALL wakeup
2228	RELEASE task
2229
2230In this case, the barrier makes a guarantee that all memory accesses before the
2231barrier will appear to happen before all the memory accesses after the barrier
2232with respect to the other CPUs on the system.  It does _not_ guarantee that all
2233the memory accesses before the barrier will be complete by the time the barrier
2234instruction itself is complete.
2235
2236On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2237compiler barrier, thus making sure the compiler emits the instructions in the
2238right order without actually intervening in the CPU.  Since there's only one
2239CPU, that CPU's dependency ordering logic will take care of everything else.
2240
2241
2242ATOMIC OPERATIONS
2243-----------------
2244
2245Whilst they are technically interprocessor interaction considerations, atomic
2246operations are noted specially as some of them imply full memory barriers and
2247some don't, but they're very heavily relied on as a group throughout the
2248kernel.
2249
2250Any atomic operation that modifies some state in memory and returns information
2251about the state (old or new) implies an SMP-conditional general memory barrier
2252(smp_mb()) on each side of the actual operation (with the exception of
2253explicit lock operations, described later).  These include:
2254
2255	xchg();
2256	cmpxchg();
2257	atomic_xchg();			atomic_long_xchg();
2258	atomic_cmpxchg();		atomic_long_cmpxchg();
2259	atomic_inc_return();		atomic_long_inc_return();
2260	atomic_dec_return();		atomic_long_dec_return();
2261	atomic_add_return();		atomic_long_add_return();
2262	atomic_sub_return();		atomic_long_sub_return();
2263	atomic_inc_and_test();		atomic_long_inc_and_test();
2264	atomic_dec_and_test();		atomic_long_dec_and_test();
2265	atomic_sub_and_test();		atomic_long_sub_and_test();
2266	atomic_add_negative();		atomic_long_add_negative();
2267	test_and_set_bit();
2268	test_and_clear_bit();
2269	test_and_change_bit();
2270
2271	/* when succeeds (returns 1) */
2272	atomic_add_unless();		atomic_long_add_unless();
2273
2274These are used for such things as implementing ACQUIRE-class and RELEASE-class
2275operations and adjusting reference counters towards object destruction, and as
2276such the implicit memory barrier effects are necessary.
2277
2278
2279The following operations are potential problems as they do _not_ imply memory
2280barriers, but might be used for implementing such things as RELEASE-class
2281operations:
2282
2283	atomic_set();
2284	set_bit();
2285	clear_bit();
2286	change_bit();
2287
2288With these the appropriate explicit memory barrier should be used if necessary
2289(smp_mb__before_atomic() for instance).
2290
2291
2292The following also do _not_ imply memory barriers, and so may require explicit
2293memory barriers under some circumstances (smp_mb__before_atomic() for
2294instance):
2295
2296	atomic_add();
2297	atomic_sub();
2298	atomic_inc();
2299	atomic_dec();
2300
2301If they're used for statistics generation, then they probably don't need memory
2302barriers, unless there's a coupling between statistical data.
2303
2304If they're used for reference counting on an object to control its lifetime,
2305they probably don't need memory barriers because either the reference count
2306will be adjusted inside a locked section, or the caller will already hold
sufficient references to make the lock, and thus a memory barrier, unnecessary.
2308
2309If they're used for constructing a lock of some description, then they probably
2310do need memory barriers as a lock primitive generally has to do things in a
2311specific order.
2312
2313Basically, each usage case has to be carefully considered as to whether memory
2314barriers are needed or not.
2315
2316The following operations are special locking primitives:
2317
2318	test_and_set_bit_lock();
2319	clear_bit_unlock();
2320	__clear_bit_unlock();
2321
These implement ACQUIRE-class and RELEASE-class operations.  They should be
used in preference to other operations when implementing locking primitives,
because their implementations can be optimised on many architectures.
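
For example, a hedged sketch of a simple bit-based lock ('word' is a
hypothetical unsigned long):

	while (test_and_set_bit_lock(0, &word))	/* ACQUIRE on success */
		cpu_relax();
	/* ... critical section ... */
	clear_bit_unlock(0, &word);		/* RELEASE */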
2325
2326[!] Note that special memory barrier primitives are available for these
2327situations because on some CPUs the atomic instructions used imply full memory
2328barriers, and so barrier instructions are superfluous in conjunction with them,
2329and in such cases the special barrier primitives will be no-ops.
2330
2331See Documentation/atomic_ops.txt for more information.
2332
2333
2334ACCESSING DEVICES
2335-----------------
2336
2337Many devices can be memory mapped, and so appear to the CPU as if they're just
2338a set of memory locations.  To control such a device, the driver usually has to
2339make the right memory accesses in exactly the right order.
2340
2341However, having a clever CPU or a clever compiler creates a potential problem
2342in that the carefully sequenced accesses in the driver code won't reach the
2343device in the requisite order if the CPU or the compiler thinks it is more
2344efficient to reorder, combine or merge accesses - something that would cause
2345the device to malfunction.
2346
2347Inside of the Linux kernel, I/O should be done through the appropriate accessor
2348routines - such as inb() or writel() - which know how to make such accesses
2349appropriately sequential.  Whilst this, for the most part, renders the explicit
2350use of memory barriers unnecessary, there are a couple of situations where they
2351might be needed:
2352
2353 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2354     so for _all_ general drivers locks should be used and mmiowb() must be
2355     issued prior to unlocking the critical section.
2356
2357 (2) If the accessor functions are used to refer to an I/O memory window with
2358     relaxed memory access properties, then _mandatory_ memory barriers are
2359     required to enforce ordering.
2360
2361See Documentation/DocBook/deviceiobook.tmpl for more information.
2362
2363
2364INTERRUPTS
2365----------
2366
2367A driver may be interrupted by its own interrupt service routine, and thus the
2368two parts of the driver may interfere with each other's attempts to control or
2369access the device.
2370
2371This may be alleviated - at least in part - by disabling local interrupts (a
2372form of locking), such that the critical operations are all contained within
2373the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2374routine is executing, the driver's core may not run on the same CPU, and its
2375interrupt is not permitted to happen again until the current interrupt has been
2376handled, thus the interrupt handler does not need to lock against that.
2377
2378However, consider a driver that was talking to an ethernet card that sports an
2379address register and a data register.  If that driver's core talks to the card
2380under interrupt-disablement and then the driver's interrupt handler is invoked:
2381
2382	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	LOCAL IRQ ENABLE
	<interrupt>
	writew(4, ADDR);
2388	q = readw(DATA);
2389	</interrupt>
2390
2391The store to the data register might happen after the second store to the
2392address register if ordering rules are sufficiently relaxed:
2393
2394	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2395
2396
2397If ordering rules are relaxed, it must be assumed that accesses done inside an
2398interrupt disabled section may leak outside of it and may interleave with
2399accesses performed in an interrupt - and vice versa - unless implicit or
2400explicit barriers are used.
2401
2402Normally this won't be a problem because the I/O accesses done inside such
2403sections will include synchronous load operations on strictly ordered I/O
2404registers that form implicit I/O barriers. If this isn't sufficient then an
2405mmiowb() may need to be used explicitly.
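
For example (a hedged sketch; STATUS is a hypothetical strictly ordered
status register on the same device):

	LOCAL IRQ DISABLE
	writew(3, ADDR);
	writew(y, DATA);
	(void) readw(STATUS);	/* synchronous read: acts as an implicit
				 * I/O barrier */
	LOCAL IRQ ENABLE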
2406
2407
2408A similar situation may occur between an interrupt routine and two routines
2409running on separate CPUs that communicate with each other. If such a case is
2410likely, then interrupt-disabling locks should be used to guarantee ordering.
2411
2412
2413==========================
2414KERNEL I/O BARRIER EFFECTS
2415==========================
2416
2417When accessing I/O memory, drivers should use the appropriate accessor
2418functions:
2419
2420 (*) inX(), outX():
2421
2422     These are intended to talk to I/O space rather than memory space, but
2423     that's primarily a CPU-specific concept. The i386 and x86_64 processors do
2424     indeed have special I/O space access cycles and instructions, but many
2425     CPUs don't have such a concept.
2426
2427     The PCI bus, amongst others, defines an I/O space concept which - on such
2428     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2429     space.  However, it may also be mapped as a virtual I/O space in the CPU's
2430     memory map, particularly on those CPUs that don't support alternate I/O
2431     spaces.
2432
2433     Accesses to this space may be fully synchronous (as on i386), but
2434     intermediary bridges (such as the PCI host bridge) may not fully honour
2435     that.
2436
2437     They are guaranteed to be fully ordered with respect to each other.
2438
2439     They are not guaranteed to be fully ordered with respect to other types of
2440     memory and I/O operation.
2441
2442 (*) readX(), writeX():
2443
2444     Whether these are guaranteed to be fully ordered and uncombined with
2445     respect to each other on the issuing CPU depends on the characteristics
2446     defined for the memory window through which they're accessing. On later
2447     i386 architecture machines, for example, this is controlled by way of the
2448     MTRR registers.
2449
2450     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2451     provided they're not accessing a prefetchable device.
2452
2453     However, intermediary hardware (such as a PCI bridge) may indulge in
2454     deferral if it so wishes; to flush a store, a load from the same location
2455     is preferred[*], but a load from the same device or from configuration
2456     space should suffice for PCI.
2457
2458     [*] NOTE! attempting to load from the same location as was written to may
2459	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
2460	 example.
2461
2462     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2463     force stores to be ordered.
2464
2465     Please refer to the PCI specification for more information on interactions
2466     between PCI transactions.
2467
2468 (*) readX_relaxed()
2469
2470     These are similar to readX(), but are not guaranteed to be ordered in any
2471     way. Be aware that there is no I/O read barrier available.
2472
2473 (*) ioreadX(), iowriteX()
2474
2475     These will perform appropriately for the type of access they're actually
2476     doing, be it inX()/outX() or readX()/writeX().
2477
2478
2479========================================
2480ASSUMED MINIMUM EXECUTION ORDERING MODEL
2481========================================
2482
2483It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2484maintain the appearance of program causality with respect to itself.  Some CPUs
2485(such as i386 or x86_64) are more constrained than others (such as powerpc or
2486frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2487of arch-specific code.
2488
2489This means that it must be considered that the CPU will execute its instruction
2490stream in any order it feels like - or even in parallel - provided that if an
2491instruction in the stream depends on an earlier instruction, then that
2492earlier instruction must be sufficiently complete[*] before the later
2493instruction may proceed; in other words: provided that the appearance of
2494causality is maintained.
2495
2496 [*] Some instructions have more than one effect - such as changing the
2497     condition codes, changing registers or changing memory - and different
2498     instructions may depend on different effects.
2499
2500A CPU may also discard any instruction sequence that winds up having no
2501ultimate effect.  For example, if two adjacent instructions both load an
2502immediate value into the same register, the first may be discarded.
2503
2504
Similarly, it has to be assumed that the compiler might reorder the instruction
2506stream in any way it sees fit, again provided the appearance of causality is
2507maintained.
2508
2509
2510============================
2511THE EFFECTS OF THE CPU CACHE
2512============================
2513
2514The way cached memory operations are perceived across the system is affected to
2515a certain extent by the caches that lie between CPUs and memory, and by the
2516memory coherence system that maintains the consistency of state in the system.
2517
2518As far as the way a CPU interacts with another part of the system through the
2519caches goes, the memory system has to include the CPU's caches, and memory
2520barriers for the most part act at the interface between the CPU and its cache
2521(memory barriers logically act on the dotted line in the following diagram):
2522
2523	    <--- CPU --->         :       <----------- Memory ----------->
2524	                          :
2525	+--------+    +--------+  :   +--------+    +-----------+
2526	|        |    |        |  :   |        |    |           |    +--------+
2527	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
2528	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2529	|        |    | Queue  |  :   |        |    |           |--->| Memory |
2530	|        |    |        |  :   |        |    |           |    |        |
2531	+--------+    +--------+  :   +--------+    |           |    |        |
2532	                          :                 | Cache     |    +--------+
2533	                          :                 | Coherency |
2534	                          :                 | Mechanism |    +--------+
2535	+--------+    +--------+  :   +--------+    |           |    |	      |
2536	|        |    |        |  :   |        |    |           |    |        |
2537	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
2538	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2539	|        |    | Queue  |  :   |        |    |           |    |        |
2540	|        |    |        |  :   |        |    |           |    +--------+
2541	+--------+    +--------+  :   +--------+    +-----------+
2542	                          :
2543	                          :
2544
2545Although any particular load or store may not actually appear outside of the
2546CPU that issued it since it may have been satisfied within the CPU's own cache,
2547it will still appear as if the full memory access had taken place as far as the
2548other CPUs are concerned since the cache coherency mechanisms will migrate the
2549cacheline over to the accessing CPU and propagate the effects upon conflict.
2550
2551The CPU core may execute instructions in any order it deems fit, provided the
2552expected program causality appears to be maintained.  Some of the instructions
2553generate load and store operations which then go into the queue of memory
2554accesses to be performed.  The core may place these in the queue in any order
2555it wishes, and continue execution until it is forced to wait for an instruction
2556to complete.
2557
2558What memory barriers are concerned with is controlling the order in which
2559accesses cross from the CPU side of things to the memory side of things, and
2560the order in which the effects are perceived to happen by the other observers
2561in the system.
2562
2563[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2564their own loads and stores as if they had happened in program order.
2565
2566[!] MMIO or other device accesses may bypass the cache system.  This depends on
2567the properties of the memory window through which devices are accessed and/or
2568the use of any special device communication instructions the CPU may have.
2569
2570
2571CACHE COHERENCY
2572---------------
2573
2574Life isn't quite as simple as it may appear above, however: for while the
2575caches are expected to be coherent, there's no guarantee that that coherency
2576will be ordered.  This means that whilst changes made on one CPU will
2577eventually become visible on all CPUs, there's no guarantee that they will
2578become apparent in the same order on those other CPUs.
2579
2580
2581Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2582has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2583
2584	            :
2585	            :                          +--------+
2586	            :      +---------+         |        |
2587	+--------+  : +--->| Cache A |<------->|        |
2588	|        |  : |    +---------+         |        |
2589	|  CPU 1 |<---+                        |        |
2590	|        |  : |    +---------+         |        |
2591	+--------+  : +--->| Cache B |<------->|        |
2592	            :      +---------+         |        |
2593	            :                          | Memory |
2594	            :      +---------+         | System |
2595	+--------+  : +--->| Cache C |<------->|        |
2596	|        |  : |    +---------+         |        |
2597	|  CPU 2 |<---+                        |        |
2598	|        |  : |    +---------+         |        |
2599	+--------+  : +--->| Cache D |<------->|        |
2600	            :      +---------+         |        |
2601	            :                          +--------+
2602	            :
2603
2604Imagine the system has the following properties:
2605
2606 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2607     resident in memory;
2608
2609 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2610     resident in memory;
2611
2612 (*) whilst the CPU core is interrogating one cache, the other cache may be
2613     making use of the bus to access the rest of the system - perhaps to
2614     displace a dirty cacheline or to do a speculative load;
2615
2616 (*) each cache has a queue of operations that need to be applied to that cache
2617     to maintain coherency with the rest of the system;
2618
2619 (*) the coherency queue is not flushed by normal loads to lines already
2620     present in the cache, even though the contents of the queue may
2621     potentially affect those loads.
2622
2623Imagine, then, that two writes are made on the first CPU, with a write barrier
2624between them to guarantee that they will appear to reach that CPU's caches in
2625the requisite order:
2626
2627	CPU 1		CPU 2		COMMENT
2628	===============	===============	=======================================
2629					u == 0, v == 1 and p == &u, q == &u
2630	v = 2;
2631	smp_wmb();			Make sure change to v is visible before
2632					 change to p
2633	<A:modify v=2>			v is now in cache A exclusively
2634	p = &v;
2635	<B:modify p=&v>			p is now in cache B exclusively
2636
2637The write memory barrier forces the other CPUs in the system to perceive that
2638the local CPU's caches have apparently been updated in the correct order.  But
2639now imagine that the second CPU wants to read those values:
2640
2641	CPU 1		CPU 2		COMMENT
2642	===============	===============	=======================================
2643	...
2644			q = p;
2645			x = *q;
2646
2647The above pair of reads may then fail to happen in the expected order, as the
2648cacheline holding p may get updated in one of the second CPU's caches whilst
2649the update to the cacheline holding v is delayed in the other of the second
2650CPU's caches by some other cache event:
2651
2652	CPU 1		CPU 2		COMMENT
2653	===============	===============	=======================================
2654					u == 0, v == 1 and p == &u, q == &u
2655	v = 2;
2656	smp_wmb();
2657	<A:modify v=2>	<C:busy>
2658			<C:queue v=2>
2659	p = &v;		q = p;
2660			<D:request p>
2661	<B:modify p=&v>	<D:commit p=&v>
2662			<D:read p>
2663			x = *q;
2664			<C:read *q>	Reads from v before v updated in cache
2665			<C:unbusy>
2666			<C:commit v=2>
2667
2668Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2669no guarantee that, without intervention, the order of update will be the same
2670as that committed on CPU 1.
2671
2672
2673To intervene, we need to interpolate a data dependency barrier or a read
2674barrier between the loads.  This will force the cache to commit its coherency
2675queue before processing any further requests:
2676
2677	CPU 1		CPU 2		COMMENT
2678	===============	===============	=======================================
2679					u == 0, v == 1 and p == &u, q == &u
2680	v = 2;
2681	smp_wmb();
2682	<A:modify v=2>	<C:busy>
2683			<C:queue v=2>
2684	p = &v;		q = p;
2685			<D:request p>
2686	<B:modify p=&v>	<D:commit p=&v>
2687			<D:read p>
2688			smp_read_barrier_depends()
2689			<C:unbusy>
2690			<C:commit v=2>
2691			x = *q;
2692			<C:read *q>	Reads from v after v updated in cache
2693
2694
2695This sort of problem can be encountered on DEC Alpha processors as they have a
2696split cache that improves performance by making better use of the data bus.
2697Whilst most CPUs do imply a data dependency barrier on the read when a memory
2698access depends on a read, not all do, so it may not be relied on.
2699
2700Other CPUs may also have split caches, but must coordinate between the various
2701cachelets for normal memory accesses.  The semantics of the Alpha removes the
2702need for coordination in the absence of memory barriers.
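
In code, the scenario above corresponds to the classic pointer-publication
pattern (a hedged sketch; 'p', 'q' and 'global_ptr' are hypothetical):

	CPU 1				CPU 2
	===============================	===============================
	p->data = 42;
	smp_wmb();
	ACCESS_ONCE(global_ptr) = p;
					q = ACCESS_ONCE(global_ptr);
					smp_read_barrier_depends();
					d = q->data;	/* sees 42 */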
2703
2704
2705CACHE COHERENCY VS DMA
2706----------------------
2707
2708Not all systems maintain cache coherency with respect to devices doing DMA.  In
2709such cases, a device attempting DMA may obtain stale data from RAM because
2710dirty cache lines may be resident in the caches of various CPUs, and may not
2711have been written back to RAM yet.  To deal with this, the appropriate part of
2712the kernel must flush the overlapping bits of cache on each CPU (and maybe
2713invalidate them as well).
2714
2715In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2716cache lines being written back to RAM from a CPU's cache after the device has
2717installed its own data, or cache lines present in the CPU's cache may simply
2718obscure the fact that RAM has been updated, until at such time as the cacheline
2719is discarded from the CPU's cache and reloaded.  To deal with this, the
2720appropriate part of the kernel must invalidate the overlapping bits of the
2721cache on each CPU.
2722
2723See Documentation/cachetlb.txt for more information on cache management.
2724
2725
2726CACHE COHERENCY VS MMIO
2727-----------------------
2728
2729Memory mapped I/O usually takes place through memory locations that are part of
2730a window in the CPU's memory space that has different properties assigned than
2731the usual RAM directed window.
2732
2733Amongst these properties is usually the fact that such accesses bypass the
2734caching entirely and go directly to the device buses.  This means MMIO accesses
2735may, in effect, overtake accesses to cached memory that were emitted earlier.
2736A memory barrier isn't sufficient in such a case, but rather the cache must be
2737flushed between the cached memory write and the MMIO access if the two are in
2738any way dependent.
2739
2740
2741=========================
2742THE THINGS CPUS GET UP TO
2743=========================
2744
2745A programmer might take it for granted that the CPU will perform memory
2746operations in exactly the order specified, so that if the CPU is, for example,
2747given the following piece of code to execute:
2748
2749	a = ACCESS_ONCE(*A);
2750	ACCESS_ONCE(*B) = b;
2751	c = ACCESS_ONCE(*C);
2752	d = ACCESS_ONCE(*D);
2753	ACCESS_ONCE(*E) = e;
2754
2755they would then expect that the CPU will complete the memory operation for each
2756instruction before moving on to the next one, leading to a definite sequence of
2757operations as seen by external observers in the system:
2758
2759	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2760
2761
2762Reality is, of course, much messier.  With many CPUs and compilers, the above
2763assumption doesn't hold because:
2764
2765 (*) loads are more likely to need to be completed immediately to permit
2766     execution progress, whereas stores can often be deferred without a
2767     problem;
2768
2769 (*) loads may be done speculatively, and the result discarded should it prove
2770     to have been unnecessary;
2771
2772 (*) loads may be done speculatively, leading to the result having been fetched
2773     at the wrong time in the expected sequence of events;
2774
2775 (*) the order of the memory accesses may be rearranged to promote better use
2776     of the CPU buses and caches;
2777
2778 (*) loads and stores may be combined to improve performance when talking to
2779     memory or I/O hardware that can do batched accesses of adjacent locations,
2780     thus cutting down on transaction setup costs (memory and PCI devices may
2781     both be able to do this); and
2782
2783 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2784     mechanisms may alleviate this - once the store has actually hit the cache
2785     - there's no guarantee that the coherency management will be propagated in
2786     order to other CPUs.
2787
2788So what another CPU, say, might actually observe from the above piece of code
2789is:
2790
2791	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2792
2793	(Where "LOAD {*C,*D}" is a combined load)


However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier.  For instance with the following code:

	U = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = V;
	ACCESS_ONCE(*A) = W;
	X = ACCESS_ONCE(*A);
	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);

and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:

	U == the original value of *A
	X == W
	Z == Y
	*A == Y

The code above may cause the CPU to generate the full sequence of memory
accesses:

	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A

in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
the world remains consistent.  Note that ACCESS_ONCE() is -not- optional
in the above example, as there are architectures where a given CPU might
reorder successive loads to the same location.  On such architectures,
ACCESS_ONCE() does whatever is necessary to prevent this; for example, on
Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
special ld.acq and st.rel instructions that prevent such reordering.
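
For reference, ACCESS_ONCE() itself is just a volatile cast; the Itanium
behaviour above falls out of the way GCC implements volatile accesses on
that architecture:

	#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))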
2828
2829The compiler may also combine, discard or defer elements of the sequence before
2830the CPU even sees them.
2831
2832For instance:
2833
2834	*A = V;
2835	*A = W;
2836
2837may be reduced to:
2838
2839	*A = W;
2840
2841since, without either a write barrier or an ACCESS_ONCE(), it can be
2842assumed that the effect of the storage of V to *A is lost.  Similarly:
2843
2844	*A = Y;
2845	Z = *A;
2846
2847may, without a memory barrier or an ACCESS_ONCE(), be reduced to:
2848
2849	*A = Y;
2850	Z = Y;
2851
2852and the LOAD operation never appear outside of the CPU.
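
Marking the accesses prevents this: with ACCESS_ONCE() the compiler must emit
both stores, and a real load (though the CPU itself may still combine them,
as described above):

	ACCESS_ONCE(*A) = V;	/* the store of V must be emitted */
	ACCESS_ONCE(*A) = W;	/* ...as must the store of W, after it */

	ACCESS_ONCE(*A) = Y;
	Z = ACCESS_ONCE(*A);	/* a genuine LOAD from *A must be emitted */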


AND THEN THERE'S THE ALPHA
--------------------------

The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
some versions of the Alpha CPU have a split data cache, permitting them to have
two semantically-related cache lines updated at separate times.  This is where
the data dependency barrier really becomes necessary, as it synchronises both
caches with the memory coherence system, thus making it seem like pointer
changes and the new data they point to occur in the right order.
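
The pattern at issue is pointer publication.  A hedged sketch, in which
global_p, p, q and d are illustrative:

	CPU 1				CPU 2
	===============================	===============================
	p->data = 42;
	<write barrier>
	ACCESS_ONCE(global_p) = p;
					q = ACCESS_ONCE(global_p);
					<data dependency barrier>
					d = q->data;

Without the data dependency barrier, CPU 2 may see the new pointer yet pull
stale data from the other half of its split cache; with it, d is guaranteed
to be 42.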

The Alpha defines the Linux kernel's memory barrier model.

See the subsection on "Cache Coherency" above.


============
EXAMPLE USES
============

CIRCULAR BUFFERS
----------------

Memory barriers can be used to implement circular buffering without the need
for a lock to serialise the producer with the consumer.  See:

	Documentation/circular-buffers.txt

for details.
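
The core of that technique, reduced to a hedged sketch in which b, buf, item
and the power-of-two size are illustrative (the real thing should use the
helpers in linux/circ_buf.h):

	/* Producer CPU: fill in the item, then publish it by moving head. */
	buf[head & (size - 1)] = item;
	smp_wmb();			/* commit the item before the head update */
	ACCESS_ONCE(b->head) = head + 1;

	/* Consumer CPU: read the published head, then the item it exposes. */
	head = ACCESS_ONCE(b->head);
	if (head != tail) {
		smp_rmb();		/* read the head before the item */
		item = buf[tail & (size - 1)];
		smp_mb();		/* finish with the item before freeing the slot */
		ACCESS_ONCE(b->tail) = tail + 1;
	}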


==========
REFERENCES
==========

Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
Digital Press)
	Chapter 5.2: Physical Address Space Characteristics
	Chapter 5.4: Caches and Write Buffers
	Chapter 5.5: Data Sharing
	Chapter 5.6: Read/Write Ordering

AMD64 Architecture Programmer's Manual Volume 2: System Programming
	Chapter 7.1: Memory-Access Ordering
	Chapter 7.4: Buffering and Combining Memory Writes

IA-32 Intel Architecture Software Developer's Manual, Volume 3:
System Programming Guide
	Chapter 7.1: Locked Atomic Operations
	Chapter 7.2: Memory Ordering
	Chapter 7.4: Serializing Instructions

The SPARC Architecture Manual, Version 9
	Chapter 8: Memory Models
	Appendix D: Formal Specification of the Memory Models
	Appendix J: Programming with the Memory Models

UltraSPARC Programmer Reference Manual
	Chapter 5: Memory Accesses and Cacheability
	Chapter 15: Sparc-V9 Memory Models

UltraSPARC III Cu User's Manual
	Chapter 9: Memory Models

UltraSPARC IIIi Processor User's Manual
	Chapter 8: Memory Models

UltraSPARC Architecture 2005
	Chapter 9: Memory
	Appendix D: Formal Specifications of the Memory Models

UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
	Chapter 8: Memory Models
	Appendix F: Caches and Cache Coherency

Solaris Internals, Core Kernel Architecture, p63-68:
	Chapter 3.3: Hardware Considerations for Locks and
			Synchronization

Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
for Kernel Programmers:
	Chapter 13: Other Memory Models

Intel Itanium Architecture Software Developer's Manual: Volume 1:
	Section 2.6: Speculation
	Section 4.4: Memory Access
2940