1			 ============================
2			 LINUX KERNEL MEMORY BARRIERS
3			 ============================
4
5By: David Howells <dhowells@redhat.com>
6    Paul E. McKenney <paulmck@linux.vnet.ibm.com>
7
8Contents:
9
10 (*) Abstract memory access model.
11
12     - Device operations.
13     - Guarantees.
14
15 (*) What are memory barriers?
16
17     - Varieties of memory barrier.
18     - What may not be assumed about memory barriers?
19     - Data dependency barriers.
20     - Control dependencies.
21     - SMP barrier pairing.
22     - Examples of memory barrier sequences.
23     - Read memory barriers vs load speculation.
24     - Transitivity
25
26 (*) Explicit kernel barriers.
27
28     - Compiler barrier.
29     - CPU memory barriers.
30     - MMIO write barrier.
31
32 (*) Implicit kernel memory barriers.
33
34     - Locking functions.
35     - Interrupt disabling functions.
36     - Sleep and wake-up functions.
37     - Miscellaneous functions.
38
39 (*) Inter-CPU locking barrier effects.
40
41     - Locks vs memory accesses.
42     - Locks vs I/O accesses.
43
44 (*) Where are memory barriers needed?
45
46     - Interprocessor interaction.
47     - Atomic operations.
48     - Accessing devices.
49     - Interrupts.
50
51 (*) Kernel I/O barrier effects.
52
53 (*) Assumed minimum execution ordering model.
54
55 (*) The effects of the cpu cache.
56
57     - Cache coherency.
58     - Cache coherency vs DMA.
59     - Cache coherency vs MMIO.
60
61 (*) The things CPUs get up to.
62
63     - And then there's the Alpha.
64
65 (*) Example uses.
66
67     - Circular buffers.
68
69 (*) References.
70
71
72============================
73ABSTRACT MEMORY ACCESS MODEL
74============================
75
76Consider the following abstract model of the system:
77
78		            :                :
79		            :                :
80		            :                :
81		+-------+   :   +--------+   :   +-------+
82		|       |   :   |        |   :   |       |
83		|       |   :   |        |   :   |       |
84		| CPU 1 |<----->| Memory |<----->| CPU 2 |
85		|       |   :   |        |   :   |       |
86		|       |   :   |        |   :   |       |
87		+-------+   :   +--------+   :   +-------+
88		    ^       :       ^        :       ^
89		    |       :       |        :       |
90		    |       :       |        :       |
91		    |       :       v        :       |
92		    |       :   +--------+   :       |
93		    |       :   |        |   :       |
94		    |       :   |        |   :       |
95		    +---------->| Device |<----------+
96		            :   |        |   :
97		            :   |        |   :
98		            :   +--------+   :
99		            :                :
100
101Each CPU executes a program that generates memory access operations.  In the
102abstract CPU, memory operation ordering is very relaxed, and a CPU may actually
103perform the memory operations in any order it likes, provided program causality
104appears to be maintained.  Similarly, the compiler may also arrange the
105instructions it emits in any order it likes, provided it doesn't affect the
106apparent operation of the program.
107
108So in the above diagram, the effects of the memory operations performed by a
109CPU are perceived by the rest of the system as the operations cross the
110interface between the CPU and rest of the system (the dotted lines).
111
112
113For example, consider the following sequence of events:
114
115	CPU 1		CPU 2
116	===============	===============
117	{ A == 1; B == 2 }
118	A = 3;		x = B;
119	B = 4;		y = A;
120
121The set of accesses as seen by the memory system in the middle can be arranged
122in 24 different combinations:
123
124	STORE A=3,	STORE B=4,	y=LOAD A->3,	x=LOAD B->4
125	STORE A=3,	STORE B=4,	x=LOAD B->4,	y=LOAD A->3
126	STORE A=3,	y=LOAD A->3,	STORE B=4,	x=LOAD B->4
127	STORE A=3,	y=LOAD A->3,	x=LOAD B->2,	STORE B=4
128	STORE A=3,	x=LOAD B->2,	STORE B=4,	y=LOAD A->3
129	STORE A=3,	x=LOAD B->2,	y=LOAD A->3,	STORE B=4
130	STORE B=4,	STORE A=3,	y=LOAD A->3,	x=LOAD B->4
131	STORE B=4, ...
132	...
133
134and can thus result in four different combinations of values:
135
136	x == 2, y == 1
137	x == 2, y == 3
138	x == 4, y == 1
139	x == 4, y == 3
140
141
142Furthermore, the stores committed by a CPU to the memory system may not be
143perceived by the loads made by another CPU in the same order as the stores were
144committed.
145
146
147As a further example, consider this sequence of events:
148
149	CPU 1		CPU 2
150	===============	===============
151	{ A == 1, B == 2, C = 3, P == &A, Q == &C }
152	B = 4;		Q = P;
153	P = &B		D = *Q;
154
155There is an obvious data dependency here, as the value loaded into D depends on
156the address retrieved from P by CPU 2.  At the end of the sequence, any of the
157following results are possible:
158
159	(Q == &A) and (D == 1)
160	(Q == &B) and (D == 2)
161	(Q == &B) and (D == 4)
162
163Note that CPU 2 will never try and load C into D because the CPU will load P
164into Q before issuing the load of *Q.
165
166
167DEVICE OPERATIONS
168-----------------
169
170Some devices present their control interfaces as collections of memory
171locations, but the order in which the control registers are accessed is very
172important.  For instance, imagine an ethernet card with a set of internal
173registers that are accessed through an address port register (A) and a data
174port register (D).  To read internal register 5, the following code might then
175be used:
176
177	*A = 5;
178	x = *D;
179
180but this might show up as either of the following two sequences:
181
182	STORE *A = 5, x = LOAD *D
183	x = LOAD *D, STORE *A = 5
184
185the second of which will almost certainly result in a malfunction, since it set
186the address _after_ attempting to read the register.
187
188
189GUARANTEES
190----------
191
192There are some minimal guarantees that may be expected of a CPU:
193
194 (*) On any given CPU, dependent memory accesses will be issued in order, with
195     respect to itself.  This means that for:
196
197	ACCESS_ONCE(Q) = P; smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
198
199     the CPU will issue the following memory operations:
200
201	Q = LOAD P, D = LOAD *Q
202
203     and always in that order.  On most systems, smp_read_barrier_depends()
204     does nothing, but it is required for DEC Alpha.  The ACCESS_ONCE()
205     is required to prevent compiler mischief.  Please note that you
206     should normally use something like rcu_dereference() instead of
207     open-coding smp_read_barrier_depends().
208
209 (*) Overlapping loads and stores within a particular CPU will appear to be
210     ordered within that CPU.  This means that for:
211
212	a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
213
214     the CPU will only issue the following sequence of memory operations:
215
216	a = LOAD *X, STORE *X = b
217
218     And for:
219
220	ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
221
222     the CPU will only issue:
223
224	STORE *X = c, d = LOAD *X
225
226     (Loads and stores overlap if they are targeted at overlapping pieces of
227     memory).
228
229And there are a number of things that _must_ or _must_not_ be assumed:
230
231 (*) It _must_not_ be assumed that the compiler will do what you want with
232     memory references that are not protected by ACCESS_ONCE().  Without
233     ACCESS_ONCE(), the compiler is within its rights to do all sorts
234     of "creative" transformations, which are covered in the Compiler
235     Barrier section.
236
237 (*) It _must_not_ be assumed that independent loads and stores will be issued
238     in the order given.  This means that for:
239
240	X = *A; Y = *B; *D = Z;
241
242     we may get any of the following sequences:
243
244	X = LOAD *A,  Y = LOAD *B,  STORE *D = Z
245	X = LOAD *A,  STORE *D = Z, Y = LOAD *B
246	Y = LOAD *B,  X = LOAD *A,  STORE *D = Z
247	Y = LOAD *B,  STORE *D = Z, X = LOAD *A
248	STORE *D = Z, X = LOAD *A,  Y = LOAD *B
249	STORE *D = Z, Y = LOAD *B,  X = LOAD *A
250
251 (*) It _must_ be assumed that overlapping memory accesses may be merged or
252     discarded.  This means that for:
253
254	X = *A; Y = *(A + 4);
255
256     we may get any one of the following sequences:
257
258	X = LOAD *A; Y = LOAD *(A + 4);
259	Y = LOAD *(A + 4); X = LOAD *A;
260	{X, Y} = LOAD {*A, *(A + 4) };
261
262     And for:
263
264	*A = X; *(A + 4) = Y;
265
266     we may get any of:
267
268	STORE *A = X; STORE *(A + 4) = Y;
269	STORE *(A + 4) = Y; STORE *A = X;
270	STORE {*A, *(A + 4) } = {X, Y};
271
272
273=========================
274WHAT ARE MEMORY BARRIERS?
275=========================
276
277As can be seen above, independent memory operations are effectively performed
278in random order, but this can be a problem for CPU-CPU interaction and for I/O.
279What is required is some way of intervening to instruct the compiler and the
280CPU to restrict the order.
281
282Memory barriers are such interventions.  They impose a perceived partial
283ordering over the memory operations on either side of the barrier.
284
285Such enforcement is important because the CPUs and other devices in a system
286can use a variety of tricks to improve performance, including reordering,
287deferral and combination of memory operations; speculative loads; speculative
288branch prediction and various types of caching.  Memory barriers are used to
289override or suppress these tricks, allowing the code to sanely control the
290interaction of multiple CPUs and/or devices.
291
292
293VARIETIES OF MEMORY BARRIER
294---------------------------
295
296Memory barriers come in four basic varieties:
297
298 (1) Write (or store) memory barriers.
299
300     A write memory barrier gives a guarantee that all the STORE operations
301     specified before the barrier will appear to happen before all the STORE
302     operations specified after the barrier with respect to the other
303     components of the system.
304
305     A write barrier is a partial ordering on stores only; it is not required
306     to have any effect on loads.
307
308     A CPU can be viewed as committing a sequence of store operations to the
309     memory system as time progresses.  All stores before a write barrier will
310     occur in the sequence _before_ all the stores after the write barrier.
311
312     [!] Note that write barriers should normally be paired with read or data
313     dependency barriers; see the "SMP barrier pairing" subsection.
314
315
316 (2) Data dependency barriers.
317
318     A data dependency barrier is a weaker form of read barrier.  In the case
319     where two loads are performed such that the second depends on the result
320     of the first (eg: the first load retrieves the address to which the second
321     load will be directed), a data dependency barrier would be required to
322     make sure that the target of the second load is updated before the address
323     obtained by the first load is accessed.
324
325     A data dependency barrier is a partial ordering on interdependent loads
326     only; it is not required to have any effect on stores, independent loads
327     or overlapping loads.
328
329     As mentioned in (1), the other CPUs in the system can be viewed as
330     committing sequences of stores to the memory system that the CPU being
331     considered can then perceive.  A data dependency barrier issued by the CPU
332     under consideration guarantees that for any load preceding it, if that
333     load touches one of a sequence of stores from another CPU, then by the
334     time the barrier completes, the effects of all the stores prior to that
335     touched by the load will be perceptible to any loads issued after the data
336     dependency barrier.
337
338     See the "Examples of memory barrier sequences" subsection for diagrams
339     showing the ordering constraints.
340
341     [!] Note that the first load really has to have a _data_ dependency and
342     not a control dependency.  If the address for the second load is dependent
343     on the first load, but the dependency is through a conditional rather than
344     actually loading the address itself, then it's a _control_ dependency and
345     a full read barrier or better is required.  See the "Control dependencies"
346     subsection for more information.
347
348     [!] Note that data dependency barriers should normally be paired with
349     write barriers; see the "SMP barrier pairing" subsection.
350
351
352 (3) Read (or load) memory barriers.
353
354     A read barrier is a data dependency barrier plus a guarantee that all the
355     LOAD operations specified before the barrier will appear to happen before
356     all the LOAD operations specified after the barrier with respect to the
357     other components of the system.
358
359     A read barrier is a partial ordering on loads only; it is not required to
360     have any effect on stores.
361
362     Read memory barriers imply data dependency barriers, and so can substitute
363     for them.
364
365     [!] Note that read barriers should normally be paired with write barriers;
366     see the "SMP barrier pairing" subsection.
367
368
369 (4) General memory barriers.
370
371     A general memory barrier gives a guarantee that all the LOAD and STORE
372     operations specified before the barrier will appear to happen before all
373     the LOAD and STORE operations specified after the barrier with respect to
374     the other components of the system.
375
376     A general memory barrier is a partial ordering over both loads and stores.
377
378     General memory barriers imply both read and write memory barriers, and so
379     can substitute for either.
380
381
382And a couple of implicit varieties:
383
384 (5) ACQUIRE operations.
385
386     This acts as a one-way permeable barrier.  It guarantees that all memory
387     operations after the ACQUIRE operation will appear to happen after the
388     ACQUIRE operation with respect to the other components of the system.
389     ACQUIRE operations include LOCK operations and smp_load_acquire()
390     operations.
391
392     Memory operations that occur before an ACQUIRE operation may appear to
393     happen after it completes.
394
395     An ACQUIRE operation should almost always be paired with a RELEASE
396     operation.
397
398
399 (6) RELEASE operations.
400
401     This also acts as a one-way permeable barrier.  It guarantees that all
402     memory operations before the RELEASE operation will appear to happen
403     before the RELEASE operation with respect to the other components of the
404     system. RELEASE operations include UNLOCK operations and
405     smp_store_release() operations.
406
407     Memory operations that occur after a RELEASE operation may appear to
408     happen before it completes.
409
410     The use of ACQUIRE and RELEASE operations generally precludes the need
411     for other sorts of memory barrier (but note the exceptions mentioned in
412     the subsection "MMIO write barrier").  In addition, a RELEASE+ACQUIRE
413     pair is -not- guaranteed to act as a full memory barrier.  However, after
414     an ACQUIRE on a given variable, all memory accesses preceding any prior
415     RELEASE on that same variable are guaranteed to be visible.  In other
416     words, within a given variable's critical section, all accesses of all
417     previous critical sections for that variable are guaranteed to have
418     completed.
419
420     This means that ACQUIRE acts as a minimal "acquire" operation and
421     RELEASE acts as a minimal "release" operation.
422
423
424Memory barriers are only required where there's a possibility of interaction
425between two CPUs or between a CPU and a device.  If it can be guaranteed that
426there won't be any such interaction in any particular piece of code, then
427memory barriers are unnecessary in that piece of code.
428
429
430Note that these are the _minimum_ guarantees.  Different architectures may give
431more substantial guarantees, but they may _not_ be relied upon outside of arch
432specific code.
433
434
435WHAT MAY NOT BE ASSUMED ABOUT MEMORY BARRIERS?
436----------------------------------------------
437
438There are certain things that the Linux kernel memory barriers do not guarantee:
439
440 (*) There is no guarantee that any of the memory accesses specified before a
441     memory barrier will be _complete_ by the completion of a memory barrier
442     instruction; the barrier can be considered to draw a line in that CPU's
443     access queue that accesses of the appropriate type may not cross.
444
445 (*) There is no guarantee that issuing a memory barrier on one CPU will have
446     any direct effect on another CPU or any other hardware in the system.  The
447     indirect effect will be the order in which the second CPU sees the effects
448     of the first CPU's accesses occur, but see the next point:
449
450 (*) There is no guarantee that a CPU will see the correct order of effects
451     from a second CPU's accesses, even _if_ the second CPU uses a memory
452     barrier, unless the first CPU _also_ uses a matching memory barrier (see
453     the subsection on "SMP Barrier Pairing").
454
455 (*) There is no guarantee that some intervening piece of off-the-CPU
456     hardware[*] will not reorder the memory accesses.  CPU cache coherency
457     mechanisms should propagate the indirect effects of a memory barrier
458     between CPUs, but might not do so in order.
459
460	[*] For information on bus mastering DMA and coherency please read:
461
462	    Documentation/PCI/pci.txt
463	    Documentation/DMA-API-HOWTO.txt
464	    Documentation/DMA-API.txt
465
466
467DATA DEPENDENCY BARRIERS
468------------------------
469
470The usage requirements of data dependency barriers are a little subtle, and
471it's not always obvious that they're needed.  To illustrate, consider the
472following sequence of events:
473
474	CPU 1		      CPU 2
475	===============	      ===============
476	{ A == 1, B == 2, C = 3, P == &A, Q == &C }
477	B = 4;
478	<write barrier>
479	ACCESS_ONCE(P) = &B
480			      Q = ACCESS_ONCE(P);
481			      D = *Q;
482
483There's a clear data dependency here, and it would seem that by the end of the
484sequence, Q must be either &A or &B, and that:
485
486	(Q == &A) implies (D == 1)
487	(Q == &B) implies (D == 4)
488
489But!  CPU 2's perception of P may be updated _before_ its perception of B, thus
490leading to the following situation:
491
492	(Q == &B) and (D == 2) ????
493
494Whilst this may seem like a failure of coherency or causality maintenance, it
495isn't, and this behaviour can be observed on certain real CPUs (such as the DEC
496Alpha).
497
498To deal with this, a data dependency barrier or better must be inserted
499between the address load and the data load:
500
501	CPU 1		      CPU 2
502	===============	      ===============
503	{ A == 1, B == 2, C = 3, P == &A, Q == &C }
504	B = 4;
505	<write barrier>
506	ACCESS_ONCE(P) = &B
507			      Q = ACCESS_ONCE(P);
508			      <data dependency barrier>
509			      D = *Q;
510
511This enforces the occurrence of one of the two implications, and prevents the
512third possibility from arising.
513
514[!] Note that this extremely counterintuitive situation arises most easily on
515machines with split caches, so that, for example, one cache bank processes
516even-numbered cache lines and the other bank processes odd-numbered cache
517lines.  The pointer P might be stored in an odd-numbered cache line, and the
518variable B might be stored in an even-numbered cache line.  Then, if the
519even-numbered bank of the reading CPU's cache is extremely busy while the
520odd-numbered bank is idle, one can see the new value of the pointer P (&B),
521but the old value of the variable B (2).
522
523
524Another example of where data dependency barriers might be required is where a
525number is read from memory and then used to calculate the index for an array
526access:
527
528	CPU 1		      CPU 2
529	===============	      ===============
530	{ M[0] == 1, M[1] == 2, M[3] = 3, P == 0, Q == 3 }
531	M[1] = 4;
532	<write barrier>
533	ACCESS_ONCE(P) = 1
534			      Q = ACCESS_ONCE(P);
535			      <data dependency barrier>
536			      D = M[Q];
537
538
539The data dependency barrier is very important to the RCU system,
540for example.  See rcu_assign_pointer() and rcu_dereference() in
541include/linux/rcupdate.h.  This permits the current target of an RCU'd
542pointer to be replaced with a new modified target, without the replacement
543target appearing to be incompletely initialised.
544
545See also the subsection on "Cache Coherency" for a more thorough example.
546
547
548CONTROL DEPENDENCIES
549--------------------
550
551A control dependency requires a full read memory barrier, not simply a data
552dependency barrier to make it work correctly.  Consider the following bit of
553code:
554
555	q = ACCESS_ONCE(a);
556	if (q) {
557		<data dependency barrier>  /* BUG: No data dependency!!! */
558		p = ACCESS_ONCE(b);
559	}
560
561This will not have the desired effect because there is no actual data
562dependency, but rather a control dependency that the CPU may short-circuit
563by attempting to predict the outcome in advance, so that other CPUs see
564the load from b as having happened before the load from a.  In such a
565case what's actually required is:
566
567	q = ACCESS_ONCE(a);
568	if (q) {
569		<read barrier>
570		p = ACCESS_ONCE(b);
571	}
572
573However, stores are not speculated.  This means that ordering -is- provided
574in the following example:
575
576	q = ACCESS_ONCE(a);
577	if (q) {
578		ACCESS_ONCE(b) = p;
579	}
580
581Please note that ACCESS_ONCE() is not optional!  Without the
582ACCESS_ONCE(), might combine the load from 'a' with other loads from
583'a', and the store to 'b' with other stores to 'b', with possible highly
584counterintuitive effects on ordering.
585
586Worse yet, if the compiler is able to prove (say) that the value of
587variable 'a' is always non-zero, it would be well within its rights
588to optimize the original example by eliminating the "if" statement
589as follows:
590
591	q = a;
592	b = p;  /* BUG: Compiler and CPU can both reorder!!! */
593
594So don't leave out the ACCESS_ONCE().
595
596It is tempting to try to enforce ordering on identical stores on both
597branches of the "if" statement as follows:
598
599	q = ACCESS_ONCE(a);
600	if (q) {
601		barrier();
602		ACCESS_ONCE(b) = p;
603		do_something();
604	} else {
605		barrier();
606		ACCESS_ONCE(b) = p;
607		do_something_else();
608	}
609
610Unfortunately, current compilers will transform this as follows at high
611optimization levels:
612
613	q = ACCESS_ONCE(a);
614	barrier();
615	ACCESS_ONCE(b) = p;  /* BUG: No ordering vs. load from a!!! */
616	if (q) {
617		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
618		do_something();
619	} else {
620		/* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
621		do_something_else();
622	}
623
624Now there is no conditional between the load from 'a' and the store to
625'b', which means that the CPU is within its rights to reorder them:
626The conditional is absolutely required, and must be present in the
627assembly code even after all compiler optimizations have been applied.
628Therefore, if you need ordering in this example, you need explicit
629memory barriers, for example, smp_store_release():
630
631	q = ACCESS_ONCE(a);
632	if (q) {
633		smp_store_release(&b, p);
634		do_something();
635	} else {
636		smp_store_release(&b, p);
637		do_something_else();
638	}
639
640In contrast, without explicit memory barriers, two-legged-if control
641ordering is guaranteed only when the stores differ, for example:
642
643	q = ACCESS_ONCE(a);
644	if (q) {
645		ACCESS_ONCE(b) = p;
646		do_something();
647	} else {
648		ACCESS_ONCE(b) = r;
649		do_something_else();
650	}
651
652The initial ACCESS_ONCE() is still required to prevent the compiler from
653proving the value of 'a'.
654
655In addition, you need to be careful what you do with the local variable 'q',
656otherwise the compiler might be able to guess the value and again remove
657the needed conditional.  For example:
658
659	q = ACCESS_ONCE(a);
660	if (q % MAX) {
661		ACCESS_ONCE(b) = p;
662		do_something();
663	} else {
664		ACCESS_ONCE(b) = r;
665		do_something_else();
666	}
667
668If MAX is defined to be 1, then the compiler knows that (q % MAX) is
669equal to zero, in which case the compiler is within its rights to
670transform the above code into the following:
671
672	q = ACCESS_ONCE(a);
673	ACCESS_ONCE(b) = p;
674	do_something_else();
675
676Given this transformation, the CPU is not required to respect the ordering
677between the load from variable 'a' and the store to variable 'b'.  It is
678tempting to add a barrier(), but this does not help.  The conditional
679is gone, and the barrier won't bring it back.  Therefore, if you are
680relying on this ordering, you should make sure that MAX is greater than
681one, perhaps as follows:
682
683	q = ACCESS_ONCE(a);
684	BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
685	if (q % MAX) {
686		ACCESS_ONCE(b) = p;
687		do_something();
688	} else {
689		ACCESS_ONCE(b) = r;
690		do_something_else();
691	}
692
693Please note once again that the stores to 'b' differ.  If they were
694identical, as noted earlier, the compiler could pull this store outside
695of the 'if' statement.
696
697You must also be careful not to rely too much on boolean short-circuit
698evaluation.  Consider this example:
699
700	q = ACCESS_ONCE(a);
701	if (a || 1 > 0)
702		ACCESS_ONCE(b) = 1;
703
704Because the second condition is always true, the compiler can transform
705this example as following, defeating control dependency:
706
707	q = ACCESS_ONCE(a);
708	ACCESS_ONCE(b) = 1;
709
710This example underscores the need to ensure that the compiler cannot
711out-guess your code.  More generally, although ACCESS_ONCE() does force
712the compiler to actually emit code for a given load, it does not force
713the compiler to use the results.
714
715Finally, control dependencies do -not- provide transitivity.  This is
716demonstrated by two related examples, with the initial values of
717x and y both being zero:
718
719	CPU 0                     CPU 1
720	=====================     =====================
721	r1 = ACCESS_ONCE(x);      r2 = ACCESS_ONCE(y);
722	if (r1 > 0)               if (r2 > 0)
723	  ACCESS_ONCE(y) = 1;       ACCESS_ONCE(x) = 1;
724
725	assert(!(r1 == 1 && r2 == 1));
726
727The above two-CPU example will never trigger the assert().  However,
728if control dependencies guaranteed transitivity (which they do not),
729then adding the following CPU would guarantee a related assertion:
730
731	CPU 2
732	=====================
733	ACCESS_ONCE(x) = 2;
734
735	assert(!(r1 == 2 && r2 == 1 && x == 2)); /* FAILS!!! */
736
737But because control dependencies do -not- provide transitivity, the above
738assertion can fail after the combined three-CPU example completes.  If you
739need the three-CPU example to provide ordering, you will need smp_mb()
740between the loads and stores in the CPU 0 and CPU 1 code fragments,
741that is, just before or just after the "if" statements.
742
743These two examples are the LB and WWC litmus tests from this paper:
744http://www.cl.cam.ac.uk/users/pes20/ppc-supplemental/test6.pdf and this
745site: https://www.cl.cam.ac.uk/~pes20/ppcmem/index.html.
746
747In summary:
748
749  (*) Control dependencies can order prior loads against later stores.
750      However, they do -not- guarantee any other sort of ordering:
751      Not prior loads against later loads, nor prior stores against
752      later anything.  If you need these other forms of ordering,
753      use smb_rmb(), smp_wmb(), or, in the case of prior stores and
754      later loads, smp_mb().
755
756  (*) If both legs of the "if" statement begin with identical stores
757      to the same variable, a barrier() statement is required at the
758      beginning of each leg of the "if" statement.
759
760  (*) Control dependencies require at least one run-time conditional
761      between the prior load and the subsequent store, and this
762      conditional must involve the prior load.  If the compiler
763      is able to optimize the conditional away, it will have also
764      optimized away the ordering.  Careful use of ACCESS_ONCE() can
765      help to preserve the needed conditional.
766
767  (*) Control dependencies require that the compiler avoid reordering the
768      dependency into nonexistence.  Careful use of ACCESS_ONCE() or
769      barrier() can help to preserve your control dependency.  Please
770      see the Compiler Barrier section for more information.
771
772  (*) Control dependencies do -not- provide transitivity.  If you
773      need transitivity, use smp_mb().
774
775
776SMP BARRIER PAIRING
777-------------------
778
779When dealing with CPU-CPU interactions, certain types of memory barrier should
780always be paired.  A lack of appropriate pairing is almost certainly an error.
781
782General barriers pair with each other, though they also pair with
783most other types of barriers, albeit without transitivity.  An acquire
784barrier pairs with a release barrier, but both may also pair with other
785barriers, including of course general barriers.  A write barrier pairs
786with a data dependency barrier, an acquire barrier, a release barrier,
787a read barrier, or a general barrier.  Similarly a read barrier or a
788data dependency barrier pairs with a write barrier, an acquire barrier,
789a release barrier, or a general barrier:
790
791	CPU 1		      CPU 2
792	===============	      ===============
793	ACCESS_ONCE(a) = 1;
794	<write barrier>
795	ACCESS_ONCE(b) = 2;   x = ACCESS_ONCE(b);
796			      <read barrier>
797			      y = ACCESS_ONCE(a);
798
799Or:
800
801	CPU 1		      CPU 2
802	===============	      ===============================
803	a = 1;
804	<write barrier>
805	ACCESS_ONCE(b) = &a;  x = ACCESS_ONCE(b);
806			      <data dependency barrier>
807			      y = *x;
808
809Basically, the read barrier always has to be there, even though it can be of
810the "weaker" type.
811
812[!] Note that the stores before the write barrier would normally be expected to
813match the loads after the read barrier or the data dependency barrier, and vice
814versa:
815
816	CPU 1                               CPU 2
817	===================                 ===================
818	ACCESS_ONCE(a) = 1;  }----   --->{  v = ACCESS_ONCE(c);
819	ACCESS_ONCE(b) = 2;  }    \ /    {  w = ACCESS_ONCE(d);
820	<write barrier>            \        <read barrier>
821	ACCESS_ONCE(c) = 3;  }    / \    {  x = ACCESS_ONCE(a);
822	ACCESS_ONCE(d) = 4;  }----   --->{  y = ACCESS_ONCE(b);
823
824
825EXAMPLES OF MEMORY BARRIER SEQUENCES
826------------------------------------
827
828Firstly, write barriers act as partial orderings on store operations.
829Consider the following sequence of events:
830
831	CPU 1
832	=======================
833	STORE A = 1
834	STORE B = 2
835	STORE C = 3
836	<write barrier>
837	STORE D = 4
838	STORE E = 5
839
840This sequence of events is committed to the memory coherence system in an order
841that the rest of the system might perceive as the unordered set of { STORE A,
842STORE B, STORE C } all occurring before the unordered set of { STORE D, STORE E
843}:
844
845	+-------+       :      :
846	|       |       +------+
847	|       |------>| C=3  |     }     /\
848	|       |  :    +------+     }-----  \  -----> Events perceptible to
849	|       |  :    | A=1  |     }        \/       the rest of the system
850	|       |  :    +------+     }
851	| CPU 1 |  :    | B=2  |     }
852	|       |       +------+     }
853	|       |   wwwwwwwwwwwwwwww }   <--- At this point the write barrier
854	|       |       +------+     }        requires all stores prior to the
855	|       |  :    | E=5  |     }        barrier to be committed before
856	|       |  :    +------+     }        further stores may take place
857	|       |------>| D=4  |     }
858	|       |       +------+
859	+-------+       :      :
860	                   |
861	                   | Sequence in which stores are committed to the
862	                   | memory system by CPU 1
863	                   V
864
865
866Secondly, data dependency barriers act as partial orderings on data-dependent
867loads.  Consider the following sequence of events:
868
869	CPU 1			CPU 2
870	=======================	=======================
871		{ B = 7; X = 9; Y = 8; C = &Y }
872	STORE A = 1
873	STORE B = 2
874	<write barrier>
875	STORE C = &B		LOAD X
876	STORE D = 4		LOAD C (gets &B)
877				LOAD *C (reads B)
878
879Without intervention, CPU 2 may perceive the events on CPU 1 in some
880effectively random order, despite the write barrier issued by CPU 1:
881
882	+-------+       :      :                :       :
883	|       |       +------+                +-------+  | Sequence of update
884	|       |------>| B=2  |-----       --->| Y->8  |  | of perception on
885	|       |  :    +------+     \          +-------+  | CPU 2
886	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |  V
887	|       |       +------+       |        +-------+
888	|       |   wwwwwwwwwwwwwwww   |        :       :
889	|       |       +------+       |        :       :
890	|       |  :    | C=&B |---    |        :       :       +-------+
891	|       |  :    +------+   \   |        +-------+       |       |
892	|       |------>| D=4  |    ----------->| C->&B |------>|       |
893	|       |       +------+       |        +-------+       |       |
894	+-------+       :      :       |        :       :       |       |
895	                               |        :       :       |       |
896	                               |        :       :       | CPU 2 |
897	                               |        +-------+       |       |
898	    Apparently incorrect --->  |        | B->7  |------>|       |
899	    perception of B (!)        |        +-------+       |       |
900	                               |        :       :       |       |
901	                               |        +-------+       |       |
902	    The load of X holds --->    \       | X->9  |------>|       |
903	    up the maintenance           \      +-------+       |       |
904	    of coherence of B             ----->| B->2  |       +-------+
905	                                        +-------+
906	                                        :       :
907
908
909In the above example, CPU 2 perceives that B is 7, despite the load of *C
910(which would be B) coming after the LOAD of C.
911
912If, however, a data dependency barrier were to be placed between the load of C
913and the load of *C (ie: B) on CPU 2:
914
915	CPU 1			CPU 2
916	=======================	=======================
917		{ B = 7; X = 9; Y = 8; C = &Y }
918	STORE A = 1
919	STORE B = 2
920	<write barrier>
921	STORE C = &B		LOAD X
922	STORE D = 4		LOAD C (gets &B)
923				<data dependency barrier>
924				LOAD *C (reads B)
925
926then the following will occur:
927
928	+-------+       :      :                :       :
929	|       |       +------+                +-------+
930	|       |------>| B=2  |-----       --->| Y->8  |
931	|       |  :    +------+     \          +-------+
932	| CPU 1 |  :    | A=1  |      \     --->| C->&Y |
933	|       |       +------+       |        +-------+
934	|       |   wwwwwwwwwwwwwwww   |        :       :
935	|       |       +------+       |        :       :
936	|       |  :    | C=&B |---    |        :       :       +-------+
937	|       |  :    +------+   \   |        +-------+       |       |
938	|       |------>| D=4  |    ----------->| C->&B |------>|       |
939	|       |       +------+       |        +-------+       |       |
940	+-------+       :      :       |        :       :       |       |
941	                               |        :       :       |       |
942	                               |        :       :       | CPU 2 |
943	                               |        +-------+       |       |
944	                               |        | X->9  |------>|       |
945	                               |        +-------+       |       |
946	  Makes sure all effects --->   \   ddddddddddddddddd   |       |
947	  prior to the store of C        \      +-------+       |       |
948	  are perceptible to              ----->| B->2  |------>|       |
949	  subsequent loads                      +-------+       |       |
950	                                        :       :       +-------+
951
952
953And thirdly, a read barrier acts as a partial order on loads.  Consider the
954following sequence of events:
955
956	CPU 1			CPU 2
957	=======================	=======================
958		{ A = 0, B = 9 }
959	STORE A=1
960	<write barrier>
961	STORE B=2
962				LOAD B
963				LOAD A
964
965Without intervention, CPU 2 may then choose to perceive the events on CPU 1 in
966some effectively random order, despite the write barrier issued by CPU 1:
967
968	+-------+       :      :                :       :
969	|       |       +------+                +-------+
970	|       |------>| A=1  |------      --->| A->0  |
971	|       |       +------+      \         +-------+
972	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
973	|       |       +------+        |       +-------+
974	|       |------>| B=2  |---     |       :       :
975	|       |       +------+   \    |       :       :       +-------+
976	+-------+       :      :    \   |       +-------+       |       |
977	                             ---------->| B->2  |------>|       |
978	                                |       +-------+       | CPU 2 |
979	                                |       | A->0  |------>|       |
980	                                |       +-------+       |       |
981	                                |       :       :       +-------+
982	                                 \      :       :
983	                                  \     +-------+
984	                                   ---->| A->1  |
985	                                        +-------+
986	                                        :       :
987
988
989If, however, a read barrier were to be placed between the load of B and the
990load of A on CPU 2:
991
992	CPU 1			CPU 2
993	=======================	=======================
994		{ A = 0, B = 9 }
995	STORE A=1
996	<write barrier>
997	STORE B=2
998				LOAD B
999				<read barrier>
1000				LOAD A
1001
1002then the partial ordering imposed by CPU 1 will be perceived correctly by CPU
10032:
1004
1005	+-------+       :      :                :       :
1006	|       |       +------+                +-------+
1007	|       |------>| A=1  |------      --->| A->0  |
1008	|       |       +------+      \         +-------+
1009	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1010	|       |       +------+        |       +-------+
1011	|       |------>| B=2  |---     |       :       :
1012	|       |       +------+   \    |       :       :       +-------+
1013	+-------+       :      :    \   |       +-------+       |       |
1014	                             ---------->| B->2  |------>|       |
1015	                                |       +-------+       | CPU 2 |
1016	                                |       :       :       |       |
1017	                                |       :       :       |       |
1018	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1019	  barrier causes all effects      \     +-------+       |       |
1020	  prior to the storage of B        ---->| A->1  |------>|       |
1021	  to be perceptible to CPU 2            +-------+       |       |
1022	                                        :       :       +-------+
1023
1024
1025To illustrate this more completely, consider what could happen if the code
1026contained a load of A either side of the read barrier:
1027
1028	CPU 1			CPU 2
1029	=======================	=======================
1030		{ A = 0, B = 9 }
1031	STORE A=1
1032	<write barrier>
1033	STORE B=2
1034				LOAD B
1035				LOAD A [first load of A]
1036				<read barrier>
1037				LOAD A [second load of A]
1038
1039Even though the two loads of A both occur after the load of B, they may both
1040come up with different values:
1041
1042	+-------+       :      :                :       :
1043	|       |       +------+                +-------+
1044	|       |------>| A=1  |------      --->| A->0  |
1045	|       |       +------+      \         +-------+
1046	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1047	|       |       +------+        |       +-------+
1048	|       |------>| B=2  |---     |       :       :
1049	|       |       +------+   \    |       :       :       +-------+
1050	+-------+       :      :    \   |       +-------+       |       |
1051	                             ---------->| B->2  |------>|       |
1052	                                |       +-------+       | CPU 2 |
1053	                                |       :       :       |       |
1054	                                |       :       :       |       |
1055	                                |       +-------+       |       |
1056	                                |       | A->0  |------>| 1st   |
1057	                                |       +-------+       |       |
1058	  At this point the read ---->   \  rrrrrrrrrrrrrrrrr   |       |
1059	  barrier causes all effects      \     +-------+       |       |
1060	  prior to the storage of B        ---->| A->1  |------>| 2nd   |
1061	  to be perceptible to CPU 2            +-------+       |       |
1062	                                        :       :       +-------+
1063
1064
1065But it may be that the update to A from CPU 1 becomes perceptible to CPU 2
1066before the read barrier completes anyway:
1067
1068	+-------+       :      :                :       :
1069	|       |       +------+                +-------+
1070	|       |------>| A=1  |------      --->| A->0  |
1071	|       |       +------+      \         +-------+
1072	| CPU 1 |   wwwwwwwwwwwwwwww   \    --->| B->9  |
1073	|       |       +------+        |       +-------+
1074	|       |------>| B=2  |---     |       :       :
1075	|       |       +------+   \    |       :       :       +-------+
1076	+-------+       :      :    \   |       +-------+       |       |
1077	                             ---------->| B->2  |------>|       |
1078	                                |       +-------+       | CPU 2 |
1079	                                |       :       :       |       |
1080	                                 \      :       :       |       |
1081	                                  \     +-------+       |       |
1082	                                   ---->| A->1  |------>| 1st   |
1083	                                        +-------+       |       |
1084	                                    rrrrrrrrrrrrrrrrr   |       |
1085	                                        +-------+       |       |
1086	                                        | A->1  |------>| 2nd   |
1087	                                        +-------+       |       |
1088	                                        :       :       +-------+
1089
1090
1091The guarantee is that the second load will always come up with A == 1 if the
1092load of B came up with B == 2.  No such guarantee exists for the first load of
1093A; that may come up with either A == 0 or A == 1.
1094
1095
1096READ MEMORY BARRIERS VS LOAD SPECULATION
1097----------------------------------------
1098
1099Many CPUs speculate with loads: that is they see that they will need to load an
1100item from memory, and they find a time where they're not using the bus for any
1101other loads, and so do the load in advance - even though they haven't actually
1102got to that point in the instruction execution flow yet.  This permits the
1103actual load instruction to potentially complete immediately because the CPU
1104already has the value to hand.
1105
1106It may turn out that the CPU didn't actually need the value - perhaps because a
1107branch circumvented the load - in which case it can discard the value or just
1108cache it for later use.
1109
1110Consider:
1111
1112	CPU 1			CPU 2
1113	=======================	=======================
1114				LOAD B
1115				DIVIDE		} Divide instructions generally
1116				DIVIDE		} take a long time to perform
1117				LOAD A
1118
1119Which might appear as this:
1120
1121	                                        :       :       +-------+
1122	                                        +-------+       |       |
1123	                                    --->| B->2  |------>|       |
1124	                                        +-------+       | CPU 2 |
1125	                                        :       :DIVIDE |       |
1126	                                        +-------+       |       |
1127	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1128	division speculates on the              +-------+   ~   |       |
1129	LOAD of A                               :       :   ~   |       |
1130	                                        :       :DIVIDE |       |
1131	                                        :       :   ~   |       |
1132	Once the divisions are complete -->     :       :   ~-->|       |
1133	the CPU can then perform the            :       :       |       |
1134	LOAD with immediate effect              :       :       +-------+
1135
1136
1137Placing a read barrier or a data dependency barrier just before the second
1138load:
1139
1140	CPU 1			CPU 2
1141	=======================	=======================
1142				LOAD B
1143				DIVIDE
1144				DIVIDE
1145				<read barrier>
1146				LOAD A
1147
1148will force any value speculatively obtained to be reconsidered to an extent
1149dependent on the type of barrier used.  If there was no change made to the
1150speculated memory location, then the speculated value will just be used:
1151
1152	                                        :       :       +-------+
1153	                                        +-------+       |       |
1154	                                    --->| B->2  |------>|       |
1155	                                        +-------+       | CPU 2 |
1156	                                        :       :DIVIDE |       |
1157	                                        +-------+       |       |
1158	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1159	division speculates on the              +-------+   ~   |       |
1160	LOAD of A                               :       :   ~   |       |
1161	                                        :       :DIVIDE |       |
1162	                                        :       :   ~   |       |
1163	                                        :       :   ~   |       |
1164	                                    rrrrrrrrrrrrrrrr~   |       |
1165	                                        :       :   ~   |       |
1166	                                        :       :   ~-->|       |
1167	                                        :       :       |       |
1168	                                        :       :       +-------+
1169
1170
1171but if there was an update or an invalidation from another CPU pending, then
1172the speculation will be cancelled and the value reloaded:
1173
1174	                                        :       :       +-------+
1175	                                        +-------+       |       |
1176	                                    --->| B->2  |------>|       |
1177	                                        +-------+       | CPU 2 |
1178	                                        :       :DIVIDE |       |
1179	                                        +-------+       |       |
1180	The CPU being busy doing a --->     --->| A->0  |~~~~   |       |
1181	division speculates on the              +-------+   ~   |       |
1182	LOAD of A                               :       :   ~   |       |
1183	                                        :       :DIVIDE |       |
1184	                                        :       :   ~   |       |
1185	                                        :       :   ~   |       |
1186	                                    rrrrrrrrrrrrrrrrr   |       |
1187	                                        +-------+       |       |
1188	The speculation is discarded --->   --->| A->1  |------>|       |
1189	and an updated value is                 +-------+       |       |
1190	retrieved                               :       :       +-------+
1191
1192
1193TRANSITIVITY
1194------------
1195
1196Transitivity is a deeply intuitive notion about ordering that is not
1197always provided by real computer systems.  The following example
1198demonstrates transitivity (also called "cumulativity"):
1199
1200	CPU 1			CPU 2			CPU 3
1201	=======================	=======================	=======================
1202		{ X = 0, Y = 0 }
1203	STORE X=1		LOAD X			STORE Y=1
1204				<general barrier>	<general barrier>
1205				LOAD Y			LOAD X
1206
1207Suppose that CPU 2's load from X returns 1 and its load from Y returns 0.
1208This indicates that CPU 2's load from X in some sense follows CPU 1's
1209store to X and that CPU 2's load from Y in some sense preceded CPU 3's
1210store to Y.  The question is then "Can CPU 3's load from X return 0?"
1211
1212Because CPU 2's load from X in some sense came after CPU 1's store, it
1213is natural to expect that CPU 3's load from X must therefore return 1.
1214This expectation is an example of transitivity: if a load executing on
1215CPU A follows a load from the same variable executing on CPU B, then
1216CPU A's load must either return the same value that CPU B's load did,
1217or must return some later value.
1218
1219In the Linux kernel, use of general memory barriers guarantees
1220transitivity.  Therefore, in the above example, if CPU 2's load from X
1221returns 1 and its load from Y returns 0, then CPU 3's load from X must
1222also return 1.
1223
1224However, transitivity is -not- guaranteed for read or write barriers.
1225For example, suppose that CPU 2's general barrier in the above example
1226is changed to a read barrier as shown below:
1227
1228	CPU 1			CPU 2			CPU 3
1229	=======================	=======================	=======================
1230		{ X = 0, Y = 0 }
1231	STORE X=1		LOAD X			STORE Y=1
1232				<read barrier>		<general barrier>
1233				LOAD Y			LOAD X
1234
1235This substitution destroys transitivity: in this example, it is perfectly
1236legal for CPU 2's load from X to return 1, its load from Y to return 0,
1237and CPU 3's load from X to return 0.
1238
1239The key point is that although CPU 2's read barrier orders its pair
1240of loads, it does not guarantee to order CPU 1's store.  Therefore, if
1241this example runs on a system where CPUs 1 and 2 share a store buffer
1242or a level of cache, CPU 2 might have early access to CPU 1's writes.
1243General barriers are therefore required to ensure that all CPUs agree
1244on the combined order of CPU 1's and CPU 2's accesses.
1245
1246To reiterate, if your code requires transitivity, use general barriers
1247throughout.
1248
1249
1250========================
1251EXPLICIT KERNEL BARRIERS
1252========================
1253
1254The Linux kernel has a variety of different barriers that act at different
1255levels:
1256
1257  (*) Compiler barrier.
1258
1259  (*) CPU memory barriers.
1260
1261  (*) MMIO write barrier.
1262
1263
1264COMPILER BARRIER
1265----------------
1266
1267The Linux kernel has an explicit compiler barrier function that prevents the
1268compiler from moving the memory accesses either side of it to the other side:
1269
1270	barrier();
1271
1272This is a general barrier -- there are no read-read or write-write variants
1273of barrier().  However, ACCESS_ONCE() can be thought of as a weak form
1274for barrier() that affects only the specific accesses flagged by the
1275ACCESS_ONCE().
1276
1277The barrier() function has the following effects:
1278
1279 (*) Prevents the compiler from reordering accesses following the
1280     barrier() to precede any accesses preceding the barrier().
1281     One example use for this property is to ease communication between
1282     interrupt-handler code and the code that was interrupted.
1283
1284 (*) Within a loop, forces the compiler to load the variables used
1285     in that loop's conditional on each pass through that loop.
1286
1287The ACCESS_ONCE() function can prevent any number of optimizations that,
1288while perfectly safe in single-threaded code, can be fatal in concurrent
1289code.  Here are some examples of these sorts of optimizations:
1290
1291 (*) The compiler is within its rights to reorder loads and stores
1292     to the same variable, and in some cases, the CPU is within its
1293     rights to reorder loads to the same variable.  This means that
1294     the following code:
1295
1296	a[0] = x;
1297	a[1] = x;
1298
1299     Might result in an older value of x stored in a[1] than in a[0].
1300     Prevent both the compiler and the CPU from doing this as follows:
1301
1302	a[0] = ACCESS_ONCE(x);
1303	a[1] = ACCESS_ONCE(x);
1304
1305     In short, ACCESS_ONCE() provides cache coherence for accesses from
1306     multiple CPUs to a single variable.
1307
1308 (*) The compiler is within its rights to merge successive loads from
1309     the same variable.  Such merging can cause the compiler to "optimize"
1310     the following code:
1311
1312	while (tmp = a)
1313		do_something_with(tmp);
1314
1315     into the following code, which, although in some sense legitimate
1316     for single-threaded code, is almost certainly not what the developer
1317     intended:
1318
1319	if (tmp = a)
1320		for (;;)
1321			do_something_with(tmp);
1322
1323     Use ACCESS_ONCE() to prevent the compiler from doing this to you:
1324
1325	while (tmp = ACCESS_ONCE(a))
1326		do_something_with(tmp);
1327
1328 (*) The compiler is within its rights to reload a variable, for example,
1329     in cases where high register pressure prevents the compiler from
1330     keeping all data of interest in registers.  The compiler might
1331     therefore optimize the variable 'tmp' out of our previous example:
1332
1333	while (tmp = a)
1334		do_something_with(tmp);
1335
1336     This could result in the following code, which is perfectly safe in
1337     single-threaded code, but can be fatal in concurrent code:
1338
1339	while (a)
1340		do_something_with(a);
1341
1342     For example, the optimized version of this code could result in
1343     passing a zero to do_something_with() in the case where the variable
1344     a was modified by some other CPU between the "while" statement and
1345     the call to do_something_with().
1346
1347     Again, use ACCESS_ONCE() to prevent the compiler from doing this:
1348
1349	while (tmp = ACCESS_ONCE(a))
1350		do_something_with(tmp);
1351
1352     Note that if the compiler runs short of registers, it might save
1353     tmp onto the stack.  The overhead of this saving and later restoring
1354     is why compilers reload variables.  Doing so is perfectly safe for
1355     single-threaded code, so you need to tell the compiler about cases
1356     where it is not safe.
1357
1358 (*) The compiler is within its rights to omit a load entirely if it knows
1359     what the value will be.  For example, if the compiler can prove that
1360     the value of variable 'a' is always zero, it can optimize this code:
1361
1362	while (tmp = a)
1363		do_something_with(tmp);
1364
1365     Into this:
1366
1367	do { } while (0);
1368
1369     This transformation is a win for single-threaded code because it gets
1370     rid of a load and a branch.  The problem is that the compiler will
1371     carry out its proof assuming that the current CPU is the only one
1372     updating variable 'a'.  If variable 'a' is shared, then the compiler's
1373     proof will be erroneous.  Use ACCESS_ONCE() to tell the compiler
1374     that it doesn't know as much as it thinks it does:
1375
1376	while (tmp = ACCESS_ONCE(a))
1377		do_something_with(tmp);
1378
1379     But please note that the compiler is also closely watching what you
1380     do with the value after the ACCESS_ONCE().  For example, suppose you
1381     do the following and MAX is a preprocessor macro with the value 1:
1382
1383	while ((tmp = ACCESS_ONCE(a)) % MAX)
1384		do_something_with(tmp);
1385
1386     Then the compiler knows that the result of the "%" operator applied
1387     to MAX will always be zero, again allowing the compiler to optimize
1388     the code into near-nonexistence.  (It will still load from the
1389     variable 'a'.)
1390
1391 (*) Similarly, the compiler is within its rights to omit a store entirely
1392     if it knows that the variable already has the value being stored.
1393     Again, the compiler assumes that the current CPU is the only one
1394     storing into the variable, which can cause the compiler to do the
1395     wrong thing for shared variables.  For example, suppose you have
1396     the following:
1397
1398	a = 0;
1399	/* Code that does not store to variable a. */
1400	a = 0;
1401
1402     The compiler sees that the value of variable 'a' is already zero, so
1403     it might well omit the second store.  This would come as a fatal
1404     surprise if some other CPU might have stored to variable 'a' in the
1405     meantime.
1406
1407     Use ACCESS_ONCE() to prevent the compiler from making this sort of
1408     wrong guess:
1409
1410	ACCESS_ONCE(a) = 0;
1411	/* Code that does not store to variable a. */
1412	ACCESS_ONCE(a) = 0;
1413
1414 (*) The compiler is within its rights to reorder memory accesses unless
1415     you tell it not to.  For example, consider the following interaction
1416     between process-level code and an interrupt handler:
1417
1418	void process_level(void)
1419	{
1420		msg = get_message();
1421		flag = true;
1422	}
1423
1424	void interrupt_handler(void)
1425	{
1426		if (flag)
1427			process_message(msg);
1428	}
1429
1430     There is nothing to prevent the compiler from transforming
1431     process_level() to the following, in fact, this might well be a
1432     win for single-threaded code:
1433
1434	void process_level(void)
1435	{
1436		flag = true;
1437		msg = get_message();
1438	}
1439
1440     If the interrupt occurs between these two statement, then
1441     interrupt_handler() might be passed a garbled msg.  Use ACCESS_ONCE()
1442     to prevent this as follows:
1443
1444	void process_level(void)
1445	{
1446		ACCESS_ONCE(msg) = get_message();
1447		ACCESS_ONCE(flag) = true;
1448	}
1449
1450	void interrupt_handler(void)
1451	{
1452		if (ACCESS_ONCE(flag))
1453			process_message(ACCESS_ONCE(msg));
1454	}
1455
1456     Note that the ACCESS_ONCE() wrappers in interrupt_handler()
1457     are needed if this interrupt handler can itself be interrupted
1458     by something that also accesses 'flag' and 'msg', for example,
1459     a nested interrupt or an NMI.  Otherwise, ACCESS_ONCE() is not
1460     needed in interrupt_handler() other than for documentation purposes.
1461     (Note also that nested interrupts do not typically occur in modern
1462     Linux kernels, in fact, if an interrupt handler returns with
1463     interrupts enabled, you will get a WARN_ONCE() splat.)
1464
1465     You should assume that the compiler can move ACCESS_ONCE() past
1466     code not containing ACCESS_ONCE(), barrier(), or similar primitives.
1467
1468     This effect could also be achieved using barrier(), but ACCESS_ONCE()
1469     is more selective:  With ACCESS_ONCE(), the compiler need only forget
1470     the contents of the indicated memory locations, while with barrier()
1471     the compiler must discard the value of all memory locations that
1472     it has currented cached in any machine registers.  Of course,
1473     the compiler must also respect the order in which the ACCESS_ONCE()s
1474     occur, though the CPU of course need not do so.
1475
1476 (*) The compiler is within its rights to invent stores to a variable,
1477     as in the following example:
1478
1479	if (a)
1480		b = a;
1481	else
1482		b = 42;
1483
1484     The compiler might save a branch by optimizing this as follows:
1485
1486	b = 42;
1487	if (a)
1488		b = a;
1489
1490     In single-threaded code, this is not only safe, but also saves
1491     a branch.  Unfortunately, in concurrent code, this optimization
1492     could cause some other CPU to see a spurious value of 42 -- even
1493     if variable 'a' was never zero -- when loading variable 'b'.
1494     Use ACCESS_ONCE() to prevent this as follows:
1495
1496	if (a)
1497		ACCESS_ONCE(b) = a;
1498	else
1499		ACCESS_ONCE(b) = 42;
1500
1501     The compiler can also invent loads.  These are usually less
1502     damaging, but they can result in cache-line bouncing and thus in
1503     poor performance and scalability.  Use ACCESS_ONCE() to prevent
1504     invented loads.
1505
1506 (*) For aligned memory locations whose size allows them to be accessed
1507     with a single memory-reference instruction, prevents "load tearing"
1508     and "store tearing," in which a single large access is replaced by
1509     multiple smaller accesses.  For example, given an architecture having
1510     16-bit store instructions with 7-bit immediate fields, the compiler
1511     might be tempted to use two 16-bit store-immediate instructions to
1512     implement the following 32-bit store:
1513
1514	p = 0x00010002;
1515
1516     Please note that GCC really does use this sort of optimization,
1517     which is not surprising given that it would likely take more
1518     than two instructions to build the constant and then store it.
1519     This optimization can therefore be a win in single-threaded code.
1520     In fact, a recent bug (since fixed) caused GCC to incorrectly use
1521     this optimization in a volatile store.  In the absence of such bugs,
1522     use of ACCESS_ONCE() prevents store tearing in the following example:
1523
1524	ACCESS_ONCE(p) = 0x00010002;
1525
1526     Use of packed structures can also result in load and store tearing,
1527     as in this example:
1528
1529	struct __attribute__((__packed__)) foo {
1530		short a;
1531		int b;
1532		short c;
1533	};
1534	struct foo foo1, foo2;
1535	...
1536
1537	foo2.a = foo1.a;
1538	foo2.b = foo1.b;
1539	foo2.c = foo1.c;
1540
1541     Because there are no ACCESS_ONCE() wrappers and no volatile markings,
1542     the compiler would be well within its rights to implement these three
1543     assignment statements as a pair of 32-bit loads followed by a pair
1544     of 32-bit stores.  This would result in load tearing on 'foo1.b'
1545     and store tearing on 'foo2.b'.  ACCESS_ONCE() again prevents tearing
1546     in this example:
1547
1548	foo2.a = foo1.a;
1549	ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
1550	foo2.c = foo1.c;
1551
1552All that aside, it is never necessary to use ACCESS_ONCE() on a variable
1553that has been marked volatile.  For example, because 'jiffies' is marked
1554volatile, it is never necessary to say ACCESS_ONCE(jiffies).  The reason
1555for this is that ACCESS_ONCE() is implemented as a volatile cast, which
1556has no effect when its argument is already marked volatile.
1557
1558Please note that these compiler barriers have no direct effect on the CPU,
1559which may then reorder things however it wishes.
1560
1561
1562CPU MEMORY BARRIERS
1563-------------------
1564
1565The Linux kernel has eight basic CPU memory barriers:
1566
1567	TYPE		MANDATORY		SMP CONDITIONAL
1568	===============	=======================	===========================
1569	GENERAL		mb()			smp_mb()
1570	WRITE		wmb()			smp_wmb()
1571	READ		rmb()			smp_rmb()
1572	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
1573
1574
1575All memory barriers except the data dependency barriers imply a compiler
1576barrier. Data dependencies do not impose any additional compiler ordering.
1577
1578Aside: In the case of data dependencies, the compiler would be expected to
1579issue the loads in the correct order (eg. `a[b]` would have to load the value
1580of b before loading a[b]), however there is no guarantee in the C specification
1581that the compiler may not speculate the value of b (eg. is equal to 1) and load
1582a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ). There is also the
1583problem of a compiler reloading b after having loaded a[b], thus having a newer
1584copy of b than a[b]. A consensus has not yet been reached about these problems,
1585however the ACCESS_ONCE macro is a good place to start looking.
1586
1587SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
1588systems because it is assumed that a CPU will appear to be self-consistent,
1589and will order overlapping accesses correctly with respect to itself.
1590
1591[!] Note that SMP memory barriers _must_ be used to control the ordering of
1592references to shared memory on SMP systems, though the use of locking instead
1593is sufficient.
1594
1595Mandatory barriers should not be used to control SMP effects, since mandatory
1596barriers unnecessarily impose overhead on UP systems. They may, however, be
1597used to control MMIO effects on accesses through relaxed memory I/O windows.
1598These are required even on non-SMP systems as they affect the order in which
1599memory operations appear to a device by prohibiting both the compiler and the
1600CPU from reordering them.
1601
1602
1603There are some more advanced barrier functions:
1604
1605 (*) set_mb(var, value)
1606
1607     This assigns the value to the variable and then inserts a full memory
1608     barrier after it, depending on the function.  It isn't guaranteed to
1609     insert anything more than a compiler barrier in a UP compilation.
1610
1611
1612 (*) smp_mb__before_atomic();
1613 (*) smp_mb__after_atomic();
1614
1615     These are for use with atomic (such as add, subtract, increment and
1616     decrement) functions that don't return a value, especially when used for
1617     reference counting.  These functions do not imply memory barriers.
1618
1619     These are also used for atomic bitop functions that do not return a
1620     value (such as set_bit and clear_bit).
1621
1622     As an example, consider a piece of code that marks an object as being dead
1623     and then decrements the object's reference count:
1624
1625	obj->dead = 1;
1626	smp_mb__before_atomic();
1627	atomic_dec(&obj->ref_count);
1628
1629     This makes sure that the death mark on the object is perceived to be set
1630     *before* the reference counter is decremented.
1631
1632     See Documentation/atomic_ops.txt for more information.  See the "Atomic
1633     operations" subsection for information on where to use these.
1634
1635
1636 (*) dma_wmb();
1637 (*) dma_rmb();
1638
1639     These are for use with consistent memory to guarantee the ordering
1640     of writes or reads of shared memory accessible to both the CPU and a
1641     DMA capable device.
1642
1643     For example, consider a device driver that shares memory with a device
1644     and uses a descriptor status value to indicate if the descriptor belongs
1645     to the device or the CPU, and a doorbell to notify it when new
1646     descriptors are available:
1647
1648	if (desc->status != DEVICE_OWN) {
1649		/* do not read data until we own descriptor */
1650		dma_rmb();
1651
1652		/* read/modify data */
1653		read_data = desc->data;
1654		desc->data = write_data;
1655
1656		/* flush modifications before status update */
1657		dma_wmb();
1658
1659		/* assign ownership */
1660		desc->status = DEVICE_OWN;
1661
1662		/* force memory to sync before notifying device via MMIO */
1663		wmb();
1664
1665		/* notify device of new descriptors */
1666		writel(DESC_NOTIFY, doorbell);
1667	}
1668
1669     The dma_rmb() allows us guarantee the device has released ownership
1670     before we read the data from the descriptor, and he dma_wmb() allows
1671     us to guarantee the data is written to the descriptor before the device
1672     can see it now has ownership.  The wmb() is needed to guarantee that the
1673     cache coherent memory writes have completed before attempting a write to
1674     the cache incoherent MMIO region.
1675
1676     See Documentation/DMA-API.txt for more information on consistent memory.
1677
1678MMIO WRITE BARRIER
1679------------------
1680
1681The Linux kernel also has a special barrier for use with memory-mapped I/O
1682writes:
1683
1684	mmiowb();
1685
1686This is a variation on the mandatory write barrier that causes writes to weakly
1687ordered I/O regions to be partially ordered.  Its effects may go beyond the
1688CPU->Hardware interface and actually affect the hardware at some level.
1689
1690See the subsection "Locks vs I/O accesses" for more information.
1691
1692
1693===============================
1694IMPLICIT KERNEL MEMORY BARRIERS
1695===============================
1696
1697Some of the other functions in the linux kernel imply memory barriers, amongst
1698which are locking and scheduling functions.
1699
1700This specification is a _minimum_ guarantee; any particular architecture may
1701provide more substantial guarantees, but these may not be relied upon outside
1702of arch specific code.
1703
1704
1705ACQUIRING FUNCTIONS
1706-------------------
1707
1708The Linux kernel has a number of locking constructs:
1709
1710 (*) spin locks
1711 (*) R/W spin locks
1712 (*) mutexes
1713 (*) semaphores
1714 (*) R/W semaphores
1715 (*) RCU
1716
1717In all cases there are variants on "ACQUIRE" operations and "RELEASE" operations
1718for each construct.  These operations all imply certain barriers:
1719
1720 (1) ACQUIRE operation implication:
1721
1722     Memory operations issued after the ACQUIRE will be completed after the
1723     ACQUIRE operation has completed.
1724
1725     Memory operations issued before the ACQUIRE may be completed after
1726     the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
1727     combined with a following ACQUIRE, orders prior loads against
1728     subsequent loads and stores and also orders prior stores against
1729     subsequent stores.  Note that this is weaker than smp_mb()!  The
1730     smp_mb__before_spinlock() primitive is free on many architectures.
1731
1732 (2) RELEASE operation implication:
1733
1734     Memory operations issued before the RELEASE will be completed before the
1735     RELEASE operation has completed.
1736
1737     Memory operations issued after the RELEASE may be completed before the
1738     RELEASE operation has completed.
1739
1740 (3) ACQUIRE vs ACQUIRE implication:
1741
1742     All ACQUIRE operations issued before another ACQUIRE operation will be
1743     completed before that ACQUIRE operation.
1744
1745 (4) ACQUIRE vs RELEASE implication:
1746
1747     All ACQUIRE operations issued before a RELEASE operation will be
1748     completed before the RELEASE operation.
1749
1750 (5) Failed conditional ACQUIRE implication:
1751
1752     Certain locking variants of the ACQUIRE operation may fail, either due to
1753     being unable to get the lock immediately, or due to receiving an unblocked
1754     signal whilst asleep waiting for the lock to become available.  Failed
1755     locks do not imply any sort of barrier.
1756
1757[!] Note: one of the consequences of lock ACQUIREs and RELEASEs being only
1758one-way barriers is that the effects of instructions outside of a critical
1759section may seep into the inside of the critical section.
1760
1761An ACQUIRE followed by a RELEASE may not be assumed to be full memory barrier
1762because it is possible for an access preceding the ACQUIRE to happen after the
1763ACQUIRE, and an access following the RELEASE to happen before the RELEASE, and
1764the two accesses can themselves then cross:
1765
1766	*A = a;
1767	ACQUIRE M
1768	RELEASE M
1769	*B = b;
1770
1771may occur as:
1772
1773	ACQUIRE M, STORE *B, STORE *A, RELEASE M
1774
1775When the ACQUIRE and RELEASE are a lock acquisition and release,
1776respectively, this same reordering can occur if the lock's ACQUIRE and
1777RELEASE are to the same lock variable, but only from the perspective of
1778another CPU not holding that lock.  In short, a ACQUIRE followed by an
1779RELEASE may -not- be assumed to be a full memory barrier.
1780
1781Similarly, the reverse case of a RELEASE followed by an ACQUIRE does not
1782imply a full memory barrier.  If it is necessary for a RELEASE-ACQUIRE
1783pair to produce a full barrier, the ACQUIRE can be followed by an
1784smp_mb__after_unlock_lock() invocation.  This will produce a full barrier
1785if either (a) the RELEASE and the ACQUIRE are executed by the same
1786CPU or task, or (b) the RELEASE and ACQUIRE act on the same variable.
1787The smp_mb__after_unlock_lock() primitive is free on many architectures.
1788Without smp_mb__after_unlock_lock(), the CPU's execution of the critical
1789sections corresponding to the RELEASE and the ACQUIRE can cross, so that:
1790
1791	*A = a;
1792	RELEASE M
1793	ACQUIRE N
1794	*B = b;
1795
1796could occur as:
1797
1798	ACQUIRE N, STORE *B, STORE *A, RELEASE M
1799
1800It might appear that this reordering could introduce a deadlock.
1801However, this cannot happen because if such a deadlock threatened,
1802the RELEASE would simply complete, thereby avoiding the deadlock.
1803
1804	Why does this work?
1805
1806	One key point is that we are only talking about the CPU doing
1807	the reordering, not the compiler.  If the compiler (or, for
1808	that matter, the developer) switched the operations, deadlock
1809	-could- occur.
1810
1811	But suppose the CPU reordered the operations.  In this case,
1812	the unlock precedes the lock in the assembly code.  The CPU
1813	simply elected to try executing the later lock operation first.
1814	If there is a deadlock, this lock operation will simply spin (or
1815	try to sleep, but more on that later).	The CPU will eventually
1816	execute the unlock operation (which preceded the lock operation
1817	in the assembly code), which will unravel the potential deadlock,
1818	allowing the lock operation to succeed.
1819
1820	But what if the lock is a sleeplock?  In that case, the code will
1821	try to enter the scheduler, where it will eventually encounter
1822	a memory barrier, which will force the earlier unlock operation
1823	to complete, again unraveling the deadlock.  There might be
1824	a sleep-unlock race, but the locking primitive needs to resolve
1825	such races properly in any case.
1826
1827With smp_mb__after_unlock_lock(), the two critical sections cannot overlap.
1828For example, with the following code, the store to *A will always be
1829seen by other CPUs before the store to *B:
1830
1831	*A = a;
1832	RELEASE M
1833	ACQUIRE N
1834	smp_mb__after_unlock_lock();
1835	*B = b;
1836
1837The operations will always occur in one of the following orders:
1838
1839	STORE *A, RELEASE, ACQUIRE, smp_mb__after_unlock_lock(), STORE *B
1840	STORE *A, ACQUIRE, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1841	ACQUIRE, STORE *A, RELEASE, smp_mb__after_unlock_lock(), STORE *B
1842
1843If the RELEASE and ACQUIRE were instead both operating on the same lock
1844variable, only the first of these alternatives can occur.  In addition,
1845the more strongly ordered systems may rule out some of the above orders.
1846But in any case, as noted earlier, the smp_mb__after_unlock_lock()
1847ensures that the store to *A will always be seen as happening before
1848the store to *B.
1849
1850Locks and semaphores may not provide any guarantee of ordering on UP compiled
1851systems, and so cannot be counted on in such a situation to actually achieve
1852anything at all - especially with respect to I/O accesses - unless combined
1853with interrupt disabling operations.
1854
1855See also the section on "Inter-CPU locking barrier effects".
1856
1857
1858As an example, consider the following:
1859
1860	*A = a;
1861	*B = b;
1862	ACQUIRE
1863	*C = c;
1864	*D = d;
1865	RELEASE
1866	*E = e;
1867	*F = f;
1868
1869The following sequence of events is acceptable:
1870
1871	ACQUIRE, {*F,*A}, *E, {*C,*D}, *B, RELEASE
1872
1873	[+] Note that {*F,*A} indicates a combined access.
1874
1875But none of the following are:
1876
1877	{*F,*A}, *B,	ACQUIRE, *C, *D,	RELEASE, *E
1878	*A, *B, *C,	ACQUIRE, *D,		RELEASE, *E, *F
1879	*A, *B,		ACQUIRE, *C,		RELEASE, *D, *E, *F
1880	*B,		ACQUIRE, *C, *D,	RELEASE, {*F,*A}, *E
1881
1882
1883
1884INTERRUPT DISABLING FUNCTIONS
1885-----------------------------
1886
1887Functions that disable interrupts (ACQUIRE equivalent) and enable interrupts
1888(RELEASE equivalent) will act as compiler barriers only.  So if memory or I/O
1889barriers are required in such a situation, they must be provided from some
1890other means.
1891
1892
1893SLEEP AND WAKE-UP FUNCTIONS
1894---------------------------
1895
1896Sleeping and waking on an event flagged in global data can be viewed as an
1897interaction between two pieces of data: the task state of the task waiting for
1898the event and the global data used to indicate the event.  To make sure that
1899these appear to happen in the right order, the primitives to begin the process
1900of going to sleep, and the primitives to initiate a wake up imply certain
1901barriers.
1902
1903Firstly, the sleeper normally follows something like this sequence of events:
1904
1905	for (;;) {
1906		set_current_state(TASK_UNINTERRUPTIBLE);
1907		if (event_indicated)
1908			break;
1909		schedule();
1910	}
1911
1912A general memory barrier is interpolated automatically by set_current_state()
1913after it has altered the task state:
1914
1915	CPU 1
1916	===============================
1917	set_current_state();
1918	  set_mb();
1919	    STORE current->state
1920	    <general barrier>
1921	LOAD event_indicated
1922
1923set_current_state() may be wrapped by:
1924
1925	prepare_to_wait();
1926	prepare_to_wait_exclusive();
1927
1928which therefore also imply a general memory barrier after setting the state.
1929The whole sequence above is available in various canned forms, all of which
1930interpolate the memory barrier in the right place:
1931
1932	wait_event();
1933	wait_event_interruptible();
1934	wait_event_interruptible_exclusive();
1935	wait_event_interruptible_timeout();
1936	wait_event_killable();
1937	wait_event_timeout();
1938	wait_on_bit();
1939	wait_on_bit_lock();
1940
1941
1942Secondly, code that performs a wake up normally follows something like this:
1943
1944	event_indicated = 1;
1945	wake_up(&event_wait_queue);
1946
1947or:
1948
1949	event_indicated = 1;
1950	wake_up_process(event_daemon);
1951
1952A write memory barrier is implied by wake_up() and co. if and only if they wake
1953something up.  The barrier occurs before the task state is cleared, and so sits
1954between the STORE to indicate the event and the STORE to set TASK_RUNNING:
1955
1956	CPU 1				CPU 2
1957	===============================	===============================
1958	set_current_state();		STORE event_indicated
1959	  set_mb();			wake_up();
1960	    STORE current->state	  <write barrier>
1961	    <general barrier>		  STORE current->state
1962	LOAD event_indicated
1963
1964To repeat, this write memory barrier is present if and only if something
1965is actually awakened.  To see this, consider the following sequence of
1966events, where X and Y are both initially zero:
1967
1968	CPU 1				CPU 2
1969	===============================	===============================
1970	X = 1;				STORE event_indicated
1971	smp_mb();			wake_up();
1972	Y = 1;				wait_event(wq, Y == 1);
1973	wake_up();			  load from Y sees 1, no memory barrier
1974					load from X might see 0
1975
1976In contrast, if a wakeup does occur, CPU 2's load from X would be guaranteed
1977to see 1.
1978
1979The available waker functions include:
1980
1981	complete();
1982	wake_up();
1983	wake_up_all();
1984	wake_up_bit();
1985	wake_up_interruptible();
1986	wake_up_interruptible_all();
1987	wake_up_interruptible_nr();
1988	wake_up_interruptible_poll();
1989	wake_up_interruptible_sync();
1990	wake_up_interruptible_sync_poll();
1991	wake_up_locked();
1992	wake_up_locked_poll();
1993	wake_up_nr();
1994	wake_up_poll();
1995	wake_up_process();
1996
1997
1998[!] Note that the memory barriers implied by the sleeper and the waker do _not_
1999order multiple stores before the wake-up with respect to loads of those stored
2000values after the sleeper has called set_current_state().  For instance, if the
2001sleeper does:
2002
2003	set_current_state(TASK_INTERRUPTIBLE);
2004	if (event_indicated)
2005		break;
2006	__set_current_state(TASK_RUNNING);
2007	do_something(my_data);
2008
2009and the waker does:
2010
2011	my_data = value;
2012	event_indicated = 1;
2013	wake_up(&event_wait_queue);
2014
2015there's no guarantee that the change to event_indicated will be perceived by
2016the sleeper as coming after the change to my_data.  In such a circumstance, the
2017code on both sides must interpolate its own memory barriers between the
2018separate data accesses.  Thus the above sleeper ought to do:
2019
2020	set_current_state(TASK_INTERRUPTIBLE);
2021	if (event_indicated) {
2022		smp_rmb();
2023		do_something(my_data);
2024	}
2025
2026and the waker should do:
2027
2028	my_data = value;
2029	smp_wmb();
2030	event_indicated = 1;
2031	wake_up(&event_wait_queue);
2032
2033
2034MISCELLANEOUS FUNCTIONS
2035-----------------------
2036
2037Other functions that imply barriers:
2038
2039 (*) schedule() and similar imply full memory barriers.
2040
2041
2042===================================
2043INTER-CPU ACQUIRING BARRIER EFFECTS
2044===================================
2045
2046On SMP systems locking primitives give a more substantial form of barrier: one
2047that does affect memory access ordering on other CPUs, within the context of
2048conflict on any particular lock.
2049
2050
2051ACQUIRES VS MEMORY ACCESSES
2052---------------------------
2053
2054Consider the following: the system has a pair of spinlocks (M) and (Q), and
2055three CPUs; then should the following sequence of events occur:
2056
2057	CPU 1				CPU 2
2058	===============================	===============================
2059	ACCESS_ONCE(*A) = a;		ACCESS_ONCE(*E) = e;
2060	ACQUIRE M			ACQUIRE Q
2061	ACCESS_ONCE(*B) = b;		ACCESS_ONCE(*F) = f;
2062	ACCESS_ONCE(*C) = c;		ACCESS_ONCE(*G) = g;
2063	RELEASE M			RELEASE Q
2064	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*H) = h;
2065
2066Then there is no guarantee as to what order CPU 3 will see the accesses to *A
2067through *H occur in, other than the constraints imposed by the separate locks
2068on the separate CPUs. It might, for example, see:
2069
2070	*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
2071
2072But it won't see any of:
2073
2074	*B, *C or *D preceding ACQUIRE M
2075	*A, *B or *C following RELEASE M
2076	*F, *G or *H preceding ACQUIRE Q
2077	*E, *F or *G following RELEASE Q
2078
2079
2080However, if the following occurs:
2081
2082	CPU 1				CPU 2
2083	===============================	===============================
2084	ACCESS_ONCE(*A) = a;
2085	ACQUIRE M		     [1]
2086	ACCESS_ONCE(*B) = b;
2087	ACCESS_ONCE(*C) = c;
2088	RELEASE M	     [1]
2089	ACCESS_ONCE(*D) = d;		ACCESS_ONCE(*E) = e;
2090					ACQUIRE M		     [2]
2091					smp_mb__after_unlock_lock();
2092					ACCESS_ONCE(*F) = f;
2093					ACCESS_ONCE(*G) = g;
2094					RELEASE M	     [2]
2095					ACCESS_ONCE(*H) = h;
2096
2097CPU 3 might see:
2098
2099	*E, ACQUIRE M [1], *C, *B, *A, RELEASE M [1],
2100		ACQUIRE M [2], *H, *F, *G, RELEASE M [2], *D
2101
2102But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
2103
2104	*B, *C, *D, *F, *G or *H preceding ACQUIRE M [1]
2105	*A, *B or *C following RELEASE M [1]
2106	*F, *G or *H preceding ACQUIRE M [2]
2107	*A, *B, *C, *E, *F or *G following RELEASE M [2]
2108
2109Note that the smp_mb__after_unlock_lock() is critically important
2110here: Without it CPU 3 might see some of the above orderings.
2111Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
2112to be seen in order unless CPU 3 holds lock M.
2113
2114
2115ACQUIRES VS I/O ACCESSES
2116------------------------
2117
2118Under certain circumstances (especially involving NUMA), I/O accesses within
2119two spinlocked sections on two different CPUs may be seen as interleaved by the
2120PCI bridge, because the PCI bridge does not necessarily participate in the
2121cache-coherence protocol, and is therefore incapable of issuing the required
2122read memory barriers.
2123
2124For example:
2125
2126	CPU 1				CPU 2
2127	===============================	===============================
2128	spin_lock(Q)
2129	writel(0, ADDR)
2130	writel(1, DATA);
2131	spin_unlock(Q);
2132					spin_lock(Q);
2133					writel(4, ADDR);
2134					writel(5, DATA);
2135					spin_unlock(Q);
2136
2137may be seen by the PCI bridge as follows:
2138
2139	STORE *ADDR = 0, STORE *ADDR = 4, STORE *DATA = 1, STORE *DATA = 5
2140
2141which would probably cause the hardware to malfunction.
2142
2143
2144What is necessary here is to intervene with an mmiowb() before dropping the
2145spinlock, for example:
2146
2147	CPU 1				CPU 2
2148	===============================	===============================
2149	spin_lock(Q)
2150	writel(0, ADDR)
2151	writel(1, DATA);
2152	mmiowb();
2153	spin_unlock(Q);
2154					spin_lock(Q);
2155					writel(4, ADDR);
2156					writel(5, DATA);
2157					mmiowb();
2158					spin_unlock(Q);
2159
2160this will ensure that the two stores issued on CPU 1 appear at the PCI bridge
2161before either of the stores issued on CPU 2.
2162
2163
2164Furthermore, following a store by a load from the same device obviates the need
2165for the mmiowb(), because the load forces the store to complete before the load
2166is performed:
2167
2168	CPU 1				CPU 2
2169	===============================	===============================
2170	spin_lock(Q)
2171	writel(0, ADDR)
2172	a = readl(DATA);
2173	spin_unlock(Q);
2174					spin_lock(Q);
2175					writel(4, ADDR);
2176					b = readl(DATA);
2177					spin_unlock(Q);
2178
2179
2180See Documentation/DocBook/deviceiobook.tmpl for more information.
2181
2182
2183=================================
2184WHERE ARE MEMORY BARRIERS NEEDED?
2185=================================
2186
2187Under normal operation, memory operation reordering is generally not going to
2188be a problem as a single-threaded linear piece of code will still appear to
2189work correctly, even if it's in an SMP kernel.  There are, however, four
2190circumstances in which reordering definitely _could_ be a problem:
2191
2192 (*) Interprocessor interaction.
2193
2194 (*) Atomic operations.
2195
2196 (*) Accessing devices.
2197
2198 (*) Interrupts.
2199
2200
2201INTERPROCESSOR INTERACTION
2202--------------------------
2203
2204When there's a system with more than one processor, more than one CPU in the
2205system may be working on the same data set at the same time.  This can cause
2206synchronisation problems, and the usual way of dealing with them is to use
2207locks.  Locks, however, are quite expensive, and so it may be preferable to
2208operate without the use of a lock if at all possible.  In such a case
2209operations that affect both CPUs may have to be carefully ordered to prevent
2210a malfunction.
2211
2212Consider, for example, the R/W semaphore slow path.  Here a waiting process is
2213queued on the semaphore, by virtue of it having a piece of its stack linked to
2214the semaphore's list of waiting processes:
2215
2216	struct rw_semaphore {
2217		...
2218		spinlock_t lock;
2219		struct list_head waiters;
2220	};
2221
2222	struct rwsem_waiter {
2223		struct list_head list;
2224		struct task_struct *task;
2225	};
2226
2227To wake up a particular waiter, the up_read() or up_write() functions have to:
2228
2229 (1) read the next pointer from this waiter's record to know as to where the
2230     next waiter record is;
2231
2232 (2) read the pointer to the waiter's task structure;
2233
2234 (3) clear the task pointer to tell the waiter it has been given the semaphore;
2235
2236 (4) call wake_up_process() on the task; and
2237
2238 (5) release the reference held on the waiter's task struct.
2239
2240In other words, it has to perform this sequence of events:
2241
2242	LOAD waiter->list.next;
2243	LOAD waiter->task;
2244	STORE waiter->task;
2245	CALL wakeup
2246	RELEASE task
2247
2248and if any of these steps occur out of order, then the whole thing may
2249malfunction.
2250
2251Once it has queued itself and dropped the semaphore lock, the waiter does not
2252get the lock again; it instead just waits for its task pointer to be cleared
2253before proceeding.  Since the record is on the waiter's stack, this means that
2254if the task pointer is cleared _before_ the next pointer in the list is read,
2255another CPU might start processing the waiter and might clobber the waiter's
2256stack before the up*() function has a chance to read the next pointer.
2257
2258Consider then what might happen to the above sequence of events:
2259
2260	CPU 1				CPU 2
2261	===============================	===============================
2262					down_xxx()
2263					Queue waiter
2264					Sleep
2265	up_yyy()
2266	LOAD waiter->task;
2267	STORE waiter->task;
2268					Woken up by other event
2269	<preempt>
2270					Resume processing
2271					down_xxx() returns
2272					call foo()
2273					foo() clobbers *waiter
2274	</preempt>
2275	LOAD waiter->list.next;
2276	--- OOPS ---
2277
2278This could be dealt with using the semaphore lock, but then the down_xxx()
2279function has to needlessly get the spinlock again after being woken up.
2280
2281The way to deal with this is to insert a general SMP memory barrier:
2282
2283	LOAD waiter->list.next;
2284	LOAD waiter->task;
2285	smp_mb();
2286	STORE waiter->task;
2287	CALL wakeup
2288	RELEASE task
2289
2290In this case, the barrier makes a guarantee that all memory accesses before the
2291barrier will appear to happen before all the memory accesses after the barrier
2292with respect to the other CPUs on the system.  It does _not_ guarantee that all
2293the memory accesses before the barrier will be complete by the time the barrier
2294instruction itself is complete.
2295
2296On a UP system - where this wouldn't be a problem - the smp_mb() is just a
2297compiler barrier, thus making sure the compiler emits the instructions in the
2298right order without actually intervening in the CPU.  Since there's only one
2299CPU, that CPU's dependency ordering logic will take care of everything else.
2300
2301
2302ATOMIC OPERATIONS
2303-----------------
2304
2305Whilst they are technically interprocessor interaction considerations, atomic
2306operations are noted specially as some of them imply full memory barriers and
2307some don't, but they're very heavily relied on as a group throughout the
2308kernel.
2309
2310Any atomic operation that modifies some state in memory and returns information
2311about the state (old or new) implies an SMP-conditional general memory barrier
2312(smp_mb()) on each side of the actual operation (with the exception of
2313explicit lock operations, described later).  These include:
2314
2315	xchg();
2316	cmpxchg();
2317	atomic_xchg();			atomic_long_xchg();
2318	atomic_cmpxchg();		atomic_long_cmpxchg();
2319	atomic_inc_return();		atomic_long_inc_return();
2320	atomic_dec_return();		atomic_long_dec_return();
2321	atomic_add_return();		atomic_long_add_return();
2322	atomic_sub_return();		atomic_long_sub_return();
2323	atomic_inc_and_test();		atomic_long_inc_and_test();
2324	atomic_dec_and_test();		atomic_long_dec_and_test();
2325	atomic_sub_and_test();		atomic_long_sub_and_test();
2326	atomic_add_negative();		atomic_long_add_negative();
2327	test_and_set_bit();
2328	test_and_clear_bit();
2329	test_and_change_bit();
2330
2331	/* when succeeds (returns 1) */
2332	atomic_add_unless();		atomic_long_add_unless();
2333
2334These are used for such things as implementing ACQUIRE-class and RELEASE-class
2335operations and adjusting reference counters towards object destruction, and as
2336such the implicit memory barrier effects are necessary.
2337
2338
2339The following operations are potential problems as they do _not_ imply memory
2340barriers, but might be used for implementing such things as RELEASE-class
2341operations:
2342
2343	atomic_set();
2344	set_bit();
2345	clear_bit();
2346	change_bit();
2347
2348With these the appropriate explicit memory barrier should be used if necessary
2349(smp_mb__before_atomic() for instance).
2350
2351
2352The following also do _not_ imply memory barriers, and so may require explicit
2353memory barriers under some circumstances (smp_mb__before_atomic() for
2354instance):
2355
2356	atomic_add();
2357	atomic_sub();
2358	atomic_inc();
2359	atomic_dec();
2360
2361If they're used for statistics generation, then they probably don't need memory
2362barriers, unless there's a coupling between statistical data.
2363
2364If they're used for reference counting on an object to control its lifetime,
2365they probably don't need memory barriers because either the reference count
2366will be adjusted inside a locked section, or the caller will already hold
2367sufficient references to make the lock, and thus a memory barrier unnecessary.
2368
2369If they're used for constructing a lock of some description, then they probably
2370do need memory barriers as a lock primitive generally has to do things in a
2371specific order.
2372
2373Basically, each usage case has to be carefully considered as to whether memory
2374barriers are needed or not.
2375
2376The following operations are special locking primitives:
2377
2378	test_and_set_bit_lock();
2379	clear_bit_unlock();
2380	__clear_bit_unlock();
2381
2382These implement ACQUIRE-class and RELEASE-class operations. These should be used in
2383preference to other operations when implementing locking primitives, because
2384their implementations can be optimised on many architectures.
2385
2386[!] Note that special memory barrier primitives are available for these
2387situations because on some CPUs the atomic instructions used imply full memory
2388barriers, and so barrier instructions are superfluous in conjunction with them,
2389and in such cases the special barrier primitives will be no-ops.
2390
2391See Documentation/atomic_ops.txt for more information.
2392
2393
2394ACCESSING DEVICES
2395-----------------
2396
2397Many devices can be memory mapped, and so appear to the CPU as if they're just
2398a set of memory locations.  To control such a device, the driver usually has to
2399make the right memory accesses in exactly the right order.
2400
2401However, having a clever CPU or a clever compiler creates a potential problem
2402in that the carefully sequenced accesses in the driver code won't reach the
2403device in the requisite order if the CPU or the compiler thinks it is more
2404efficient to reorder, combine or merge accesses - something that would cause
2405the device to malfunction.
2406
2407Inside of the Linux kernel, I/O should be done through the appropriate accessor
2408routines - such as inb() or writel() - which know how to make such accesses
2409appropriately sequential.  Whilst this, for the most part, renders the explicit
2410use of memory barriers unnecessary, there are a couple of situations where they
2411might be needed:
2412
2413 (1) On some systems, I/O stores are not strongly ordered across all CPUs, and
2414     so for _all_ general drivers locks should be used and mmiowb() must be
2415     issued prior to unlocking the critical section.
2416
2417 (2) If the accessor functions are used to refer to an I/O memory window with
2418     relaxed memory access properties, then _mandatory_ memory barriers are
2419     required to enforce ordering.
2420
2421See Documentation/DocBook/deviceiobook.tmpl for more information.
2422
2423
2424INTERRUPTS
2425----------
2426
2427A driver may be interrupted by its own interrupt service routine, and thus the
2428two parts of the driver may interfere with each other's attempts to control or
2429access the device.
2430
2431This may be alleviated - at least in part - by disabling local interrupts (a
2432form of locking), such that the critical operations are all contained within
2433the interrupt-disabled section in the driver.  Whilst the driver's interrupt
2434routine is executing, the driver's core may not run on the same CPU, and its
2435interrupt is not permitted to happen again until the current interrupt has been
2436handled, thus the interrupt handler does not need to lock against that.
2437
2438However, consider a driver that was talking to an ethernet card that sports an
2439address register and a data register.  If that driver's core talks to the card
2440under interrupt-disablement and then the driver's interrupt handler is invoked:
2441
2442	LOCAL IRQ DISABLE
2443	writew(ADDR, 3);
2444	writew(DATA, y);
2445	LOCAL IRQ ENABLE
2446	<interrupt>
2447	writew(ADDR, 4);
2448	q = readw(DATA);
2449	</interrupt>
2450
2451The store to the data register might happen after the second store to the
2452address register if ordering rules are sufficiently relaxed:
2453
2454	STORE *ADDR = 3, STORE *ADDR = 4, STORE *DATA = y, q = LOAD *DATA
2455
2456
2457If ordering rules are relaxed, it must be assumed that accesses done inside an
2458interrupt disabled section may leak outside of it and may interleave with
2459accesses performed in an interrupt - and vice versa - unless implicit or
2460explicit barriers are used.
2461
2462Normally this won't be a problem because the I/O accesses done inside such
2463sections will include synchronous load operations on strictly ordered I/O
2464registers that form implicit I/O barriers. If this isn't sufficient then an
2465mmiowb() may need to be used explicitly.
2466
2467
2468A similar situation may occur between an interrupt routine and two routines
2469running on separate CPUs that communicate with each other. If such a case is
2470likely, then interrupt-disabling locks should be used to guarantee ordering.
2471
2472
2473==========================
2474KERNEL I/O BARRIER EFFECTS
2475==========================
2476
2477When accessing I/O memory, drivers should use the appropriate accessor
2478functions:
2479
2480 (*) inX(), outX():
2481
2482     These are intended to talk to I/O space rather than memory space, but
2483     that's primarily a CPU-specific concept. The i386 and x86_64 processors do
2484     indeed have special I/O space access cycles and instructions, but many
2485     CPUs don't have such a concept.
2486
2487     The PCI bus, amongst others, defines an I/O space concept which - on such
2488     CPUs as i386 and x86_64 - readily maps to the CPU's concept of I/O
2489     space.  However, it may also be mapped as a virtual I/O space in the CPU's
2490     memory map, particularly on those CPUs that don't support alternate I/O
2491     spaces.
2492
2493     Accesses to this space may be fully synchronous (as on i386), but
2494     intermediary bridges (such as the PCI host bridge) may not fully honour
2495     that.
2496
2497     They are guaranteed to be fully ordered with respect to each other.
2498
2499     They are not guaranteed to be fully ordered with respect to other types of
2500     memory and I/O operation.
2501
2502 (*) readX(), writeX():
2503
2504     Whether these are guaranteed to be fully ordered and uncombined with
2505     respect to each other on the issuing CPU depends on the characteristics
2506     defined for the memory window through which they're accessing. On later
2507     i386 architecture machines, for example, this is controlled by way of the
2508     MTRR registers.
2509
2510     Ordinarily, these will be guaranteed to be fully ordered and uncombined,
2511     provided they're not accessing a prefetchable device.
2512
2513     However, intermediary hardware (such as a PCI bridge) may indulge in
2514     deferral if it so wishes; to flush a store, a load from the same location
2515     is preferred[*], but a load from the same device or from configuration
2516     space should suffice for PCI.
2517
2518     [*] NOTE! attempting to load from the same location as was written to may
2519	 cause a malfunction - consider the 16550 Rx/Tx serial registers for
2520	 example.
2521
2522     Used with prefetchable I/O memory, an mmiowb() barrier may be required to
2523     force stores to be ordered.
2524
2525     Please refer to the PCI specification for more information on interactions
2526     between PCI transactions.
2527
2528 (*) readX_relaxed(), writeX_relaxed()
2529
2530     These are similar to readX() and writeX(), but provide weaker memory
2531     ordering guarantees. Specifically, they do not guarantee ordering with
2532     respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
2533     ordering with respect to LOCK or UNLOCK operations. If the latter is
2534     required, an mmiowb() barrier can be used. Note that relaxed accesses to
2535     the same peripheral are guaranteed to be ordered with respect to each
2536     other.
2537
2538 (*) ioreadX(), iowriteX()
2539
2540     These will perform appropriately for the type of access they're actually
2541     doing, be it inX()/outX() or readX()/writeX().
2542
2543
2544========================================
2545ASSUMED MINIMUM EXECUTION ORDERING MODEL
2546========================================
2547
2548It has to be assumed that the conceptual CPU is weakly-ordered but that it will
2549maintain the appearance of program causality with respect to itself.  Some CPUs
2550(such as i386 or x86_64) are more constrained than others (such as powerpc or
2551frv), and so the most relaxed case (namely DEC Alpha) must be assumed outside
2552of arch-specific code.
2553
2554This means that it must be considered that the CPU will execute its instruction
2555stream in any order it feels like - or even in parallel - provided that if an
2556instruction in the stream depends on an earlier instruction, then that
2557earlier instruction must be sufficiently complete[*] before the later
2558instruction may proceed; in other words: provided that the appearance of
2559causality is maintained.
2560
2561 [*] Some instructions have more than one effect - such as changing the
2562     condition codes, changing registers or changing memory - and different
2563     instructions may depend on different effects.
2564
2565A CPU may also discard any instruction sequence that winds up having no
2566ultimate effect.  For example, if two adjacent instructions both load an
2567immediate value into the same register, the first may be discarded.
2568
2569
2570Similarly, it has to be assumed that compiler might reorder the instruction
2571stream in any way it sees fit, again provided the appearance of causality is
2572maintained.
2573
2574
2575============================
2576THE EFFECTS OF THE CPU CACHE
2577============================
2578
2579The way cached memory operations are perceived across the system is affected to
2580a certain extent by the caches that lie between CPUs and memory, and by the
2581memory coherence system that maintains the consistency of state in the system.
2582
2583As far as the way a CPU interacts with another part of the system through the
2584caches goes, the memory system has to include the CPU's caches, and memory
2585barriers for the most part act at the interface between the CPU and its cache
2586(memory barriers logically act on the dotted line in the following diagram):
2587
2588	    <--- CPU --->         :       <----------- Memory ----------->
2589	                          :
2590	+--------+    +--------+  :   +--------+    +-----------+
2591	|        |    |        |  :   |        |    |           |    +--------+
2592	|  CPU   |    | Memory |  :   | CPU    |    |           |    |        |
2593	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2594	|        |    | Queue  |  :   |        |    |           |--->| Memory |
2595	|        |    |        |  :   |        |    |           |    |        |
2596	+--------+    +--------+  :   +--------+    |           |    |        |
2597	                          :                 | Cache     |    +--------+
2598	                          :                 | Coherency |
2599	                          :                 | Mechanism |    +--------+
2600	+--------+    +--------+  :   +--------+    |           |    |	      |
2601	|        |    |        |  :   |        |    |           |    |        |
2602	|  CPU   |    | Memory |  :   | CPU    |    |           |--->| Device |
2603	|  Core  |--->| Access |----->| Cache  |<-->|           |    |        |
2604	|        |    | Queue  |  :   |        |    |           |    |        |
2605	|        |    |        |  :   |        |    |           |    +--------+
2606	+--------+    +--------+  :   +--------+    +-----------+
2607	                          :
2608	                          :
2609
2610Although any particular load or store may not actually appear outside of the
2611CPU that issued it since it may have been satisfied within the CPU's own cache,
2612it will still appear as if the full memory access had taken place as far as the
2613other CPUs are concerned since the cache coherency mechanisms will migrate the
2614cacheline over to the accessing CPU and propagate the effects upon conflict.
2615
2616The CPU core may execute instructions in any order it deems fit, provided the
2617expected program causality appears to be maintained.  Some of the instructions
2618generate load and store operations which then go into the queue of memory
2619accesses to be performed.  The core may place these in the queue in any order
2620it wishes, and continue execution until it is forced to wait for an instruction
2621to complete.
2622
2623What memory barriers are concerned with is controlling the order in which
2624accesses cross from the CPU side of things to the memory side of things, and
2625the order in which the effects are perceived to happen by the other observers
2626in the system.
2627
2628[!] Memory barriers are _not_ needed within a given CPU, as CPUs always see
2629their own loads and stores as if they had happened in program order.
2630
2631[!] MMIO or other device accesses may bypass the cache system.  This depends on
2632the properties of the memory window through which devices are accessed and/or
2633the use of any special device communication instructions the CPU may have.
2634
2635
2636CACHE COHERENCY
2637---------------
2638
2639Life isn't quite as simple as it may appear above, however: for while the
2640caches are expected to be coherent, there's no guarantee that that coherency
2641will be ordered.  This means that whilst changes made on one CPU will
2642eventually become visible on all CPUs, there's no guarantee that they will
2643become apparent in the same order on those other CPUs.
2644
2645
2646Consider dealing with a system that has a pair of CPUs (1 & 2), each of which
2647has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D):
2648
2649	            :
2650	            :                          +--------+
2651	            :      +---------+         |        |
2652	+--------+  : +--->| Cache A |<------->|        |
2653	|        |  : |    +---------+         |        |
2654	|  CPU 1 |<---+                        |        |
2655	|        |  : |    +---------+         |        |
2656	+--------+  : +--->| Cache B |<------->|        |
2657	            :      +---------+         |        |
2658	            :                          | Memory |
2659	            :      +---------+         | System |
2660	+--------+  : +--->| Cache C |<------->|        |
2661	|        |  : |    +---------+         |        |
2662	|  CPU 2 |<---+                        |        |
2663	|        |  : |    +---------+         |        |
2664	+--------+  : +--->| Cache D |<------->|        |
2665	            :      +---------+         |        |
2666	            :                          +--------+
2667	            :
2668
2669Imagine the system has the following properties:
2670
2671 (*) an odd-numbered cache line may be in cache A, cache C or it may still be
2672     resident in memory;
2673
2674 (*) an even-numbered cache line may be in cache B, cache D or it may still be
2675     resident in memory;
2676
2677 (*) whilst the CPU core is interrogating one cache, the other cache may be
2678     making use of the bus to access the rest of the system - perhaps to
2679     displace a dirty cacheline or to do a speculative load;
2680
2681 (*) each cache has a queue of operations that need to be applied to that cache
2682     to maintain coherency with the rest of the system;
2683
2684 (*) the coherency queue is not flushed by normal loads to lines already
2685     present in the cache, even though the contents of the queue may
2686     potentially affect those loads.
2687
2688Imagine, then, that two writes are made on the first CPU, with a write barrier
2689between them to guarantee that they will appear to reach that CPU's caches in
2690the requisite order:
2691
2692	CPU 1		CPU 2		COMMENT
2693	===============	===============	=======================================
2694					u == 0, v == 1 and p == &u, q == &u
2695	v = 2;
2696	smp_wmb();			Make sure change to v is visible before
2697					 change to p
2698	<A:modify v=2>			v is now in cache A exclusively
2699	p = &v;
2700	<B:modify p=&v>			p is now in cache B exclusively
2701
2702The write memory barrier forces the other CPUs in the system to perceive that
2703the local CPU's caches have apparently been updated in the correct order.  But
2704now imagine that the second CPU wants to read those values:
2705
2706	CPU 1		CPU 2		COMMENT
2707	===============	===============	=======================================
2708	...
2709			q = p;
2710			x = *q;
2711
2712The above pair of reads may then fail to happen in the expected order, as the
2713cacheline holding p may get updated in one of the second CPU's caches whilst
2714the update to the cacheline holding v is delayed in the other of the second
2715CPU's caches by some other cache event:
2716
2717	CPU 1		CPU 2		COMMENT
2718	===============	===============	=======================================
2719					u == 0, v == 1 and p == &u, q == &u
2720	v = 2;
2721	smp_wmb();
2722	<A:modify v=2>	<C:busy>
2723			<C:queue v=2>
2724	p = &v;		q = p;
2725			<D:request p>
2726	<B:modify p=&v>	<D:commit p=&v>
2727			<D:read p>
2728			x = *q;
2729			<C:read *q>	Reads from v before v updated in cache
2730			<C:unbusy>
2731			<C:commit v=2>
2732
2733Basically, whilst both cachelines will be updated on CPU 2 eventually, there's
2734no guarantee that, without intervention, the order of update will be the same
2735as that committed on CPU 1.
2736
2737
2738To intervene, we need to interpolate a data dependency barrier or a read
2739barrier between the loads.  This will force the cache to commit its coherency
2740queue before processing any further requests:
2741
2742	CPU 1		CPU 2		COMMENT
2743	===============	===============	=======================================
2744					u == 0, v == 1 and p == &u, q == &u
2745	v = 2;
2746	smp_wmb();
2747	<A:modify v=2>	<C:busy>
2748			<C:queue v=2>
2749	p = &v;		q = p;
2750			<D:request p>
2751	<B:modify p=&v>	<D:commit p=&v>
2752			<D:read p>
2753			smp_read_barrier_depends()
2754			<C:unbusy>
2755			<C:commit v=2>
2756			x = *q;
2757			<C:read *q>	Reads from v after v updated in cache
2758
2759
2760This sort of problem can be encountered on DEC Alpha processors as they have a
2761split cache that improves performance by making better use of the data bus.
2762Whilst most CPUs do imply a data dependency barrier on the read when a memory
2763access depends on a read, not all do, so it may not be relied on.
2764
2765Other CPUs may also have split caches, but must coordinate between the various
2766cachelets for normal memory accesses.  The semantics of the Alpha removes the
2767need for coordination in the absence of memory barriers.
2768
2769
2770CACHE COHERENCY VS DMA
2771----------------------
2772
2773Not all systems maintain cache coherency with respect to devices doing DMA.  In
2774such cases, a device attempting DMA may obtain stale data from RAM because
2775dirty cache lines may be resident in the caches of various CPUs, and may not
2776have been written back to RAM yet.  To deal with this, the appropriate part of
2777the kernel must flush the overlapping bits of cache on each CPU (and maybe
2778invalidate them as well).
2779
2780In addition, the data DMA'd to RAM by a device may be overwritten by dirty
2781cache lines being written back to RAM from a CPU's cache after the device has
2782installed its own data, or cache lines present in the CPU's cache may simply
2783obscure the fact that RAM has been updated, until at such time as the cacheline
2784is discarded from the CPU's cache and reloaded.  To deal with this, the
2785appropriate part of the kernel must invalidate the overlapping bits of the
2786cache on each CPU.
2787
2788See Documentation/cachetlb.txt for more information on cache management.
2789
2790
2791CACHE COHERENCY VS MMIO
2792-----------------------
2793
2794Memory mapped I/O usually takes place through memory locations that are part of
2795a window in the CPU's memory space that has different properties assigned than
2796the usual RAM directed window.
2797
2798Amongst these properties is usually the fact that such accesses bypass the
2799caching entirely and go directly to the device buses.  This means MMIO accesses
2800may, in effect, overtake accesses to cached memory that were emitted earlier.
2801A memory barrier isn't sufficient in such a case, but rather the cache must be
2802flushed between the cached memory write and the MMIO access if the two are in
2803any way dependent.
2804
2805
2806=========================
2807THE THINGS CPUS GET UP TO
2808=========================
2809
2810A programmer might take it for granted that the CPU will perform memory
2811operations in exactly the order specified, so that if the CPU is, for example,
2812given the following piece of code to execute:
2813
2814	a = ACCESS_ONCE(*A);
2815	ACCESS_ONCE(*B) = b;
2816	c = ACCESS_ONCE(*C);
2817	d = ACCESS_ONCE(*D);
2818	ACCESS_ONCE(*E) = e;
2819
2820they would then expect that the CPU will complete the memory operation for each
2821instruction before moving on to the next one, leading to a definite sequence of
2822operations as seen by external observers in the system:
2823
2824	LOAD *A, STORE *B, LOAD *C, LOAD *D, STORE *E.
2825
2826
2827Reality is, of course, much messier.  With many CPUs and compilers, the above
2828assumption doesn't hold because:
2829
2830 (*) loads are more likely to need to be completed immediately to permit
2831     execution progress, whereas stores can often be deferred without a
2832     problem;
2833
2834 (*) loads may be done speculatively, and the result discarded should it prove
2835     to have been unnecessary;
2836
2837 (*) loads may be done speculatively, leading to the result having been fetched
2838     at the wrong time in the expected sequence of events;
2839
2840 (*) the order of the memory accesses may be rearranged to promote better use
2841     of the CPU buses and caches;
2842
2843 (*) loads and stores may be combined to improve performance when talking to
2844     memory or I/O hardware that can do batched accesses of adjacent locations,
2845     thus cutting down on transaction setup costs (memory and PCI devices may
2846     both be able to do this); and
2847
2848 (*) the CPU's data cache may affect the ordering, and whilst cache-coherency
2849     mechanisms may alleviate this - once the store has actually hit the cache
2850     - there's no guarantee that the coherency management will be propagated in
2851     order to other CPUs.
2852
2853So what another CPU, say, might actually observe from the above piece of code
2854is:
2855
2856	LOAD *A, ..., LOAD {*C,*D}, STORE *E, STORE *B
2857
2858	(Where "LOAD {*C,*D}" is a combined load)
2859
2860
2861However, it is guaranteed that a CPU will be self-consistent: it will see its
2862_own_ accesses appear to be correctly ordered, without the need for a memory
2863barrier.  For instance with the following code:
2864
2865	U = ACCESS_ONCE(*A);
2866	ACCESS_ONCE(*A) = V;
2867	ACCESS_ONCE(*A) = W;
2868	X = ACCESS_ONCE(*A);
2869	ACCESS_ONCE(*A) = Y;
2870	Z = ACCESS_ONCE(*A);
2871
2872and assuming no intervention by an external influence, it can be assumed that
2873the final result will appear to be:
2874
2875	U == the original value of *A
2876	X == W
2877	Z == Y
2878	*A == Y
2879
2880The code above may cause the CPU to generate the full sequence of memory
2881accesses:
2882
2883	U=LOAD *A, STORE *A=V, STORE *A=W, X=LOAD *A, STORE *A=Y, Z=LOAD *A
2884
2885in that order, but, without intervention, the sequence may have almost any
2886combination of elements combined or discarded, provided the program's view of
2887the world remains consistent.  Note that ACCESS_ONCE() is -not- optional
2888in the above example, as there are architectures where a given CPU might
2889reorder successive loads to the same location.  On such architectures,
2890ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
2891Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
2892special ld.acq and st.rel instructions that prevent such reordering.
2893
2894The compiler may also combine, discard or defer elements of the sequence before
2895the CPU even sees them.
2896
2897For instance:
2898
2899	*A = V;
2900	*A = W;
2901
2902may be reduced to:
2903
2904	*A = W;
2905
2906since, without either a write barrier or an ACCESS_ONCE(), it can be
2907assumed that the effect of the storage of V to *A is lost.  Similarly:
2908
2909	*A = Y;
2910	Z = *A;
2911
2912may, without a memory barrier or an ACCESS_ONCE(), be reduced to:
2913
2914	*A = Y;
2915	Z = Y;
2916
2917and the LOAD operation never appear outside of the CPU.
2918
2919
2920AND THEN THERE'S THE ALPHA
2921--------------------------
2922
2923The DEC Alpha CPU is one of the most relaxed CPUs there is.  Not only that,
2924some versions of the Alpha CPU have a split data cache, permitting them to have
2925two semantically-related cache lines updated at separate times.  This is where
2926the data dependency barrier really becomes necessary as this synchronises both
2927caches with the memory coherence system, thus making it seem like pointer
2928changes vs new data occur in the right order.
2929
2930The Alpha defines the Linux kernel's memory barrier model.
2931
2932See the subsection on "Cache Coherency" above.
2933
2934
2935============
2936EXAMPLE USES
2937============
2938
2939CIRCULAR BUFFERS
2940----------------
2941
2942Memory barriers can be used to implement circular buffering without the need
2943of a lock to serialise the producer with the consumer.  See:
2944
2945	Documentation/circular-buffers.txt
2946
2947for details.
2948
2949
2950==========
2951REFERENCES
2952==========
2953
2954Alpha AXP Architecture Reference Manual, Second Edition (Sites & Witek,
2955Digital Press)
2956	Chapter 5.2: Physical Address Space Characteristics
2957	Chapter 5.4: Caches and Write Buffers
2958	Chapter 5.5: Data Sharing
2959	Chapter 5.6: Read/Write Ordering
2960
2961AMD64 Architecture Programmer's Manual Volume 2: System Programming
2962	Chapter 7.1: Memory-Access Ordering
2963	Chapter 7.4: Buffering and Combining Memory Writes
2964
2965IA-32 Intel Architecture Software Developer's Manual, Volume 3:
2966System Programming Guide
2967	Chapter 7.1: Locked Atomic Operations
2968	Chapter 7.2: Memory Ordering
2969	Chapter 7.4: Serializing Instructions
2970
2971The SPARC Architecture Manual, Version 9
2972	Chapter 8: Memory Models
2973	Appendix D: Formal Specification of the Memory Models
2974	Appendix J: Programming with the Memory Models
2975
2976UltraSPARC Programmer Reference Manual
2977	Chapter 5: Memory Accesses and Cacheability
2978	Chapter 15: Sparc-V9 Memory Models
2979
2980UltraSPARC III Cu User's Manual
2981	Chapter 9: Memory Models
2982
2983UltraSPARC IIIi Processor User's Manual
2984	Chapter 8: Memory Models
2985
2986UltraSPARC Architecture 2005
2987	Chapter 9: Memory
2988	Appendix D: Formal Specifications of the Memory Models
2989
2990UltraSPARC T1 Supplement to the UltraSPARC Architecture 2005
2991	Chapter 8: Memory Models
2992	Appendix F: Caches and Cache Coherency
2993
2994Solaris Internals, Core Kernel Architecture, p63-68:
2995	Chapter 3.3: Hardware Considerations for Locks and
2996			Synchronization
2997
2998Unix Systems for Modern Architectures, Symmetric Multiprocessing and Caching
2999for Kernel Programmers:
3000	Chapter 13: Other Memory Models
3001
3002Intel Itanium Architecture Software Developer's Manual: Volume 1:
3003	Section 2.6: Speculation
3004	Section 4.4: Memory Access
3005