===================
this_cpu operations
===================

:Author: Christoph Lameter, August 4th, 2014
:Author: Pranith Kumar, Aug 2nd, 2014

this_cpu operations are a way of optimizing access to per cpu
variables associated with the *currently* executing processor. This is
done through the use of segment registers (or a dedicated register where
the cpu permanently stores the beginning of the per cpu area for a
specific processor).

this_cpu operations add a per cpu variable offset to the processor
specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.

This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.

Read-modify-write operations are of particular interest. Frequently
processors have special lower latency instructions that can operate
without the typical synchronization overhead, but still provide some
sort of relaxed atomicity guarantees. The x86, for example, can execute
RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
lock prefix and the associated latency penalty.

Access to the variable without the lock prefix is not synchronized but
synchronization is not necessary since we are dealing with per cpu
data specific to the currently executing processor. Only the current
processor should be accessing that variable and therefore there are no
concurrency issues with other processors in the system.

Please note that accesses by remote processors to a per cpu area are
exceptional situations and may impact performance and/or correctness
(remote write operations) of local RMW operations via this_cpu_*.

The main use of the this_cpu operations has been to optimize counter
operations.

The following this_cpu() operations with implied preemption protection
are defined. These operations can be used without worrying about
preemption and interrupts; a short usage sketch follows the list::

	this_cpu_read(pcp)
	this_cpu_write(pcp, val)
	this_cpu_add(pcp, val)
	this_cpu_and(pcp, val)
	this_cpu_or(pcp, val)
	this_cpu_add_return(pcp, val)
	this_cpu_xchg(pcp, nval)
	this_cpu_cmpxchg(pcp, oval, nval)
	this_cpu_sub(pcp, val)
	this_cpu_inc(pcp)
	this_cpu_dec(pcp)
	this_cpu_sub_return(pcp, val)
	this_cpu_inc_return(pcp)
	this_cpu_dec_return(pcp)

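
As a usage sketch, a per cpu event counter could be incremented from
any context like this (the variable name nr_events is made up for
this illustration)::

	DEFINE_PER_CPU(unsigned long, nr_events);

	static void account_event(void)
	{
		/* Safe without disabling preemption or interrupts. */
		this_cpu_inc(nr_events);
	}
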

Inner working of this_cpu operations
------------------------------------

On x86 the fs: or the gs: segment registers contain the base of the
per cpu area. It is then possible to simply use the segment override
to relocate a per cpu relative address to the proper per cpu area for
the processor. So the relocation to the per cpu base is encoded in the
instruction via a segment register prefix.

For example::

	DEFINE_PER_CPU(int, x);
	int z;

	z = this_cpu_read(x);

results in a single instruction::

	mov ax, gs:[x]

instead of a sequence that calculates the address and then fetches
from that address, which is what the generic per cpu operations
require. Before this_cpu_ops such a sequence also required preempt
disable/enable to prevent the kernel from moving the thread to a
different processor while the calculation is performed.

Consider the following this_cpu operation::

	this_cpu_inc(x)

The above results in the following single instruction (no lock prefix!)::

	inc gs:[x]

instead of the following operations required if there is no segment
register::

	int *y;
	int cpu;

	cpu = get_cpu();
	y = per_cpu_ptr(&x, cpu);
	(*y)++;
	put_cpu();

Note that these operations can only be used on per cpu data that is
reserved for a specific processor. Without disabling preemption in the
surrounding code this_cpu_inc() will only guarantee that one of the
per cpu counters is correctly incremented. However, there is no
guarantee that the OS will not move the process directly before or
after the this_cpu instruction is executed. In general this means that
the values of the individual counters for each processor are
meaningless. The sum of all the per cpu counters is the only value
that is of interest.

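
Such a total can be gathered with per_cpu() on each possible
processor; a minimal sketch, reusing the per cpu variable x defined
in the example above::

	int total_x(void)
	{
		int sum = 0;
		int cpu;

		/* Only the total across all processors is meaningful. */
		for_each_possible_cpu(cpu)
			sum += per_cpu(x, cpu);
		return sum;
	}
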

Per cpu variables are used for performance reasons. Bouncing cache
lines can be avoided if multiple processors concurrently go through
the same code paths.  Since each processor has its own per cpu
variables no concurrent cache line updates take place. The price that
has to be paid for this optimization is the need to add up the per cpu
counters when the value of a counter is needed.


Special operations
------------------

::

	y = this_cpu_ptr(&x)

Takes the offset of a per cpu variable (&x !) and returns the address
of the per cpu variable that belongs to the currently executing
processor.  this_cpu_ptr avoids multiple steps that the common
get_cpu/put_cpu sequence requires. No processor number is
available. Instead, the offset of the local per cpu area is simply
added to the per cpu offset.

Note that this operation is usually used in a code segment when
preemption has been disabled. The pointer is then used to
access local per cpu data in a critical section. When preemption
is re-enabled this pointer is usually no longer useful since it may
no longer point to per cpu data of the current processor.

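
A minimal sketch of this pattern (struct worker_data and its count
field are hypothetical)::

	struct worker_data {
		int count;
	};

	DEFINE_PER_CPU(struct worker_data, worker_data);

	struct worker_data *wd;

	preempt_disable();
	wd = this_cpu_ptr(&worker_data);
	wd->count++;	/* critical section on this processor */
	preempt_enable();
	/* wd may now point to another processor's data; do not reuse it */
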

Per cpu variables and offsets
-----------------------------

Per cpu variables have *offsets* to the beginning of the per cpu
area. They do not have addresses although they look like that in the
code. Offsets cannot be directly dereferenced. The offset must be
added to a base pointer of a per cpu area of a processor in order to
form a valid address.

Therefore the use of x or &x outside of the context of per cpu
operations is invalid and will generally be treated like a NULL
pointer dereference.

::

	DEFINE_PER_CPU(int, x);

In the context of per cpu operations the above implies that x is a per
cpu variable. Most this_cpu operations take such a per cpu variable.

::

	int __percpu *p = &x;

&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
takes the offset of a per cpu variable which makes this look a bit
strange.


Operations on a field of a per cpu structure
--------------------------------------------

Let's say we have a percpu structure::

	struct s {
		int n, m;
	};

	DEFINE_PER_CPU(struct s, p);


Operations on these fields are straightforward::

	this_cpu_inc(p.m)

	z = this_cpu_cmpxchg(p.m, 0, 1);


If we have an offset to struct s::

	struct s __percpu *ps = &p;

	this_cpu_dec(ps->m);

	z = this_cpu_inc_return(ps->n);


The calculation of the pointer may require the use of this_cpu_ptr()
if we do not make use of this_cpu ops later to manipulate fields::

	struct s *pp;

	pp = this_cpu_ptr(&p);

	pp->m--;

	z = pp->n++;


Variants of this_cpu ops
------------------------

this_cpu ops are interrupt safe. Some architectures do not support
these per cpu local operations. In that case the operation must be
replaced by code that disables interrupts, then does the operations
that are guaranteed to be atomic and then re-enables interrupts. Doing
so is expensive. If there are other reasons why the scheduler cannot
change the processor we are executing on then there is no reason to
disable interrupts. For that purpose the following __this_cpu operations
are provided.

These operations have no guarantee against concurrent interrupts or
preemption. If a per cpu variable is not used in an interrupt context
and the scheduler cannot preempt, then they are safe. If any interrupts
still occur while an operation is in progress and if the interrupt too
modifies the variable, then RMW actions cannot be guaranteed to be
safe::

	__this_cpu_read(pcp)
	__this_cpu_write(pcp, val)
	__this_cpu_add(pcp, val)
	__this_cpu_and(pcp, val)
	__this_cpu_or(pcp, val)
	__this_cpu_add_return(pcp, val)
	__this_cpu_xchg(pcp, nval)
	__this_cpu_cmpxchg(pcp, oval, nval)
	__this_cpu_sub(pcp, val)
	__this_cpu_inc(pcp)
	__this_cpu_dec(pcp)
	__this_cpu_sub_return(pcp, val)
	__this_cpu_inc_return(pcp)
	__this_cpu_dec_return(pcp)


An operation like __this_cpu_inc(x) will increment x and will not
fall back to code that disables interrupts on platforms that cannot
accomplish atomicity through address relocation and a Read-Modify-Write
operation in the same instruction.

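
If preemption is already disabled for other reasons, the cheaper
variant can be used safely; a minimal sketch, reusing the per cpu
variable x from above and assuming that no interrupt handler modifies
x::

	preempt_disable();
	/* The processor cannot change and no interrupt touches x. */
	__this_cpu_inc(x);
	preempt_enable();
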

&this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
--------------------------------------------

The first operation takes the offset and forms an address and then
adds the offset of the n field. This may result in two add
instructions emitted by the compiler.

The second one first adds the two offsets and then does the
relocation.  IMHO the second form looks cleaner and has an easier time
with (). The second form also is consistent with the way
this_cpu_read() and friends are used.

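
In code, the two forms look as follows; a sketch reusing the struct s
pointer ps from the earlier section::

	int *q;

	/* form an address first, then add the offset of field n */
	q = &this_cpu_ptr(ps)->n;

	/* add the two offsets first, then relocate to this cpu's area */
	q = this_cpu_ptr(&ps->n);
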

Remote access to per cpu data
------------------------------

Per cpu data structures are designed to be used by one cpu exclusively.
If you use the variables as intended, this_cpu_ops() are guaranteed to
be "atomic" as no other CPU has access to these data structures.

There are special cases where you might need to access per cpu data
structures remotely. It is usually safe to do a remote read access
and that is frequently done to summarize counters. Remote write access
is something which could be problematic because this_cpu ops do not
have lock semantics. A remote write may interfere with a this_cpu
RMW operation.

Remote write accesses to percpu data structures are highly discouraged
unless absolutely necessary. Please consider using an IPI to wake up
the remote CPU and perform the update to its per cpu area.

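
A minimal sketch of such an IPI based update via
smp_call_function_single(); the function name do_update is made up
and x is the per cpu counter from the earlier examples::

	static void do_update(void *info)
	{
		/* Runs on the target cpu, so this_cpu ops are safe here. */
		this_cpu_inc(x);
	}

	smp_call_function_single(cpu, do_update, NULL, 1);
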

To access per-cpu data structures remotely, typically the per_cpu_ptr()
function is used::

	DEFINE_PER_CPU(struct data, datap);

	struct data *p = per_cpu_ptr(&datap, cpu);

This makes it explicit that we are getting ready to access a percpu
area remotely.

You can also do the following to convert the datap offset to an address::

	struct data *p = this_cpu_ptr(&datap);

but passing pointers calculated via this_cpu_ptr to other cpus is
unusual and should be avoided.

Remote accesses are typically only used for reading the status of
another cpu's per cpu data. Write accesses can cause unique problems
due to the relaxed synchronization requirements for this_cpu
operations.

One example that illustrates some concerns with write operations is
the following scenario that occurs because two per cpu variables
share a cache-line but the relaxed synchronization is applied to
only one of the updaters of the cache-line.

Consider the following example::

	struct test {
		atomic_t a;
		int b;
	};

	DEFINE_PER_CPU(struct test, onecacheline);

There is some concern about what would happen if the field 'a' is updated
remotely from one processor and the local processor would use this_cpu ops
to update field b. Care should be taken that such simultaneous accesses to
data within the same cache line are avoided; otherwise costly synchronization
may be necessary. IPIs are generally recommended in such scenarios instead
of a remote write to the per cpu area of another processor.

Even in cases where the remote writes are rare, please bear in
mind that a remote write will evict the cache line from the processor
that most likely will access it. If the processor wakes up and finds a
missing local cache line of a per cpu area, its performance and hence
its wakeup times will be affected.