MARKING SHARED-MEMORY ACCESSES
==============================

This document provides guidelines for marking intentionally concurrent
normal accesses to shared memory, that is, "normal" in the sense that
such accesses do not use read-modify-write atomic operations.  It also
describes how to document these accesses, both with comments and with
special assertions processed by the Kernel Concurrency Sanitizer
(KCSAN).  This discussion builds on an earlier LWN article [1].


ACCESS-MARKING OPTIONS
======================

The Linux kernel provides the following access-marking options:

1.	Plain C-language accesses (unmarked), for example, "a = b;"

2.	Data-race marking, for example, "data_race(a = b);"

3.	READ_ONCE(), for example, "a = READ_ONCE(b);"
	The various forms of atomic_read() also fit in here.

4.	WRITE_ONCE(), for example, "WRITE_ONCE(a, b);"
	The various forms of atomic_set() also fit in here.


These may be used in combination, as shown in this admittedly improbable
example:

	WRITE_ONCE(a, b + data_race(c + d) + READ_ONCE(e));

Neither plain C-language accesses nor data_race() (#1 and #2 above) place
any sort of constraint on the compiler's choice of optimizations [2].
In contrast, READ_ONCE() and WRITE_ONCE() (#3 and #4 above) restrict the
compiler's use of code-motion and common-subexpression optimizations.
Therefore, if a given access is involved in an intentional data race,
using READ_ONCE() for loads and WRITE_ONCE() for stores is usually
preferable to data_race(), which in turn is usually preferable to plain
C-language accesses.
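
For example, consider this minimal sketch of a busy-wait loop, in which
the "need_to_stop" flag and do_something_quickly() are hypothetical
names used only for illustration:

	/*
	 * BUGGY: The compiler may fuse the plain load of need_to_stop
	 * into a single load hoisted out of the loop, so that a store
	 * by some other thread is never noticed and the loop spins
	 * forever.
	 */
	while (!need_to_stop)
		do_something_quickly();

	/*
	 * BETTER: READ_ONCE() forces a fresh load of need_to_stop on
	 * each pass through the loop.
	 */
	while (!READ_ONCE(need_to_stop))
		do_something_quickly();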

KCSAN will complain about many types of data races involving plain
C-language accesses, but marking all accesses involved in a given data
race with one of data_race(), READ_ONCE(), or WRITE_ONCE() will prevent
KCSAN from complaining.  Of course, lack of KCSAN complaints does not
imply correct code.  Therefore, please take a thoughtful approach
when responding to KCSAN complaints.  Churning the code base with
ill-considered additions of data_race(), READ_ONCE(), and WRITE_ONCE()
is unhelpful.

In fact, the following sections describe situations where use of
data_race() and even plain C-language accesses is preferable to
READ_ONCE() and WRITE_ONCE().


Use of the data_race() Macro
----------------------------

Here are some situations where data_race() should be used instead of
READ_ONCE() and WRITE_ONCE():

1.	Data-racy loads from shared variables whose values are used only
	for diagnostic purposes.

2.	Data-racy reads whose values are checked against a marked reload.

3.	Reads whose values feed into error-tolerant heuristics.

4.	Writes setting values that feed into error-tolerant heuristics.


Data-Racy Reads for Approximate Diagnostics

Approximate diagnostics include lockdep reports, monitoring/statistics
(including /proc and /sys output), WARN*()/BUG*() checks whose return
values are ignored, and other situations where reads from shared variables
are not an integral part of the core concurrency design.

In fact, use of data_race() instead of READ_ONCE() for these diagnostic
reads can enable better checking of the remaining accesses implementing
the core concurrency design.  For example, suppose that the core design
prevents any non-diagnostic reads from shared variable x from running
concurrently with updates to x.  Then using plain C-language writes
to x allows KCSAN to detect reads from x from within regions of code
that fail to exclude the updates.  In this case, it is important to use
data_race() for the diagnostic reads because otherwise KCSAN would give
false-positive warnings about these diagnostic reads.

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.
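
For example, a statistics read might look as follows.  This is a sketch
only:  "nfoo_events" and show_foo_stats() are hypothetical names, and
the counter is assumed to be updated under some lock elsewhere:

	unsigned long nfoo_events;	/* Updated while holding foo_lock. */

	void show_foo_stats(struct seq_file *m)
	{
		/*
		 * Diagnostic-only read:  An occasional stale or torn
		 * value merely makes the statistic approximate.
		 */
		seq_printf(m, "foo events: %lu\n", data_race(nfoo_events));
	}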


Data-Racy Reads That Are Checked Against a Marked Reload

The values from some reads are not implicitly trusted.  They are instead
fed into some operation that checks the full value against a later marked
load from memory, which means that the occasional arbitrarily bogus value
is not a problem.  For example, if a bogus value is fed into cmpxchg(),
all that happens is that this cmpxchg() fails, which normally results
in a retry.  Unless the race condition that resulted in the bogus value
recurs, this retry will with high probability succeed, so no harm done.

However, please keep in mind that a data_race() load feeding into
a cmpxchg_relaxed() might still be subject to load fusing on some
architectures.  Therefore, it is best to capture the return value from
the failing cmpxchg() for the next iteration of the loop, an approach
that provides the compiler much less scope for mischievous optimizations.
Capturing the return value from cmpxchg() also saves a memory reference
in many cases.
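
For example, a retry loop that captures the cmpxchg() return value
might look as follows.  This is a sketch only, with "foo_counter" and
foo_inc() being hypothetical names:

	int foo_counter;

	int foo_inc(void)
	{
		int new, old, newold;

		newold = data_race(foo_counter); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = old + 1;
			/*
			 * On failure, cmpxchg() returns the value actually
			 * found in foo_counter, seeding the next attempt.
			 */
			newold = cmpxchg(&foo_counter, old, new);
		} while (newold != old);
		return old;
	}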

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Reads Feeding Into Error-Tolerant Heuristics

Values from some reads feed into heuristics that can tolerate occasional
errors.  Such reads can use data_race(), thus allowing KCSAN to focus on
the other accesses to the relevant shared variables.  But please note
that data_race() loads are subject to load fusing, which can result in
consistent errors, which in turn are quite capable of breaking heuristics.
Therefore use of data_race() should be limited to cases where some other
code (such as a barrier() call) will force the occasional reload.
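
For example, such a heuristic loop might look as follows.  This is a
sketch only, with "estimated_len", "limit", and trim_one_element()
being hypothetical names:

	while (data_race(estimated_len) > limit) {
		trim_one_element();
		barrier();	/* Force a reload of estimated_len. */
	}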

Note that this use case requires that the heuristic be able to handle
any possible error.  In contrast, if the heuristic might be fatally
confused by one or more of the possible erroneous values, use READ_ONCE()
instead of data_race().

In theory, plain C-language loads can also be used for this use case.
However, in practice this will have the disadvantage of causing KCSAN
to generate false positives because KCSAN will have no way of knowing
that the resulting data race was intentional.


Writes Setting Values Feeding Into Error-Tolerant Heuristics

The values read into error-tolerant heuristics come from somewhere,
for example, from sysfs.  This means that some code in sysfs writes
to this same variable, and these writes can also use data_race().
After all, if the heuristic can tolerate the occasional bogus value
due to compiler-mangled reads, it can also tolerate the occasional
compiler-mangled write, at least assuming that the proper value is in
place once the write completes.
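
For example, a sysfs store function might look as follows.  This is a
sketch only, with "tunable_threshold" and threshold_store() being
hypothetical names:

	static int tunable_threshold;	/* Read by an error-tolerant heuristic. */

	static ssize_t threshold_store(struct kobject *kobj,
				       struct kobj_attribute *attr,
				       const char *buf, size_t count)
	{
		int val;

		if (kstrtoint(buf, 0, &val))
			return -EINVAL;
		data_race(tunable_threshold = val); /* Heuristic input. */
		return count;
	}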

Plain C-language stores can also be used for this use case.  However,
in kernels built with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, this
will have the disadvantage of causing KCSAN to generate false positives
because KCSAN will have no way of knowing that the resulting data race
was intentional.


Use of Plain C-Language Accesses
--------------------------------

Here are some example situations where plain C-language accesses should
be used instead of READ_ONCE(), WRITE_ONCE(), and data_race():

1.	Accesses protected by mutual exclusion, including strict locking
	and sequence locking.

2.	Initialization-time and cleanup-time accesses.  This covers a
	wide variety of situations, including the uniprocessor phase of
	system boot, variables to be used by not-yet-spawned kthreads,
	structures not yet published to reference-counted or RCU-protected
	data structures, and the cleanup side of any of these situations.

3.	Per-CPU variables that are not accessed from other CPUs.

4.	Private per-task variables, including on-stack variables, some
	fields in the task_struct structure, and task-private heap data.

5.	Any other loads for which there is not supposed to be a concurrent
	store to that same variable.

6.	Any other stores for which there should be neither concurrent
	loads nor concurrent stores to that same variable.

	But note that KCSAN makes three explicit exceptions to this rule
	by default, refraining from flagging plain C-language stores:

	a.	No matter what.  You can override this default by building
		with CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n.

	b.	When the store writes the value already contained in
		that variable.  You can override this default by building
		with CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.

	c.	When one of the stores is in an interrupt handler and
		the other in the interrupted code.  You can override this
		default by building with CONFIG_KCSAN_INTERRUPT_WATCHER=y.

Note that it is important to use plain C-language accesses in these cases,
because doing otherwise prevents KCSAN from detecting violations of your
code's synchronization rules.
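
As an illustration of case 3 above, a per-CPU counter that is never
accessed cross-CPU can use a plain increment.  This is a sketch only,
with "my_events" and count_my_event() being hypothetical names:

	DEFINE_PER_CPU(unsigned long, my_events);

	void count_my_event(void)
	{
		preempt_disable();	/* Stay on this CPU. */
		(*this_cpu_ptr(&my_events))++; /* Plain C-language access. */
		preempt_enable();
	}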


ACCESS-DOCUMENTATION OPTIONS
============================

It is important to comment marked accesses so that people reading your
code, yourself included, are reminded of the synchronization design.
However, it is even more important to comment plain C-language accesses
that are intentionally involved in data races.  Such comments are
needed to remind people reading your code, again, yourself included,
of how the compiler has been prevented from optimizing those accesses
into concurrency bugs.

It is also possible to tell KCSAN about your synchronization design.
For example, ASSERT_EXCLUSIVE_ACCESS(foo) tells KCSAN that any
concurrent access to variable foo by any other CPU is an error, even
if that concurrent access is marked with READ_ONCE().  In addition,
ASSERT_EXCLUSIVE_WRITER(foo) tells KCSAN that although it is OK for there
to be concurrent reads from foo from other CPUs, it is an error for some
other CPU to be concurrently writing to foo, even if that concurrent
write is marked with data_race() or WRITE_ONCE().
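
For example, a minimal sketch of ASSERT_EXCLUSIVE_ACCESS() usage might
look as follows, where compute_new_value() is a hypothetical placeholder
and the EXAMPLES section below gives fuller context:

	/*
	 * Here "foo" is momentarily private to this CPU, so even a
	 * marked concurrent access by another CPU would be a bug.
	 */
	ASSERT_EXCLUSIVE_ACCESS(foo);
	foo = compute_new_value();	/* Plain access is therefore OK. */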

Note that although KCSAN will call out data races involving either
ASSERT_EXCLUSIVE_ACCESS() or ASSERT_EXCLUSIVE_WRITER() on the one hand
and data_race() writes on the other, KCSAN will not report the location
of these data_race() writes.


EXAMPLES
========

As noted earlier, the goal is to prevent the compiler from destroying
your concurrent algorithm, to help the human reader, and to inform
KCSAN of aspects of your concurrency design.  This section looks at a
few examples showing how this can be done.


Lock Protection With Lockless Diagnostic Access
-----------------------------------------------

For example, suppose a shared variable "foo" is read only while a
reader-writer spinlock is read-held, written only while that same
spinlock is write-held, except that it is also read locklessly for
diagnostic purposes.  The code might look as follows:

	int foo;
	DEFINE_RWLOCK(foo_rwlock);

	void update_foo(int newval)
	{
		write_lock(&foo_rwlock);
		foo = newval;
		do_something(newval);
		write_unlock(&foo_rwlock);
	}

	int read_foo(void)
	{
		int ret;

		read_lock(&foo_rwlock);
		do_something_else();
		ret = foo;
		read_unlock(&foo_rwlock);
		return ret;
	}

	void read_foo_diagnostic(void)
	{
		pr_info("Current value of foo: %d\n", data_race(foo));
	}

The reader-writer lock prevents the compiler from introducing concurrency
bugs into any part of the main algorithm using foo, which means that
the accesses to foo within both update_foo() and read_foo() can (and
should) be plain C-language accesses.  One benefit of making them
plain C-language accesses is that KCSAN can detect any erroneous lockless
reads from or updates to foo.  The data_race() in read_foo_diagnostic()
tells KCSAN that data races are expected, and should be silently
ignored.  This data_race() also tells the human reading the code that
read_foo_diagnostic() might sometimes return a bogus value.

However, please note that your kernel must be built with
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n in order for KCSAN to
detect a buggy lockless write.  If you need KCSAN to detect such a
write even if that write did not change the value of foo, you also
need CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY=n.  If you need KCSAN to
detect such a write happening in an interrupt handler running on the
same CPU doing the legitimate lock-protected write, you also need
CONFIG_KCSAN_INTERRUPT_WATCHER=y.  With some or all of these Kconfig
options set properly, KCSAN can be quite helpful, although it is not
necessarily a full replacement for hardware watchpoints.  On the other
hand, neither are hardware watchpoints a full replacement for KCSAN
because it is not always easy to tell hardware watchpoints to
conditionally trap on accesses.
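
For reference, enabling all of the checking described in the preceding
paragraph corresponds to a .config fragment along these lines
(illustrative only; adjust to your own build):

	CONFIG_KCSAN=y
	# CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC is not set
	# CONFIG_KCSAN_REPORT_VALUE_CHANGE_ONLY is not set
	CONFIG_KCSAN_INTERRUPT_WATCHER=y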


Lock-Protected Writes With Lockless Reads
-----------------------------------------

For another example, suppose a shared variable "foo" is updated only
while holding a spinlock, but is read locklessly.  The code might look
as follows:

	int foo;
	DEFINE_SPINLOCK(foo_lock);

	void update_foo(int newval)
	{
		spin_lock(&foo_lock);
		WRITE_ONCE(foo, newval);
		ASSERT_EXCLUSIVE_WRITER(foo);
		do_something(newval);
		spin_unlock(&foo_lock);
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is read locklessly, all accesses are marked.  The purpose
of the ASSERT_EXCLUSIVE_WRITER() is to allow KCSAN to check for a buggy
concurrent lockless write.


Lock-Protected Writes With Heuristic Lockless Reads
---------------------------------------------------

For another example, suppose that the code can normally make use of
a per-data-structure lock, but there are times when a global lock
is required.  These times are indicated via a global flag.  The code
might look as follows, and is based loosely on nf_conntrack_lock(),
nf_conntrack_all_lock(), and nf_conntrack_all_unlock():

	bool global_flag;
	DEFINE_SPINLOCK(global_lock);
	struct foo {
		spinlock_t f_lock;
		int f_data;
	};

	/* All foo structures are in the following array. */
	int nfoo;
	struct foo *foo_array;

	void do_something_locked(struct foo *fp)
	{
		/* This works even if data_race() returns nonsense. */
		if (!data_race(global_flag)) {
			spin_lock(&fp->f_lock);
			if (!smp_load_acquire(&global_flag)) {
				do_something(fp);
				spin_unlock(&fp->f_lock);
				return;
			}
			spin_unlock(&fp->f_lock);
		}
		spin_lock(&global_lock);
		/* global_lock held, thus global_flag cannot be set. */
		spin_lock(&fp->f_lock);
		spin_unlock(&global_lock);
		/*
		 * global_flag might be set here, but begin_global()
		 * will wait for ->f_lock to be released.
		 */
		do_something(fp);
		spin_unlock(&fp->f_lock);
	}

	void begin_global(void)
	{
		int i;

		spin_lock(&global_lock);
		WRITE_ONCE(global_flag, true);
		for (i = 0; i < nfoo; i++) {
			/*
			 * Wait for pre-existing local locks.  One at
			 * a time to avoid lockdep limitations.
			 */
			spin_lock(&foo_array[i].f_lock);
			spin_unlock(&foo_array[i].f_lock);
		}
	}

	void end_global(void)
	{
		smp_store_release(&global_flag, false);
		spin_unlock(&global_lock);
	}

All code paths leading from the do_something_locked() function's first
read from global_flag acquire a lock, so endless load fusing cannot
happen.

If the value read from global_flag is false, then global_flag is
rechecked while holding ->f_lock, which, if global_flag is now false,
prevents begin_global() from completing.  It is therefore safe to invoke
do_something().

Otherwise, if either value read from global_flag is true, then after
global_lock is acquired global_flag must be false.  The acquisition of
->f_lock will prevent any call to begin_global() from returning, which
means that it is safe to release global_lock and invoke do_something().

For this to work, only those foo structures in foo_array[] may be passed
to do_something_locked().  The reason for this is that the synchronization
with begin_global() relies on momentarily holding the lock of each and
every foo structure.

The smp_load_acquire() and smp_store_release() are required because
changes to a foo structure between calls to begin_global() and
end_global() are carried out without holding that structure's ->f_lock.
The smp_load_acquire() and smp_store_release() ensure that the next
invocation of do_something() from do_something_locked() will see those
changes.


Lockless Reads and Writes
-------------------------

For another example, suppose a shared variable "foo" is both read and
updated locklessly.  The code might look as follows:

	int foo;

	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

Because foo is accessed locklessly, all accesses are marked.  It does
not make sense to use ASSERT_EXCLUSIVE_WRITER() in this case because
there really can be concurrent lockless writers.  KCSAN would
flag any concurrent plain C-language reads from foo, and given
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=n, also any concurrent plain
C-language writes to foo.


Lockless Reads and Writes, But With Single-Threaded Initialization
------------------------------------------------------------------

For yet another example, suppose that foo is initialized in a
single-threaded manner, but that a number of kthreads are then created
that locklessly and concurrently access foo.  Some snippets of this code
might look as follows:

	int foo;

	void initialize_foo(int initval, int nkthreads)
	{
		int i;

		foo = initval;
		ASSERT_EXCLUSIVE_ACCESS(foo);
		for (i = 0; i < nkthreads; i++)
			kthread_run(access_foo_concurrently, ...);
	}

	/* Called from access_foo_concurrently(). */
	int update_foo(int newval)
	{
		int ret;

		ret = xchg(&foo, newval);
		do_something(newval);
		return ret;
	}

	/* Also called from access_foo_concurrently(). */
	int read_foo(void)
	{
		do_something_else();
		return READ_ONCE(foo);
	}

The initialize_foo() uses a plain C-language write to foo because there
are not supposed to be concurrent accesses during initialization.  The
ASSERT_EXCLUSIVE_ACCESS() allows KCSAN to flag buggy concurrent unmarked
reads, and further allows KCSAN to flag buggy concurrent writes, even
if:  (1) Those writes are marked or (2) The kernel was built with
CONFIG_KCSAN_ASSUME_PLAIN_WRITES_ATOMIC=y.


Checking Stress-Test Race Coverage
----------------------------------

When designing stress tests it is important to ensure that race conditions
of interest really do occur.  For example, consider the following code
fragment:

	int foo;

	int update_foo(int newval)
	{
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		return READ_ONCE(foo);
	}

If it is possible for update_foo(), xor_shift_foo(), and read_foo() to be
invoked concurrently, the stress test should force this concurrency to
actually happen.  KCSAN can evaluate the stress test when the above code
is modified to read as follows:

	int foo;

	int update_foo(int newval)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return xchg(&foo, newval);
	}

	int xor_shift_foo(int shift, int mask)
	{
		int old, new, newold;

		newold = data_race(foo); /* Checked by cmpxchg(). */
		do {
			old = newold;
			new = (old << shift) ^ mask;
			ASSERT_EXCLUSIVE_ACCESS(foo);
			newold = cmpxchg(&foo, old, new);
		} while (newold != old);
		return old;
	}

	int read_foo(void)
	{
		ASSERT_EXCLUSIVE_ACCESS(foo);
		return READ_ONCE(foo);
	}

If a given stress-test run does not result in KCSAN complaints from
each possible pair of ASSERT_EXCLUSIVE_ACCESS() invocations, the
stress test needs improvement.  If the stress test were to be evaluated
on a regular basis, it would be wise to place the above instances of
ASSERT_EXCLUSIVE_ACCESS() under #ifdef so that they did not result in
false positives when not evaluating the stress test.
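
One way to do that is a test-only wrapper macro along these lines.
This is a sketch only, with "FOO_STRESS_TEST" and FOO_STRESS_ASSERT()
being hypothetical names:

	#ifdef FOO_STRESS_TEST
	#define FOO_STRESS_ASSERT(var) ASSERT_EXCLUSIVE_ACCESS(var)
	#else
	#define FOO_STRESS_ASSERT(var) do { } while (0)
	#endif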


REFERENCES
==========

[1] "Concurrency bugs should fear the big bad data-race detector (part 2)"
    https://lwn.net/Articles/816854/

[2] "Who's afraid of a big bad optimizing compiler?"
    https://lwn.net/Articles/793253/