
On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()


RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()


Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()


Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()


Reference count (but please see refcount_t):

  atomic_add_unless(), atomic_inc_not_zero()
  atomic_sub_and_test(), atomic_dec_and_test()


Misc:

  atomic_inc_and_test(), atomic_add_negative()
  atomic_dec_unless_positive(), atomic_inc_unless_negative()


Barriers:

  smp_mb__{before,after}_atomic()



SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively.

The one detail to this is that atomic_set{}() should be observable to the RMW
ops. That is:

  C atomic-set

  {
    atomic_set(v, 1);
  }

  P1(atomic_t *v)
  {
    atomic_add_unless(v, 1, 0);
  }

  P2(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from P2 to either happen
before the atomic_add_unless(), in which case the latter would no-op, or
_after_, in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate an LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0                                          CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter);      // == 1
                                                atomic_set(v, 0);
    if (ret != u)                               WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

The typical solution is to then implement atomic_set{}() with atomic_xchg().


RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops
   (see the sketch below) but are time critical and can, (typically) on
   LL/SC architectures, be more efficiently implemented.

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.
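
As a concrete illustration of that last bullet, such special purpose
operations generally reduce to a try_cmpxchg() loop when nothing better is
available. The sketch below only shows the general loop shape; the function
name is made up for illustration and this is not any particular
architecture's implementation:

  /*
   * Sketch only: add @a to @v unless @v == @u; returns the old value.
   *
   * atomic_try_cmpxchg() updates @c with the current value on failure,
   * so each retry re-tests against @u before attempting the store.
   */
  static inline int sketch_fetch_add_unless(atomic_t *v, int a, int u)
  {
	int c = atomic_read(v);

	do {
		if (c == u)
			break;
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return c;
  }

On LL/SC architectures the same operation can typically be folded into a
single LL/SC loop, which is why these helpers exist as first class operations
rather than leaving callers to open-code the cmpxchg loop.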


ORDERING (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when an operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set) is a RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.


The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW ops and can be used to augment/upgrade the ordering
inherent to the used atomic op. These barriers provide a full smp_mb().

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example, our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However, the atomic_fetch_add() might be implemented more efficiently.

Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  C strong-acquire

  {
  }

  P1(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P2(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (r0=1 /\ r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
since then:

  P1                            P2

                                t = LL.acq *y (0)
                                t++;
                                *x = 1;
  r0 = *x (1)
  RMB
  r1 = *y (0)
                                SC *y, t;

is allowed.
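
To tie the above together, here is a minimal usage sketch of the barrier
helpers in the 'typical' RELEASE pattern described earlier. The structure,
field and function names are made up for illustration and are not part of
any kernel API:

  struct foo {
	int		done;
	atomic_t	pending;
  };

  /*
   * Sketch only: publish the result, then decrement the pending count.
   *
   * atomic_dec() has no return value and is therefore unordered, so
   * without the barrier the store to ->done could be reordered past the
   * decrement. smp_mb__before_atomic() makes the decrement fully
   * ordered, which is strictly stronger than the RELEASE this pattern
   * minimally requires.
   */
  static void foo_complete(struct foo *f)
  {
	WRITE_ONCE(f->done, 1);
	smp_mb__before_atomic();
	atomic_dec(&f->pending);
  }

If only the RELEASE ordering is wanted, an explicitly ordered variant such as
(void)atomic_fetch_dec_release() expresses that more precisely, per the
ORDERING rules above.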