.. SPDX-License-Identifier: GPL-2.0

=================================================
Using RCU hlist_nulls to protect list and objects
=================================================

This section describes how to use hlist_nulls to
protect read-mostly linked lists and
objects using SLAB_TYPESAFE_BY_RCU allocations.

Please read the basics in Documentation/RCU/listRCU.rst.
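
For concreteness, the examples below assume an object and a cache along
the following lines. This is a minimal sketch only: the struct layout,
the field names (obj_node, refcnt, key) and the cache name are
illustrative, not mandated by any kernel API::

  struct object {
    struct hlist_node obj_node;  /* chain linkage */
    atomic_t          refcnt;    /* zero when the object is free */
    unsigned int      key;       /* lookup key */
  };

  static struct kmem_cache *cachep;

  static int __init object_cache_init(void)
  {
    /*
     * SLAB_TYPESAFE_BY_RCU: a freed object may be reused (as an
     * object of the same type) before an RCU grace period elapses,
     * but its memory is not returned to the page allocator while
     * readers might still be traversing it.
     */
    cachep = kmem_cache_create("my_objects", sizeof(struct object),
                               0, SLAB_TYPESAFE_BY_RCU, NULL);
    return cachep ? 0 : -ENOMEM;
  }
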

Using 'nulls'
=============

Using special markers (called 'nulls') is a convenient way
to solve the following problem.

A typical RCU linked list managing objects which are
allocated with SLAB_TYPESAFE_BY_RCU kmem_cache can
use the following algorithms:

1) Lookup algorithm
-------------------

::

  rcu_read_lock();
  begin:
  obj = lockless_lookup(key);
  if (obj) {
    if (!try_get_ref(obj)) // might fail for free objects
      goto begin;
    /*
     * Because a writer could delete the object, and another writer
     * could then reuse it before the RCU grace period expires, we
     * must recheck the key after getting the reference on the object.
     */
    if (obj->key != key) { // not the object we expected
      put_ref(obj);
      goto begin;
    }
  }
  rcu_read_unlock();

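
The try_get_ref()/put_ref() helpers above are not part of any kernel
API. A minimal sketch, assuming obj->refcnt drops to zero before an
object is freed back to the cache (atomic_inc_not_zero() is the
standard primitive for taking a reference that may already be gone)::

  static inline bool try_get_ref(struct object *obj)
  {
    /* fails if refcnt is zero: the object is free or being reused */
    return atomic_inc_not_zero(&obj->refcnt);
  }

  static inline void put_ref(struct object *obj)
  {
    if (atomic_dec_and_test(&obj->refcnt))
      release_object(obj); /* hypothetical, see the remove algorithm */
  }
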

Beware that lockless_lookup(key) cannot use the traditional
hlist_for_each_entry_rcu(), but needs a version with an additional
memory barrier (smp_rmb())::

  lockless_lookup(key)
  {
    struct hlist_node *pos, *next;
    struct object *obj;

    for (pos = rcu_dereference((head)->first);
         pos && ({ next = pos->next; smp_rmb(); prefetch(next); 1; }) &&
         ({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
         pos = rcu_dereference(next))
      if (obj->key == key)
        return obj;
    return NULL;
  }


And note the traditional hlist_for_each_entry_rcu() misses this smp_rmb()::

  struct hlist_node *pos;

  for (pos = rcu_dereference((head)->first);
       pos && ({ prefetch(pos->next); 1; }) &&
       ({ obj = hlist_entry(pos, typeof(*obj), obj_node); 1; });
       pos = rcu_dereference(pos->next))
    if (obj->key == key)
      return obj;
  return NULL;

Quoting Corey Minyard::

  "If the object is moved from one list to another list in-between the
  time the hash is calculated and the next field is accessed, and the
  object has moved to the end of a new list, the traversal will not
  complete properly on the list it should have, since the object will
  be on the end of the new list and there's not a way to tell it's on a
  new list and restart the list traversal. I think that this can be
  solved by pre-fetching the "next" field (with proper barriers) before
  checking the key."

2) Insert algorithm
-------------------

We need to make sure a reader cannot read the new 'obj->obj_node.next'
value together with the previous value of 'obj->key'. Otherwise, an
item could be deleted from a chain and inserted into another chain.
If the new chain was empty before the move, the 'next' pointer is NULL,
and the lockless reader cannot detect that it missed the following
items of the original chain.

::

  /*
   * Please note that new inserts are done at the head of the list,
   * not in the middle or at the end.
   */
  obj = kmem_cache_alloc(...);
  lock_chain(); // typically a spin_lock()
  obj->key = key;
  /*
   * We need to make sure obj->key is updated before obj->obj_node.next
   * or obj->refcnt.
   */
  smp_wmb();
  atomic_set(&obj->refcnt, 1);
  hlist_add_head_rcu(&obj->obj_node, list);
  unlock_chain(); // typically a spin_unlock()

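
On kernels providing atomic_set_release(), the smp_wmb() plus plain
atomic_set() pair above could equivalently be written as a single
release store; a sketch::

  obj->key = key;
  atomic_set_release(&obj->refcnt, 1); // orders the key store before refcnt
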

3) Remove algorithm
-------------------

Nothing special here, we can use a standard RCU hlist deletion.
But thanks to SLAB_TYPESAFE_BY_RCU, beware that a deleted object can be
reused very, very fast (before the end of the RCU grace period), which
is why the lookup algorithm rechecks the key after taking a reference::

  if (put_last_reference_on(obj)) {
    lock_chain(); // typically a spin_lock()
    hlist_del_init_rcu(&obj->obj_node);
    unlock_chain(); // typically a spin_unlock()
    kmem_cache_free(cachep, obj);
  }

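
For contrast: if the cache were not SLAB_TYPESAFE_BY_RCU, the free would
have to be deferred past a grace period instead, typically with
call_rcu(). A sketch, assuming a 'struct rcu_head rcu' field in struct
object and a hypothetical free_object_rcu() callback::

  static void free_object_rcu(struct rcu_head *head)
  {
    kmem_cache_free(cachep, container_of(head, struct object, rcu));
  }

  ...
  if (put_last_reference_on(obj)) {
    lock_chain(); // typically a spin_lock()
    hlist_del_init_rcu(&obj->obj_node);
    unlock_chain(); // typically a spin_unlock()
    call_rcu(&obj->rcu, free_object_rcu); // free after a grace period
  }
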

Avoiding extra smp_rmb()
========================

With hlist_nulls we can avoid the extra smp_rmb() in lockless_lookup().
(The insert function still needs its smp_wmb(): the store to 'obj->key'
must remain visible before the store to 'obj->refcnt'.)

For example, if we choose to store the slot number as the 'nulls'
end-of-list marker for each slot of the hash table, we can detect
a race (some writer did a delete and/or a move of an object
to another chain) by checking the final 'nulls' value if
the lookup met the end of chain. If the final 'nulls' value
is not the slot number, then we must restart the lookup at
the beginning. If the object was moved to the same chain,
then the reader doesn't care: it might occasionally
scan the list again without harm.

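
For this detection to work, each chain head must be initialized with its
slot number as the 'nulls' value, and the objects must embed a 'struct
hlist_nulls_node' instead of a plain 'struct hlist_node'. A minimal
sketch of the table setup (size and names are illustrative)::

  #include <linux/list_nulls.h>

  #define MY_HASH_SIZE 256

  static struct hlist_nulls_head table[MY_HASH_SIZE];

  static void table_init(void)
  {
    unsigned int slot;

    for (slot = 0; slot < MY_HASH_SIZE; slot++)
      /* the slot number becomes the end-of-list 'nulls' marker */
      INIT_HLIST_NULLS_HEAD(&table[slot], slot);
  }
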

1) Lookup algorithm
-------------------

::

  head = &table[slot];
  rcu_read_lock();
  begin:
  hlist_nulls_for_each_entry_rcu(obj, node, head, obj_node) {
    if (obj->key == key) {
      if (!try_get_ref(obj)) // might fail for free objects
        goto begin;
      if (obj->key != key) { // not the object we expected
        put_ref(obj);
        goto begin;
      }
      goto out;
    }
  }
  /*
   * if the nulls value we got at the end of this lookup is
   * not the expected one, we must restart the lookup.
   * We probably met an item that was moved to another chain.
   */
  if (get_nulls_value(node) != slot)
    goto begin;
  obj = NULL;

  out:
  rcu_read_unlock();

2) Insert function
------------------

::

  /*
   * Please note that new inserts are done at the head of the list,
   * not in the middle or at the end.
   */
  obj = kmem_cache_alloc(cachep);
  lock_chain(); // typically a spin_lock()
  obj->key = key;
  /*
   * Changes to obj->key must be visible before the refcnt store.
   */
  smp_wmb();
  atomic_set(&obj->refcnt, 1);
  /*
   * Insert obj in RCU way (readers might be traversing chain).
   */
  hlist_nulls_add_head_rcu(&obj->obj_node, list);
  unlock_chain(); // typically a spin_unlock()
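
The remove side is symmetric to the remove algorithm of the first part;
a sketch, assuming the same hypothetical put_last_reference_on() helper
and using the nulls flavor of RCU deletion::

  if (put_last_reference_on(obj)) {
    lock_chain(); // typically a spin_lock()
    hlist_nulls_del_init_rcu(&obj->obj_node);
    unlock_chain(); // typically a spin_unlock()
    kmem_cache_free(cachep, obj);
  }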