Using RCU (Read-Copy-Update) for synchronization
================================================

Read-copy update (RCU) is a synchronization mechanism that is used to
protect read-mostly data structures.  RCU is very efficient and scalable
on the read side (it is wait-free), and thus can make the read paths
extremely fast.

RCU supports concurrency between a single writer and multiple readers,
and thus it is not used alone.  Typically, the write side will use a
lock to serialize multiple updates, but other approaches are possible
(e.g., restricting updates to a single task).  In QEMU, when a lock is
used, this will often be the "iothread mutex", also known as the "big
QEMU lock" (BQL).  In QEMU, restricting updates to a single task is
done using the "bottom half" API.

RCU is fundamentally a "wait-to-finish" mechanism.  The read side marks
sections of code with "critical sections", and the update side will wait
for the execution of all *currently running* critical sections before
proceeding, or before asynchronously executing a callback.

The key point here is that only the currently running critical sections
are waited for; critical sections that are started **after** the beginning
of the wait do not extend the wait, despite running concurrently with
the updater.  This is the reason why RCU is more scalable than,
for example, reader-writer locks.  It is so much more scalable that
the system will have a single instance of the RCU mechanism; a single
mechanism can be used for an arbitrary number of "things", without
having to worry about things such as contention or deadlocks.

How is this possible?  The basic idea is to split updates into two phases,
"removal" and "reclamation".  During removal, we ensure that subsequent
readers will not be able to get a reference to the old data.  After
removal has completed, a critical section will not be able to access
the old data.  Therefore, critical sections that begin after removal
do not matter; as soon as all previous critical sections have finished,
there cannot be any readers who still hold references to the old data,
and it can now be safely reclaimed (e.g., freed or unref'ed).

Here is a picture::

        thread 1                  thread 2                  thread 3
    -------------------    ------------------------    -------------------
    enter RCU crit.sec.
           |                finish removal phase
           |                begin wait
           |                      |                    enter RCU crit.sec.
    exit RCU crit.sec.            |                           |
                            complete wait                     |
                            begin reclamation phase           |
                                                       exit RCU crit.sec.


Note how thread 3 is still executing its critical section when thread 2
starts reclaiming data.  This is possible because the old version of the
data structure was not accessible at the time thread 3 began executing
that critical section.


RCU API
-------

The core RCU API is small:

``void rcu_read_lock(void);``
        Used by a reader to inform the reclaimer that the reader is
        entering an RCU read-side critical section.

``void rcu_read_unlock(void);``
        Used by a reader to inform the reclaimer that the reader is
        exiting an RCU read-side critical section.  Note that RCU
        read-side critical sections may be nested and/or overlapping.
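
        For example, a minimal reader might look like this (``foo`` and
        ``show_foo`` are illustrative names, not part of the API)::

            rcu_read_lock();
            struct foo *p = qatomic_rcu_read(&foo);
            if (p) {
                show_foo(p);  /* p stays valid until rcu_read_unlock() */
            }
            rcu_read_unlock();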

``void synchronize_rcu(void);``
        Blocks until all pre-existing RCU read-side critical sections
        on all threads have completed.  This marks the end of the removal
        phase and the beginning of the reclamation phase.

        Note that it would be valid for another update to come while
        ``synchronize_rcu`` is running.  Because of this, it is better that
        the updater releases any locks it may hold before calling
        ``synchronize_rcu``.  If this is not possible (for example, because
        the updater is protected by the BQL), you can use ``call_rcu``.
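
        As a sketch of the advice above (``foo`` and ``foo_mutex`` are
        illustrative), the updater publishes the new version, drops its
        lock, and only then blocks::

            qemu_mutex_lock(&foo_mutex);
            old = foo;
            qatomic_rcu_set(&foo, new);
            qemu_mutex_unlock(&foo_mutex);  /* release before blocking */
            synchronize_rcu();              /* end of removal phase */
            g_free(old);                    /* reclamation phase */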

``void call_rcu1(struct rcu_head *head, void (*func)(struct rcu_head *head));``
        This function invokes ``func(head)`` after all pre-existing RCU
        read-side critical sections on all threads have completed.  This
        marks the end of the removal phase, with ``func`` taking care of
        the reclamation phase asynchronously.

        The ``foo`` struct needs to have an ``rcu_head`` structure added,
        perhaps as follows::

            struct foo {
                struct rcu_head rcu;
                int a;
                char b;
                long c;
            };

        so that the reclaimer function can fetch the ``struct foo`` address
        and free it::

            void foo_reclaim(struct rcu_head *rp)
            {
                struct foo *fp = container_of(rp, struct foo, rcu);
                g_free(fp);
            }

            call_rcu1(&foo.rcu, foo_reclaim);

        ``call_rcu1`` is typically used via either the ``call_rcu`` or
        ``g_free_rcu`` macros, which handle the common case where the
        ``rcu_head`` member is the first of the struct.

``void call_rcu(T *p, void (*func)(T *p), field-name);``
        If the ``struct rcu_head`` is the first field in the struct, you can
        use this macro instead of ``call_rcu1``.

``void g_free_rcu(T *p, field-name);``
        This is a special-case version of ``call_rcu`` where the callback
        function is ``g_free``.
        In the example given in ``call_rcu1``, one could have written simply::

            g_free_rcu(&foo, rcu);

``typeof(*p) qatomic_rcu_read(p);``
        ``qatomic_rcu_read()`` is similar to ``qatomic_load_acquire()``, but
        it makes some assumptions about the code that calls it.  This allows
        a more optimized implementation.

        ``qatomic_rcu_read`` assumes that whenever a single RCU critical
        section reads multiple shared data items, these reads are either
        data-dependent or need no ordering.  This is almost always the
        case when using RCU, because read-side critical sections typically
        navigate one or more pointers (the pointers that are changed on
        every update) until reaching a data structure of interest,
        and then read from there.

        RCU read-side critical sections must use ``qatomic_rcu_read()`` to
        read data, unless concurrent writes are prevented by another
        synchronization mechanism.

        Furthermore, RCU read-side critical sections should traverse the
        data structure in a single direction, opposite to the direction
        in which the updater initializes it.
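
        For example (a sketch reusing the illustrative ``foo`` struct),
        the reads through the pointer are data-dependent on the pointer
        load, so no additional barriers are needed::

            rcu_read_lock();
            struct foo *p = qatomic_rcu_read(&foo);
            int a = p->a;   /* dependent read, ordered after the load */
            rcu_read_unlock();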

``void qatomic_rcu_set(p, typeof(*p) v);``
        ``qatomic_rcu_set()`` is similar to ``qatomic_store_release()``,
        though it also makes assumptions on the code that calls it in
        order to allow a more optimized implementation.

        In particular, ``qatomic_rcu_set()`` suffices for synchronization
        with readers, if the updater never mutates a field within a
        data item that is already accessible to readers.  This is the
        case when initializing a new copy of the RCU-protected data
        structure; just ensure that initialization of ``*p`` is carried out
        before ``qatomic_rcu_set()`` makes the data item visible to readers.
        If this rule is observed, writes will happen in the opposite
        order to reads in the RCU read-side critical sections (or there
        is just one update), and there will be no need for another
        synchronization mechanism to coordinate the accesses.
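
        A sketch of this rule (again using the illustrative ``foo``):
        initialize the new version completely, then publish it::

            struct foo *new = g_new0(struct foo, 1);
            new->a = 1;                  /* initialized before... */
            new->b = 'x';
            qatomic_rcu_set(&foo, new);  /* ...being made visible */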

The following APIs must be used before RCU is used in a thread:

``void rcu_register_thread(void);``
        Mark a thread as taking part in the RCU mechanism.  Such a thread
        will have to report quiescent points regularly, either manually
        or through the ``QemuCond``/``QemuSemaphore``/``QemuEvent`` APIs.

``void rcu_unregister_thread(void);``
        Mark a thread as not taking part anymore in the RCU mechanism.
        It is not a problem if such a thread reports quiescent points,
        either manually or by using the
        ``QemuCond``/``QemuSemaphore``/``QemuEvent`` APIs.

Note that these APIs are relatively heavyweight, and should **not** be
nested.
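
For example, a worker thread might bracket its lifetime as follows (a
sketch; ``worker_fn`` and ``done`` are illustrative names)::

    static void *worker_fn(void *opaque)
    {
        rcu_register_thread();
        while (!done) {
            rcu_read_lock();
            /* ... read RCU-protected data ... */
            rcu_read_unlock();
            /* Blocking on QemuCond/QemuSemaphore/QemuEvent reports a
             * quiescent point for this thread.  */
        }
        rcu_unregister_thread();
        return NULL;
    }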

Convenience macros
------------------

Two macros are provided that automatically release the read lock at the
end of the scope.

``RCU_READ_LOCK_GUARD()``
         Takes the lock and will release it at the end of the block it's
         used in.

``WITH_RCU_READ_LOCK_GUARD()  { code }``
         Is used at the head of a block to protect the code within the block.

Note that a ``goto`` out of the guarded block will also drop the lock.
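
A sketch of both macros (``foo`` is the illustrative pointer used in the
examples below)::

    int get_foo_a(void)
    {
        RCU_READ_LOCK_GUARD();  /* released when the function returns */
        return qatomic_rcu_read(&foo)->a;
    }

    void print_foo_a(void)
    {
        WITH_RCU_READ_LOCK_GUARD() {  /* released at the end of the block */
            printf("%d\n", qatomic_rcu_read(&foo)->a);
        }
    }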

Differences with Linux
----------------------

- Waiting on a mutex is possible, though discouraged, within an RCU critical
  section.  This is because spinlocks are rarely (if ever) used in userspace
  programming; not allowing this would prevent upgrading an RCU read-side
  critical section to become an updater.

- ``qatomic_rcu_read`` and ``qatomic_rcu_set`` replace ``rcu_dereference`` and
  ``rcu_assign_pointer``.  They take a **pointer** to the variable being accessed.

- ``call_rcu`` is a macro that has an extra argument (the name of the first
  field in the struct, which must be a struct ``rcu_head``), and expects the
  type of the callback's argument to be the type of the first argument.
  ``call_rcu1`` is the same as Linux's ``call_rcu``.


RCU Patterns
------------

Many patterns using reader-writer locks translate directly to RCU, with
the advantages of higher scalability and deadlock immunity.

In general, RCU can be used whenever it is possible to create a new
"version" of a data structure every time the updater runs.  This may
sound like a very strict restriction, however:

- the updater does not mean "everything that writes to a data structure",
  but rather "everything that involves a reclamation step".  See the
  array example below.

- in some cases, creating a new version of a data structure may actually
  be very cheap.  For example, modifying the "next" pointer of a singly
  linked list is effectively creating a new version of the list; see
  the sketch below.
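
For example, unlinking an element from a singly linked list (a sketch;
``el`` and ``prev`` are illustrative, and ``el`` is assumed to embed an
``rcu_head`` named ``rcu``) rewrites a single pointer, after which the
element can be reclaimed once a grace period has elapsed::

    /* Updater, with the list lock held.  Readers either still see el
     * (and may use it until their critical section ends) or skip it.  */
    qatomic_rcu_set(&prev->next, el->next);  /* removal phase */
    g_free_rcu(el, rcu);                     /* reclamation, deferred */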

Here are some frequently-used RCU idioms that are worth noting.


RCU list processing
^^^^^^^^^^^^^^^^^^^

TBD (not yet used in QEMU)


RCU reference counting
^^^^^^^^^^^^^^^^^^^^^^

Because grace periods are not allowed to complete while there is an RCU
read-side critical section in progress, the RCU read-side primitives
may be used as a restricted reference-counting mechanism.  For example,
consider the following code fragment::

    rcu_read_lock();
    p = qatomic_rcu_read(&foo);
    /* do something with p. */
    rcu_read_unlock();

The RCU read-side critical section ensures that the value of ``p`` remains
valid until after the ``rcu_read_unlock()``.  In some sense, it is acquiring
a reference to ``p`` that is later released when the critical section ends.
The write side simply looks like this (with appropriate locking)::

    qemu_mutex_lock(&foo_mutex);
    old = foo;
    qatomic_rcu_set(&foo, new);
    qemu_mutex_unlock(&foo_mutex);
    synchronize_rcu();
    free(old);

If the processing cannot be done purely within the critical section, it
is possible to combine this idiom with a "real" reference count::

    rcu_read_lock();
    p = qatomic_rcu_read(&foo);
    foo_ref(p);
    rcu_read_unlock();
    /* do something with p. */
    foo_unref(p);

The write side can be like this::

    qemu_mutex_lock(&foo_mutex);
    old = foo;
    qatomic_rcu_set(&foo, new);
    qemu_mutex_unlock(&foo_mutex);
    synchronize_rcu();
    foo_unref(old);

or with ``call_rcu`` (note that, per the signature above, the pointer
comes first and the callback second)::

    qemu_mutex_lock(&foo_mutex);
    old = foo;
    qatomic_rcu_set(&foo, new);
    qemu_mutex_unlock(&foo_mutex);
    call_rcu(old, foo_unref, rcu);

In both cases, the write side only performs removal.  Reclamation
happens when the last reference to a ``foo`` object is dropped.
Using ``synchronize_rcu()`` is undesirably expensive, because the
last reference may be dropped on the read side.  Hence you can
use ``call_rcu()`` instead::

    void foo_unref(struct foo *p)
    {
        if (qatomic_fetch_dec(&p->refcount) == 1) {
            call_rcu(p, foo_destroy, rcu);
        }
    }


Note that the same idioms would be possible with reader/writer
locks::

    read_lock(&foo_rwlock);         write_mutex_lock(&foo_rwlock);
    p = foo;                        p = foo;
    /* do something with p. */      foo = new;
    read_unlock(&foo_rwlock);       free(p);
                                    write_mutex_unlock(&foo_rwlock);

    ------------------------------------------------------------------

    read_lock(&foo_rwlock);         write_mutex_lock(&foo_rwlock);
    p = foo;                        old = foo;
    foo_ref(p);                     foo = new;
    read_unlock(&foo_rwlock);       foo_unref(old);
    /* do something with p. */      write_mutex_unlock(&foo_rwlock);
    read_lock(&foo_rwlock);
    foo_unref(p);
    read_unlock(&foo_rwlock);

``foo_unref`` could use a mechanism such as bottom halves to move deallocation
out of the write-side critical section.
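
As a rough sketch (the single hand-off slot and the names are
illustrative; a real implementation would queue multiple objects), the
write side could defer the final unref to a bottom half::

    static struct foo *foo_pending;  /* single-slot hand-off */
    static QEMUBH *foo_unref_bh;     /* qemu_bh_new(foo_unref_bh_cb, NULL) */

    static void foo_unref_bh_cb(void *opaque)
    {
        foo_unref(foo_pending);
    }

    /* write side, instead of calling foo_unref(old) directly: */
    foo_pending = old;
    qemu_bh_schedule(foo_unref_bh);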


RCU resizable arrays
^^^^^^^^^^^^^^^^^^^^

Resizable arrays can be used with RCU.  The expensive RCU synchronization
(or ``call_rcu``) only needs to take place when the array is resized.
The two items to take care of are:

- ensuring that the old version of the array is available between removal
  and reclamation;

- avoiding mismatches in the read side between the array data and the
  array size.

The first problem is avoided simply by not using ``realloc``.  Instead,
each resize will allocate a new array and copy the old data into it.
The second problem would arise if the size and the data pointers were
two members of a larger struct::

    struct mystuff {
        ...
        int data_size;
        int data_alloc;
        T   *data;
        ...
    };

Instead, we store the size of the array with the array itself::

    struct arr {
        int size;
        int alloc;
        T   data[];
    };
    struct arr *global_array;

    read side:
        rcu_read_lock();
        struct arr *array = qatomic_rcu_read(&global_array);
        x = i < array->size ? array->data[i] : -1;
        rcu_read_unlock();
        return x;

    write side (running under a lock):
        if (global_array->size == global_array->alloc) {
            /* Creating a new version.  */
            new_array = g_malloc(sizeof(struct arr) +
                                 global_array->alloc * 2 * sizeof(T));
            new_array->size = global_array->size;
            new_array->alloc = global_array->alloc * 2;
            memcpy(new_array->data, global_array->data,
                   global_array->alloc * sizeof(T));

            /* Removal phase.  */
            old_array = global_array;
            qatomic_rcu_set(&global_array, new_array);
            synchronize_rcu();

            /* Reclamation phase.  */
            g_free(old_array);
        }


References
----------

* The `Linux kernel RCU documentation <https://docs.kernel.org/RCU/>`__