
Wound/Wait Deadlock-Proof Mutex Design
======================================

Please read mutex-design.rst first, as it applies to wait/wound mutexes too.

Motivation for WW-Mutexes
-------------------------
the same order in all contexts. That is directly under control of
from a global counter. In case of deadlock while locking all the buffers
younger task) unlocks all of the buffers that it has already locked, and then
and the deadlock handling approach is called Wait-Die. The name is based on
and dies. Hence Wait-Die.
There is also another algorithm called Wound-Wait:
transaction. Hence Wound-Wait.
The Wound-Wait algorithm is typically stated to generate fewer backoffs
than Wait-Die, but is, on the other hand, associated with more work than
Wait-Die when recovering from a backoff. Wound-Wait is also a preemptive
Wound-Wait transaction is considered preempted when it dies (returning
-EDEADLK) following a wound.
Concepts
--------
class also specifies what algorithm to use, Wound-Wait or Wait-Die.
killed its transaction after having dropped all already acquired locks.
From a simple semantics point-of-view the _slow functions are not strictly
contending lock (after having dropped all other already acquired locks) will
work correctly. After all if no other ww mutex has been acquired yet there's
prematurely return -EDEADLK. The advantage of the _slow functions is in
- ww_mutex_lock has a __must_check int return type, whereas ww_mutex_lock_slow
- When full debugging is enabled ww_mutex_lock_slow checks that all acquired
  block on the contending lock (preventing spinning through the -EDEADLK
Of course, all the usual variants for handling wake-ups due to signals are also
provided.
Usage
-----
The algorithm (Wait-Die vs Wound-Wait) is chosen by using either
DEFINE_WW_CLASS() (Wound-Wait) or DEFINE_WD_CLASS() (Wait-Die).
As a rough rule of thumb, use Wound-Wait iff you
Method 1, using a list in execbuf->buffers that's not allowed to be reordered.
Furthermore the lock helper can propagate the -EALREADY return code back to
  retry:
    if (entry->obj == res_obj) {
    ret = ww_mutex_lock(&entry->obj->lock, ctx);
    ww_mutex_unlock(&entry->obj->lock);
    ww_mutex_unlock(&res_obj->lock);
    if (ret == -EDEADLK) {
      /* we lost out in a seqno race, lock and retry.. */
      ww_mutex_lock_slow(&contended_entry->obj->lock, ctx);
      res_obj = contended_entry->obj;
      goto retry;
Method 2, using a list in execbuf->buffers that can be reordered. Same semantics
of duplicate entry detection using -EALREADY as method 1 above. But the
list-reordering allows for a bit more idiomatic code::
  ret = ww_mutex_lock(&entry->obj->lock, ctx);
  ww_mutex_unlock(&entry2->obj->lock);
  if (ret != -EDEADLK) {
  /* we lost out in a seqno race, lock and retry.. */
  ww_mutex_lock_slow(&entry->obj->lock, ctx);
   * buf->next to the first unlocked entry,
  list_del(&entry->head);
  list_add(&entry->head, list);
  ww_mutex_unlock(&entry->obj->lock);
Method 3 is useful if the list of objects is constructed ad-hoc and not upfront,
and edges can only be changed when holding the locks of all involved nodes. w/w
- They can handle lock-acquisition in any order which allows us to start walking
- Due to the -EALREADY return code signalling that a given object is already
  held there's no need for additional book-keeping to break cycles in the graph
- Since the list of objects is dynamically constructed (and might very well be
  different when retrying due to hitting the -EDEADLK die condition) there's
- On the other hand the dynamic object list construction also means that the -EALREADY return
method #3 below. The backoff/retry procedure will be a bit more involved, since
when the dynamic locking step hits -EDEADLK we also need to unlock all the
-EALREADY. Of course this would be different when using the _interruptible
    list_del(&entry->locked_list);
    ww_mutex_unlock(entry->ww_mutex);
  retry:
  /* re-init loop start state */
    ret = ww_mutex_lock(obj->ww_mutex, ctx);
    if (ret == -EALREADY) {
    if (ret == -EDEADLK) {
      list_add(&entry->locked_list, list);
      goto retry;
    list_add_tail(&entry->locked_list, list);
Implementation Details
----------------------
(2) For Wait-Die, among waiters with contexts, only the first one can have
    other locks acquired already (ctx->acquired > 0). Note that this waiter
The Wound-Wait preemption is implemented with a lazy-preemption scheme:
Some of the errors which will be warned about:
- Forgetting to call ww_acquire_fini or ww_acquire_init.
- Attempting to lock more mutexes after ww_acquire_done.
- Attempting to lock the wrong mutex after -EDEADLK and
  unlocking all mutexes.
- Attempting to lock the right mutex after -EDEADLK,
  before unlocking all mutexes.
- Calling ww_mutex_lock_slow before -EDEADLK was returned.
- Unlocking mutexes with the wrong unlock function.
- Calling one of the ww_acquire_* twice on the same context.
- Using a different ww_class for the mutex than for the ww_acquire_ctx.
- Normal lockdep errors that can result in deadlocks.
Some of the lockdep errors that can result in deadlocks:
- Calling ww_acquire_init to initialize a second ww_acquire_ctx before
  having called ww_acquire_fini on the first.
- 'normal' deadlocks that can occur.