/openbmc/linux/Documentation/filesystems/
  directory-locking.rst
      7: kinds of locks - per-inode (->i_rwsem) and per-filesystem
     11: always acquire the locks in order by increasing address. We'll call
     16: 1) read access. Locking rules: caller locks directory we are accessing.
     22: 3) object removal. Locking rules: caller locks parent, finds victim,
     23: locks victim and calls the method. Locks are exclusive.
     25: 4) rename() that is _not_ cross-directory. Locking rules: caller locks
     29: Take the locks that need to be taken, in inode pointer order if need
     34: After the locks had been taken, call the method. All locks are exclusive.
     43: All locks are exclusive.
     83: change until rename acquires all locks. (Proof: other cross-directory
    [all …]
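The ordering rule quoted at lines 11 and 29 is the heart of the scheme: when an operation needs several locks of the same kind, it takes them in order of increasing object address, so all lockers agree on one global order. A minimal kernel-style sketch of that discipline; the helper name is hypothetical (the kernel has its own, e.g. lock_two_nondirectories()):

    #include <linux/fs.h>

    /* Take two inode locks in address order, per directory-locking.rst.
     * Illustration only, not the kernel's actual helper. */
    static void lock_two_inodes_ordered(struct inode *a, struct inode *b)
    {
            if (a > b)
                    swap(a, b);             /* lower address first */
            inode_lock(a);
            if (b != a)
                    inode_lock_nested(b, I_MUTEX_NONDIR2);
    }

A globally consistent acquisition order is what rules out AB-BA deadlock between concurrent directory operations.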
/openbmc/qemu/scripts/
  analyse-locks-simpletrace.py
     14: "A simpletrace Analyser for checking locks."
     17: self.locks = 0
     24: self.mutex_records[mutex] = {"locks": 0,
     35: self.locks += 1
     37: rec["locks"] += 1
     74: print ("Total locks: %d, locked: %d, unlocked: %d" %
     75:        (analyser.locks, analyser.locked, analyser.unlocks))
     79:        key=lambda k_v: k_v[1]["locks"]):
     80: print ("Lock: %#x locks: %d, locked: %d, unlocked: %d" %
     81:        (key, val["locks"], val["locked"], val["unlocked"]))
    [all …]
/openbmc/linux/Documentation/locking/
  robust-futex-ABI.rst
      9: futexes, for kernel assist of cleanup of held locks on task exit.
     12: linked list in user space, where it can be updated efficiently as locks
     19: 2) internal kernel code at exit, to handle any listed locks held
     32: to do so, then improperly listed locks will not be cleaned up on exit,
     34: waiting on the same locks.
     88: specified 'offset'. Should a thread die while holding any such locks,
     89: the kernel will walk this list, mark any such locks with a bit
    106: robust_futexes used by that thread. The thread should link those locks
    108: other links between the locks, such as the reverse side of a double
    111: By keeping its locks linked this way, on a list starting with a 'head'
    [all …]
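The user-space list these excerpts describe is registered with the kernel once per thread. A minimal userspace sketch of that registration step, assuming the ABI as documented (glibc normally performs this itself when robust mutexes are first used):

    #include <linux/futex.h>        /* struct robust_list_head */
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Each thread publishes the head of its robust-futex list; on exit
     * the kernel walks it and marks held locks with FUTEX_OWNER_DIED. */
    static struct robust_list_head head = {
            .list            = { .next = &head.list }, /* empty circular list */
            .futex_offset    = 0,   /* list entry -> futex word offset */
            .list_op_pending = NULL,
    };

    static long register_robust_list(void)
    {
            /* There is no glibc wrapper; the syscall is invoked raw. */
            return syscall(SYS_set_robust_list, &head, sizeof(head));
    }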
  lockdep-design.rst
     11: The basic object the validator operates upon is a 'class' of locks.
     13: A class of locks is a group of locks that are logically the same with
     14: respect to locking rules, even if the locks may have multiple (possibly
     24: perspective, the two locks (L1 and L2) are not necessarily related; that
    111: Unused locks (e.g., mutexes) cannot be part of the cause of an error.
    143: Furthermore, two locks can not be taken in inverse order::
    149: deadlock - as attempts to acquire the two locks form a circle which
    153: operations; the validator will still find whether these locks can be
    170: any rule violation between the new lock and any of the held locks.
    188: could interrupt _any_ of the irq-unsafe or hardirq-unsafe locks, which
    [all …]
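The "inverse order" rule at line 143 is the classic AB-BA case. A sketch of the pattern the validator reports (illustrative kernel-style code, not taken from the document):

    #include <linux/mutex.h>

    static DEFINE_MUTEX(A);
    static DEFINE_MUTEX(B);

    static void path_one(void)
    {
            mutex_lock(&A);
            mutex_lock(&B);         /* lockdep records the A -> B order */
            mutex_unlock(&B);
            mutex_unlock(&A);
    }

    static void path_two(void)
    {
            mutex_lock(&B);
            mutex_lock(&A);         /* B -> A closes the circle: reported */
            mutex_unlock(&A);
            mutex_unlock(&B);
    }

Lockdep complains as soon as both orders have been observed once, whether or not the two paths ever actually race.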
  pi-futex.rst
     32: Firstly, sharing locks between multiple tasks is a common programming
     46: short-held locks: for example, a highprio audio playback thread is
     51: So once we accept that synchronization objects (locks) are an
     53: apps have a very fair expectation of being able to use locks, we've got
     58: inheritance only apply to kernel-space locks. But user-space locks are
     64: locks (such as futex-based pthread mutexes) is priority inheritance:
     80: normal futex-based locks: a 0 value means unlocked, and a value==TID
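Seen from an application, the priority-inheritance behaviour described here is requested through the standard pthreads API, which glibc implements on top of PI futexes. A minimal sketch (error handling elided):

    #include <pthread.h>

    static pthread_mutex_t pi_lock;

    static void init_pi_lock(void)
    {
            pthread_mutexattr_t attr;

            pthread_mutexattr_init(&attr);
            /* Back the mutex with a PI futex: a low-prio holder gets
             * boosted to the priority of the highest-prio waiter. */
            pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
            pthread_mutex_init(&pi_lock, &attr);
            pthread_mutexattr_destroy(&attr);
    }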
  mutex-design.rst
     15: or similar theoretical text books. Mutexes are sleeping locks which
     27: and implemented in kernel/locking/mutex.c. These locks use an atomic variable
     69: While formally kernel mutexes are sleepable locks, it is path (ii) that
     86: - Memory areas where held locks reside must not be freed.
     98: list of all locks held in the system, printout of them.
    100: - Detects self-recursing locks and prints out all relevant info.
    102: locks and tasks (and only those tasks).
    143: locks in the kernel. E.g: on x86-64 it is 32 bytes, where 'struct semaphore'
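A minimal sketch of the API the excerpts describe; in the uncontended case, lock and unlock each reduce to a single operation on the atomic variable mentioned at line 27:

    #include <linux/mutex.h>

    static DEFINE_MUTEX(my_lock);
    static int shared_counter;

    static void update_counter(void)
    {
            mutex_lock(&my_lock);   /* may sleep: process context only */
            shared_counter++;
            mutex_unlock(&my_lock);
    }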
  locktypes.rst
     15: - Sleeping locks
     16: - CPU local locks
     17: - Spinning locks
     26: Sleeping locks
     29: Sleeping locks can only be acquired in preemptible task context.
     34: versions of these primitives. In short, don't acquire sleeping locks from
     46: On PREEMPT_RT kernels, these lock types are converted to sleeping locks:
     53: CPU local locks
     65: Spinning locks
     71: On non-PREEMPT_RT kernels, these lock types are also spinning locks:
    [all …]
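The two main families side by side, as a sketch (on PREEMPT_RT the spinlock_t below is itself converted to a sleeping lock, as the excerpt at line 46 notes):

    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    static DEFINE_MUTEX(sleeping_lock);    /* preemptible task context only */
    static DEFINE_SPINLOCK(spinning_lock); /* usable from atomic context */

    static void examples(void)
    {
            mutex_lock(&sleeping_lock);    /* may sleep while waiting */
            mutex_unlock(&sleeping_lock);

            spin_lock(&spinning_lock);     /* busy-waits on non-PREEMPT_RT */
            spin_unlock(&spinning_lock);
    }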
  robust-futexes.rst
     11: what futexes are: normal futexes are special types of locks that in the
     45: (and in most cases there is none, futexes being fast lightweight locks)
     90: robust locks that userspace is holding (maintained by glibc) - which
     94: locks to be cleaned up?
    101: walks the list [not trusting it], and marks all locks that are owned by
    133: - no registration of individual locks is needed: robust mutexes don't
    152: million (!) held locks, using the new method [on a 2GHz CPU]:
    162: (1 million held locks are unheard of - we expect at most a handful of
    163: locks to be held at a time. Nevertheless it's nice to know that this
/openbmc/linux/kernel/locking/
  test-ww_mutex.c
    384: struct ww_mutex *locks;  [member]
    422: struct ww_mutex *locks = stress->locks;  [local, in stress_inorder_work()]
    441: err = ww_mutex_lock(&locks[order[n]], &ctx);  [in stress_inorder_work()]
    449: ww_mutex_unlock(&locks[order[contended]]);  [in stress_inorder_work()]
    452: ww_mutex_unlock(&locks[order[n]]);  [in stress_inorder_work()]
    455: ww_mutex_lock_slow(&locks[order[contended]], &ctx);  [in stress_inorder_work()]
    479: LIST_HEAD(locks);  [in stress_reorder_work()]
    494: ll->lock = &stress->locks[order[n]];  [in stress_reorder_work()]
    495: list_add(&ll->link, &locks);  [in stress_reorder_work()]
    503: list_for_each_entry(ll, &locks, link) {  [in stress_reorder_work()]
    [all …]
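stress_inorder_work() above exercises the standard ww_mutex acquire/backoff dance over an array of locks. Reduced to two mutexes, the pattern looks roughly like this (sketch; the class name is made up):

    #include <linux/ww_mutex.h>

    static DEFINE_WW_CLASS(demo_ww_class);  /* hypothetical class */

    static void take_both(struct ww_mutex *a, struct ww_mutex *b)
    {
            struct ww_acquire_ctx ctx;

            ww_acquire_init(&ctx, &demo_ww_class);
            ww_mutex_lock(a, &ctx);         /* first lock of a fresh ctx
                                             * cannot return -EDEADLK */
            while (ww_mutex_lock(b, &ctx) == -EDEADLK) {
                    /* Lost the age race: release what we hold, sleep on
                     * the contended lock, then retry the remainder. */
                    ww_mutex_unlock(a);
                    ww_mutex_lock_slow(b, &ctx);
                    swap(a, b);             /* 'a' is now the held one */
            }
            ww_acquire_done(&ctx);
            /* ... touch the data both locks protect ... */
            ww_mutex_unlock(a);
            ww_mutex_unlock(b);
            ww_acquire_fini(&ctx);
    }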
  lock_events_list.h
     59: LOCK_EVENT(rwsem_opt_lock)    /* # of opt-acquired write locks */
     62: LOCK_EVENT(rwsem_rlock)       /* # of read locks acquired */
     63: LOCK_EVENT(rwsem_rlock_steal) /* # of read locks by lock stealing */
     64: LOCK_EVENT(rwsem_rlock_fast)  /* # of fast read locks acquired */
     67: LOCK_EVENT(rwsem_wlock)       /* # of write locks acquired */
  lockdep_proc.c  (all hits in lockdep_stats_show())
    297: * All irq-safe locks may nest inside irq-unsafe locks,
    337: seq_printf(m, " hardirq-safe locks: %11lu\n",
    339: seq_printf(m, " hardirq-unsafe locks: %11lu\n",
    341: seq_printf(m, " softirq-safe locks: %11lu\n",
    343: seq_printf(m, " softirq-unsafe locks: %11lu\n",
    345: seq_printf(m, " irq-safe locks: %11lu\n",
    347: seq_printf(m, " irq-unsafe locks: %11lu\n",
    350: seq_printf(m, " hardirq-read-safe locks: %11lu\n",
    352: seq_printf(m, " hardirq-read-unsafe locks: %11lu\n",
    354: seq_printf(m, " softirq-read-safe locks: %11lu\n",
    [all …]
/openbmc/openbmc-test-automation/openpower/ext_interfaces/
  test_lock_management.robot
     29: [Documentation]  Acquire and release different read locks.
     63: [Documentation]  Acquire and release read and write locks after reboot.
     77: [Documentation]  Acquire and release read, write locks in loop.
    122: Verify Release Of Valid Locks
    123: [Documentation]  Release all valid locks.
    125: [Template]  Acquire And Release Multiple Locks
    183: [Documentation]  Failed to release locks from another session.
    192: [Documentation]  Acquire lock and after reboot the locks are removed as no persistency
    299: Get Empty Lock Records For Session Where No Locks Acquired
    300: [Documentation]  If session does not acquire locks then get lock should return
    [all …]
/openbmc/linux/include/drm/
  drm_modeset_lock.h
     38: * @locked: list of held locks
     42: * Each thread competing for a set of locks must use one acquire
     52: * drm_modeset_backoff() which drops locks and slow-locks the
     64: * list of held locks (drm_modeset_lock)
    152: * DRM_MODESET_LOCK_ALL_BEGIN - Helper to acquire modeset locks
    158: * Use these macros to simplify grabbing all modeset locks using a local
    162: * Any code run between BEGIN and END will be holding the modeset locks.
    167: * Drivers can acquire additional modeset locks. If any lock acquisition
    185: * DRM_MODESET_LOCK_ALL_END - Helper to release and cleanup modeset locks
    198: * successfully acquire the locks, ret will be whatever your code sets it to. If
    [all …]
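The BEGIN/END pair at lines 152 and 185 wraps the whole acquire-context dance, including -EDEADLK backoff and retry. A sketch of the intended shape (note the END macro's exact signature has varied across kernel versions):

    #include <drm/drm_modeset_lock.h>

    static int modeset_under_all_locks(struct drm_device *dev)
    {
            struct drm_modeset_acquire_ctx ctx;
            int ret;

            DRM_MODESET_LOCK_ALL_BEGIN(dev, ctx, 0, ret);

            /* All modeset locks are held here.  Additional locks taken
             * in this section must pass 'ctx' so backoff still works. */

            DRM_MODESET_LOCK_ALL_END(dev, ctx, ret);
            return ret;
    }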
/openbmc/linux/tools/testing/selftests/filelock/
  ofdlocks.c
     15: fl->l_pid = 0; // needed for OFD locks  [in lock_set()]
     27: fl->l_pid = 0; // needed for OFD locks  [in lock_get()]
     59: /* Make sure read locks do not conflict on different fds. */  [in main()]
     67: ksft_print_msg("[FAIL] read locks conflicted\n");  [in main()]
     70: /* Make sure read/write locks do conflict on different fds. */  [in main()]
     79: ("[SUCCESS] read and write locks conflicted\n");  [in main()]
     82: ("[SUCCESS] read and write locks not conflicted\n");  [in main()]
    121: /* Get info about the lock on second fd - no locks on it. */  [in main()]
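The l_pid = 0 lines are the tell-tale of open-file-description (OFD) locks: ownership follows the open file description rather than the process. A minimal sketch of taking one (Linux-specific fcntl command):

    #define _GNU_SOURCE             /* for F_OFD_SETLK */
    #include <fcntl.h>
    #include <string.h>

    static int ofd_read_lock(int fd)
    {
            struct flock fl;

            memset(&fl, 0, sizeof(fl));
            fl.l_type   = F_RDLCK;
            fl.l_whence = SEEK_SET;
            fl.l_start  = 0;
            fl.l_len    = 0;        /* length 0 == whole file */
            fl.l_pid    = 0;        /* must be 0 for OFD locks */

            return fcntl(fd, F_OFD_SETLK, &fl);
    }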
/openbmc/linux/drivers/gpu/drm/
  drm_modeset_lock.c
     63: * where all modeset locks need to be taken through drm_modeset_lock_all_ctx().
     71: * On top of these per-object locks using &ww_mutex there's also an overall
     75: * Finally there's a bunch of dedicated locks to protect drm core internal
    129: * drm_modeset_lock_all - take all modeset locks
    132: * This function takes all modeset locks, suitable where a more fine-grained
    133: * scheme isn't (yet) implemented. Locks must be dropped by calling the
    174: * We hold the locks now, so it is safe to stash the acquisition  [in drm_modeset_lock_all()]
    184: * drm_modeset_unlock_all - drop all modeset locks
    187: * This function drops all modeset locks taken by a previous call to the
    216: * drm_warn_on_modeset_not_all_locked - check that all modeset locks are locked
    [all …]
/openbmc/linux/fs/lockd/
  svcsubs.c
    161: * Delete a file after having released all locks, blocks and shares
    204: * Loop over all locks on the given file and perform the specified
    260: * Quick check whether there are still any locks, blocks or
    314: /* Traverse locks, blocks and shares of this file  [in nlm_traverse_files()]
    335: * Release file. If there are no more remote locks on this file,
    339: * contortions because the code in fs/locks.c creates, deletes and
    340: * splits locks without notification. Our only way is to walk the
    352: /* If there are no more locks etc, delete the file */  [in nlm_release_file()]
    401: /* we are destroying locks even though the client  [in nlmsvc_is_client()]
    435: "lockd: couldn't remove all locks held by %s\n",  [in nlmsvc_free_host_resources()]
    [all …]
/openbmc/linux/lib/
  bucket_locks.c
     10: * the number of locks per CPU to allocate. The size is rounded up
     14: int __alloc_bucket_spinlocks(spinlock_t **locks, unsigned int *locks_mask,  [argument, in __alloc_bucket_spinlocks()]
     43: *locks = tlocks;  [in __alloc_bucket_spinlocks()]
     50: void free_bucket_spinlocks(spinlock_t *locks)  [argument, in free_bucket_spinlocks()]
     52: kvfree(locks);  [in free_bucket_spinlocks()]
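A sketch of how a hash table consumes this helper, via the alloc_bucket_spinlocks() wrapper macro (variable names are illustrative): the allocator sizes the lock array from the bucket count and CPU count, rounds it up to a power of two, and hands back a mask for indexing:

    #include <linux/spinlock.h>

    static spinlock_t *bucket_locks;
    static unsigned int bucket_lock_mask;

    static int init_table_locks(size_t nr_buckets)
    {
            /* At most one lock per bucket, scaled as 4 locks per CPU. */
            return alloc_bucket_spinlocks(&bucket_locks, &bucket_lock_mask,
                                          nr_buckets, 4, GFP_KERNEL);
    }

    static spinlock_t *lock_for_hash(unsigned int hash)
    {
            return &bucket_locks[hash & bucket_lock_mask];
    }

free_bucket_spinlocks(bucket_locks) releases the array again.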
/openbmc/linux/Documentation/gpu/
  vgaarbiter.rst
     39: Close a user instance. Release locks made by the user
     44: "<card_ID>,decodes=<io_state>,owns=<io_state>,locks=<io_state> (ic,mc)"
     50: "locks" indicates what is locked by this card. If the card is
     61: acquires locks on target ("none" is an invalid io_state)
     63: non-blocking acquire locks on target (returns EBUSY if
     66: release locks on target
     68: release all locks on target held by this user (not implemented
     81: Note about locks:
     83: The driver keeps track of which user has which locks on which card. It
     87: Currently, a max of 16 cards can have locks simultaneously issued from
/openbmc/linux/drivers/md/
  dm-bio-prison-v2.h
     73: * Shared locks have a bio associated with them.
    103: * Locks a cell. No associated bio. Exclusive locks get priority. These
    104: * locks constrain whether the io locks are granted according to level.
    106: * Shared locks will still be granted if the lock_level is > (not = to) the
    141: * There may be shared locks still held at this point even if you quiesced
/openbmc/linux/drivers/hwspinlock/
  sun6i_hwspinlock.c  (all hits in sun6i_hwspinlock_probe())
    135: * to 0x4 represent 32, 64, 128 and 256 locks
    136: * but later datasheets (H5, H6) say 00, 01, 10, 11 represent 32, 64, 128 and 256 locks,
    137: * but that would mean H5 and H6 have 64 locks, while their datasheets talk about 32 locks
    138: * all the time, not a single mentioning of 64 locks
    143: * this is the reason 0x1 is considered being 32 locks and bit 30 is taken into account
    144: * verified on H2+ (datasheet 0x1 = 32 locks) and H5 (datasheet 01 = 64 locks)
/openbmc/linux/arch/x86/include/asm/
  spinlock.h
     19: * These are fair FIFO ticket locks, which support up to 2^16 CPUs.
     35: * can "mix" irq-safe locks - any writer needs to get a
     37: * read-locks.
     39: * On x86, we implement read-write locks using the generic qrwlock with
/openbmc/linux/drivers/scsi/
  scsi_devinfo.c
     61: {"Aashima", "IMAGERY 2400SP", "1.03", BLIST_NOLUN},   /* locks up */
     62: {"CHINON", "CD-ROM CDS-431", "H42", BLIST_NOLUN},     /* locks up */
     63: {"CHINON", "CD-ROM CDS-535", "Q14", BLIST_NOLUN},     /* locks up */
     64: {"DENON", "DRD-25X", "V", BLIST_NOLUN},               /* locks up */
     67: {"IBM", "2104-DU3", NULL, BLIST_NOLUN},               /* locks up */
     68: {"IBM", "2104-TU3", NULL, BLIST_NOLUN},               /* locks up */
     69: {"IMS", "CDD521/10", "2.06", BLIST_NOLUN},            /* locks up */
     70: {"MAXTOR", "XT-3280", "PR02", BLIST_NOLUN},           /* locks up */
     71: {"MAXTOR", "XT-4380S", "B3C", BLIST_NOLUN},           /* locks up */
     72: {"MAXTOR", "MXT-1240S", "I1.2", BLIST_NOLUN},         /* locks up */
    [all …]
/openbmc/linux/fs/
  locks.c
      3: * linux/fs/locks.c
      5: * We implement four types of file locks: BSD locks, posix locks, open
      6: * file description locks, and leases. For details about BSD locks,
     17: * Waiting and applied locks are all kept in trees whose properties are:
    126: * The global file_lock_list is only used for displaying /proc/locks, so we
    145: * We hash locks by lockowner in order to optimize searching for the lock a
    149: * buckets when we have more lockowners holding locks, but that's a little
    161: * requests (in contrast to those that are acting as records of acquired locks).
    223: pr_warn("Leaked locks on dev=0x%x:0x%x ino=0x%lx:\n",  [in locks_check_ctx_lists()]
    586: /* Check if two locks overlap each other.
    [all …]
/openbmc/linux/tools/memory-model/
  linux-kernel.bell
     46: unmatched-locks = Rcu-lock \ domain(matched)
     48: and unmatched = unmatched-locks | unmatched-unlocks
     50: and unmatched-locks-to-unlocks =
     51:     [unmatched-locks] ; po ; [unmatched-unlocks]
     52: and matched = matched | (unmatched-locks-to-unlocks \
/openbmc/linux/tools/perf/pmu-events/arch/x86/icelakex/
  uncore-cache.json
    5524, 5533, 5558, 5566, 5575, 5584, 5593, 5611, 5620 (nine identical hits):
        … specified by the subevent. Does not include addressless requests such as locks and interrupts.",
    5549: …pecified by the subevent. Does not include addressless requests such as locks and interrupts. : …
    [all …]