
Searched full:readers (Results 1 – 25 of 401) sorted by relevance


/openbmc/linux/kernel/locking/
rwbase_rt.c
8 * 2) Remove the reader BIAS to force readers into the slow path
9 * 3) Wait until all readers have left the critical section
14 * 2) Set the reader BIAS, so readers can use the fast path again
15 * 3) Unlock rtmutex, to release blocked readers
34 * active readers. A blocked writer would force all newly incoming readers
45 * The lock/unlock of readers can run in fast paths: lock and unlock are only
58 * Increment reader count, if sem->readers < 0, i.e. READER_BIAS is in rwbase_read_trylock()
61 for (r = atomic_read(&rwb->readers); r < 0;) { in rwbase_read_trylock()
62 if (likely(atomic_try_cmpxchg_acquire(&rwb->readers, &r, r + 1))) in rwbase_read_trylock()
122 atomic_inc(&rwb->readers); in __rwbase_read_lock()
[all …]
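The rwbase_rt.c matches above describe the reader-bias scheme: while the bias keeps the counter negative, a reader needs only a single atomic cmpxchg to enter; a writer removes the bias to push new readers onto the slow path and waits for the remaining count to drain. Below is a simplified userspace sketch of that idea only; the names, the bias constant, and the spin-wait are illustrative, and the real kernel code blocks on an rtmutex instead of spinning.

#include <limits.h>
#include <stdatomic.h>
#include <stdbool.h>

#define READER_BIAS  (INT_MIN / 2)        /* large negative value: no writer */

struct rwbase_sketch {
    atomic_int readers;                   /* holds READER_BIAS when idle */
};

static void rwbase_sketch_init(struct rwbase_sketch *rwb)
{
    atomic_init(&rwb->readers, READER_BIAS);
}

/* Reader fast path: succeeds only while the bias is in place (count < 0). */
static bool sketch_read_trylock(struct rwbase_sketch *rwb)
{
    int r = atomic_load_explicit(&rwb->readers, memory_order_relaxed);

    while (r < 0) {                        /* bias present => no active writer */
        if (atomic_compare_exchange_weak_explicit(&rwb->readers, &r, r + 1,
                                                  memory_order_acquire,
                                                  memory_order_relaxed))
            return true;
    }
    return false;                          /* writer removed the bias: slow path */
}

static void sketch_read_unlock(struct rwbase_sketch *rwb)
{
    atomic_fetch_sub_explicit(&rwb->readers, 1, memory_order_release);
}

/* Writer: remove the bias so new readers fail the fast path, then wait for
 * the readers already inside to leave (count drops back to 0). */
static void sketch_write_lock(struct rwbase_sketch *rwb)
{
    atomic_fetch_sub_explicit(&rwb->readers, READER_BIAS, memory_order_acquire);
    while (atomic_load_explicit(&rwb->readers, memory_order_acquire) != 0)
        ;                                  /* the real code sleeps, not spins */
}

/* Writer unlock: restore the bias so readers can use the fast path again. */
static void sketch_write_unlock(struct rwbase_sketch *rwb)
{
    atomic_fetch_add_explicit(&rwb->readers, READER_BIAS, memory_order_release);
}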
percpu-rwsem.c
60 * Conversely, any readers that increment their sem->read_count after in __percpu_down_read_trylock()
113 * We use EXCLUSIVE for both readers and writers to preserve FIFO order,
114 * and play games with the return value to allow waking multiple readers.
116 * Specifically, we wake readers until we've woken a single writer, or until a
138 return !reader; /* wake (readers until) 1 writer */ in percpu_rwsem_wake_function()
204 * newly arriving readers increment a given counter, they will immediately
230 /* Notify readers to take the slow path. */ in percpu_down_write()
235 * Having sem->block set makes new readers block. in percpu_down_write()
248 /* Wait for all active readers to complete. */ in percpu_down_write()
262 * that new readers might fail to see the results of this writer's in percpu_up_write()
rwsem.c
38 * - Bit 0: RWSEM_READER_OWNED - The rwsem is owned by readers
55 * is involved. Ideally we would like to track all the readers that own
109 * 1) rwsem_mark_wake() for readers -- set, clear
296 * The lock is owned by readers when
301 * Having some reader bits set is not enough to guarantee a readers owned
302 * lock as the readers may be in the process of backing out from the count
350 RWSEM_WAKE_READERS, /* Wake readers only */
362 * Magic number to batch-wakeup waiting readers, even when writers are
409 * Implies rwsem_del_waiter() for all woken readers.
433 * Readers, on the other hand, will block as they in rwsem_mark_wake()
[all …]
qrwlock.c
24 * Readers come here when they cannot get the lock without waiting in queued_read_lock_slowpath()
28 * Readers in interrupt context will get the lock immediately in queued_read_lock_slowpath()
80 /* Set the waiting flag to notify readers that a writer is pending */ in queued_write_lock_slowpath()
83 /* When no more readers or writers, set the locked flag */ in queued_write_lock_slowpath()
/openbmc/linux/Documentation/RCU/
checklist.rst
30 One final exception is where RCU readers are used to prevent
40 RCU does allow *readers* to run (almost) naked, but *writers* must
85 The whole point of RCU is to permit readers to run without
86 any locks or atomic operations. This means that readers will
99 locks (that are acquired by both readers and writers)
100 that guard per-element state. Fields that the readers
106 c. Make updates appear atomic to readers. For example,
110 appear to be atomic to RCU readers, nor will sequences
118 d. Carefully order the updates and the reads so that readers
138 a. Readers must maintain proper ordering of their memory
[all …]
rcu.rst
10 must be long enough that any readers accessing the item being deleted have
21 The advantage of RCU's two-part approach is that RCU readers need
26 in read-mostly situations. The fact that RCU readers need not
30 if the RCU readers give no indication when they are done?
32 Just as with spinlocks, RCU readers are not permitted to
42 same effect, but require that the readers manipulate CPU-local
whatisRCU.rst
56 Section 1, though most readers will profit by reading this section at
79 new versions of these data items), and can run concurrently with readers.
81 readers is the semantics of modern CPUs guarantee that readers will see
85 removal phase. Because reclaiming data items can disrupt any readers
87 not start until readers no longer hold references to those data items.
91 reclamation phase until all readers active during the removal phase have
93 callback that is invoked after they finish. Only readers that are active
101 readers cannot gain a reference to it.
103 b. Wait for all previous readers to complete their RCU read-side
106 c. At this point, there cannot be any readers who hold references
[all …]
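The whatisRCU.rst matches above outline the standard three-step update pattern: (a) unlink the item so new readers cannot find it, (b) wait for all pre-existing readers to finish their read-side critical sections, (c) reclaim. Here is a minimal kernel-style sketch of that pattern using the stock RCU list API; the item type, list, and spinlock are made-up placeholders.

#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct item {
    struct list_head node;
    int key;
};

static LIST_HEAD(items);
static DEFINE_SPINLOCK(items_lock);    /* serializes updaters only */

static void remove_item(struct item *it)
{
    spin_lock(&items_lock);
    list_del_rcu(&it->node);           /* a. unlink: new readers cannot reach it */
    spin_unlock(&items_lock);

    synchronize_rcu();                 /* b. wait for all pre-existing readers */

    kfree(it);                         /* c. now safe to reclaim */
}

/* Readers traverse locklessly inside an RCU read-side critical section. */
static bool item_exists(int key)
{
    struct item *it;
    bool found = false;

    rcu_read_lock();
    list_for_each_entry_rcu(it, &items, node) {
        if (it->key == key) {
            found = true;
            break;
        }
    }
    rcu_read_unlock();
    return found;
}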
lockdep.rst
43 invoked by both RCU readers and updaters.
47 is invoked by both RCU-bh readers and updaters.
51 is invoked by both RCU-sched readers and updaters.
55 is invoked by both SRCU readers and updaters.
/openbmc/linux/include/linux/
rwbase_rt.h
12 atomic_t readers; member
18 .readers = ATOMIC_INIT(READER_BIAS), \
25 atomic_set(&(rwbase)->readers, READER_BIAS); \
31 return atomic_read(&rwb->readers) != READER_BIAS; in rw_base_is_locked()
36 return atomic_read(&rwb->readers) > 0; in rw_base_is_contended()
rcu_sync.h
16 /* Structure to mediate between updaters and fastpath-using readers. */
26 * rcu_sync_is_idle() - Are readers permitted to use their fastpaths?
29 * Returns true if readers are permitted to use their fastpaths. Must be
/openbmc/linux/kernel/rcu/
sync.c
28 * rcu_sync_enter_start - Force readers onto slow path for multiple updates
58 * If it is called by rcu_sync_enter() it signals that all the readers were
67 * readers back onto their fastpaths (after a grace period). If both
70 * rcu_sync_exit(). Otherwise, set all state back to idle so that readers
107 * rcu_sync_enter() - Force readers onto slowpath
110 * This function is used by updaters who need readers to make use of
113 * tells readers to stay off their fastpaths. A later call to
159 * rcu_sync_exit() - Allow readers back onto fast path after grace period
163 * now allow readers to make use of their fastpaths after a grace period
165 * calls to rcu_sync_is_idle() will return true, which tells readers that
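The sync.c matches above describe the updater-facing rcu_sync API: rcu_sync_enter() forces fastpath readers onto their slow path, and rcu_sync_exit() lets them back onto their fastpaths after a grace period. Below is a small hypothetical usage sketch from an updater's point of view; the gate name and the guarded update are placeholders.

#include <linux/rcu_sync.h>

static struct rcu_sync my_gate;        /* hypothetical gate for some fast path */

static void my_gate_setup(void)
{
    rcu_sync_init(&my_gate);           /* starts idle: readers use fastpaths */
}

static void my_slow_update(void)
{
    rcu_sync_enter(&my_gate);          /* rcu_sync_is_idle() now returns false,
                                        * telling readers to take the slow path */
    /* ... perform the update that fastpath readers must not race with ... */
    rcu_sync_exit(&my_gate);           /* readers regain fastpaths after a GP */
}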
/openbmc/linux/Documentation/locking/
lockdep-design.rst
405 spin_lock() or write_lock()), non-recursive readers (i.e. shared lockers, like
406 down_read()) and recursive readers (recursive shared lockers, like rcu_read_lock()).
410 r: stands for non-recursive readers.
411 R: stands for recursive readers.
412 S: stands for all readers (non-recursive + recursive), as both are shared lockers.
413 N: stands for writers and non-recursive readers, as both are not recursive.
417 Recursive readers, as their name indicates, are the lockers allowed to acquire
421 While non-recursive readers will cause a self deadlock if trying to acquire inside
424 The difference between recursive readers and non-recursive readers is because:
425 recursive readers get blocked only by a write lock *holder*, while non-recursive
[all …]
seqlock.rst
9 lockless readers (read-only retry loops), and no writer starvation. They
23 is odd and indicates to the readers that an update is in progress. At
25 even again which lets readers make progress.
153 from interruption by readers. This is typically the case when the read
195 1. Normal Sequence readers which never block a writer but they must
206 2. Locking readers which will wait if a writer or another locking reader
218 according to a passed marker. This is used to avoid lockless readers
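The seqlock.rst matches above describe the sequence-counter protocol: the writer leaves the count odd while an update is in progress, and lockless readers retry if the count was odd or changed across their read. A short hypothetical example of that protocol using the seqlock_t API follows; the protected pair of values is invented.

#include <linux/seqlock.h>
#include <linux/types.h>

static DEFINE_SEQLOCK(state_lock);
static u64 state_a, state_b;           /* must always be read as a pair */

static void state_update(u64 a, u64 b)
{
    write_seqlock(&state_lock);        /* count becomes odd: update in flight */
    state_a = a;
    state_b = b;
    write_sequnlock(&state_lock);      /* count even again: readers may finish */
}

static void state_read(u64 *a, u64 *b)
{
    unsigned int seq;

    do {
        seq = read_seqbegin(&state_lock);      /* waits out an odd count */
        *a = state_a;
        *b = state_b;
    } while (read_seqretry(&state_lock, seq)); /* retry if a writer ran */
}

This shape suits read-mostly data where readers must never block a writer: readers pay only a retry loop, and the writer's critical section stays short.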
locktypes.rst
95 readers.
135 rw_semaphore is a multiple readers and single writer lock mechanism.
141 exist special-purpose interfaces that allow non-owner release for readers.
151 readers, a preempted low-priority reader will continue holding its lock,
152 thus starving even high-priority writers. In contrast, because readers
155 writer from starving readers.
299 rwlock_t is a multiple readers and single writer lock mechanism.
314 readers, a preempted low-priority reader will continue holding its lock,
315 thus starving even high-priority writers. In contrast, because readers
318 preventing that writer from starving readers.
/openbmc/linux/drivers/misc/ibmasm/
event.c
30 list_for_each_entry(reader, &sp->event_buffer->readers, node) in wake_up_event_readers()
39 * event readers.
40 * There is no reader marker in the buffer, therefore readers are
73 * Called by event readers (initiated from user space through the file
123 list_add(&reader->node, &sp->event_buffer->readers); in ibmasm_event_reader_register()
153 INIT_LIST_HEAD(&buffer->readers); in ibmasm_event_buffer_init()
/openbmc/qemu/include/block/
graph-lock.h
31 * readers, mostly coroutines running in other AioContext thus other threads.
34 * graph, and readers (all other coroutines running in various AioContext),
42 * The readers (coroutines in multiple AioContext) are free to
54 * so that the writer is always aware of all readers.
92 * This list is used to obtain the total number of readers
120 * all readers that are waiting.
137 * readers currently running, or waits until the current
152 * Read terminated, decrease the count of readers in the current aiocontext.
182 * so that incoming readers will pause.
/openbmc/qemu/block/
graph-lock.c
41 * The count of readers must remain correct, so the AioContext's
47 /* Queue of readers waiting for the writer to finish */
51 /* How many readers are currently reading the graph. */
104 /* shouldn't overflow unless there are 2^31 readers */ in reader_count()
122 * Wait by allowing other coroutine (and possible readers) to continue. in bdrv_graph_wrlock()
136 * to other threads. That way no more readers can sneak in after we've in bdrv_graph_wrlock()
215 * Then the writer will set has_writer to 0 and wake up all readers, in bdrv_graph_co_rdlock()
219 * then it will set has_writer to 0 and wake up all other readers. in bdrv_graph_co_rdlock()
/openbmc/linux/drivers/misc/cardreader/
Kconfig
9 Alcor Micro card readers support access to many types of memory cards,
20 Realtek card readers support access to many types of memory cards,
29 Select this option to get support for Realtek USB 2.0 card readers
/openbmc/linux/fs/btrfs/
locking.c
115 * - try-lock semantics for readers and writers
325 * if there are pending readers no new writers would be allowed to come in and
331 atomic_set(&lock->readers, 0); in btrfs_drew_lock_init()
340 if (atomic_read(&lock->readers)) in btrfs_drew_try_write_lock()
345 /* Ensure writers count is updated before we check for pending readers */ in btrfs_drew_try_write_lock()
347 if (atomic_read(&lock->readers)) { in btrfs_drew_try_write_lock()
360 wait_event(lock->pending_writers, !atomic_read(&lock->readers)); in btrfs_drew_write_lock()
372 atomic_inc(&lock->readers); in btrfs_drew_read_lock()
391 if (atomic_dec_and_test(&lock->readers)) in btrfs_drew_read_unlock()
/openbmc/linux/arch/x86/include/asm/
spinlock.h
30 * Read-write spinlocks, allowing multiple readers
33 * NOTE! it is quite common to have readers in interrupts
36 * irq-safe write-lock, but readers can get non-irqsafe
/openbmc/linux/drivers/hid/
hid-roccat.c
18 * It is inspired by hidraw, but uses only one circular buffer for all readers.
47 struct list_head readers; member
48 /* protects modifications of readers list */
52 * circular_buffer has one writer and multiple readers with their own
191 list_add_tail(&reader->node, &device->readers); in roccat_open()
239 * roccat_report_event() - output data to readers
270 list_for_each_entry(reader, &device->readers, node) { in roccat_report_event()
339 INIT_LIST_HEAD(&device->readers); in roccat_connect()
/openbmc/openbmc/meta-openembedded/meta-oe/recipes-graphics/fbida/files/
fbida-gcc10.patch
21 #include "readers.h"
29 --- fbida-2.14/readers.c.org 2020-03-15 17:01:18.692683597 +0100
30 +++ fbida-2.14/readers.c 2020-03-15 16:57:19.141632384 +0100
33 #include "readers.h"
/openbmc/linux/arch/sh/include/asm/
spinlock-cas.h
44 * Read-write spinlocks, allowing multiple readers but only one writer.
46 * NOTE! it is quite common to have readers in interrupts but no interrupt
48 * needs to get a irq-safe write-lock, but readers can get non-irqsafe
/openbmc/phosphor-webui/app/common/styles/elements/
paginate.scss
48 /* screen readers only */
88 /* screen readers only */
100 /* screen readers only */
126 /* screen readers only */
/openbmc/qemu/tests/unit/
test-rcu-list.c
4 * usage: rcuq_test <readers> <duration>
316 printf("%s: %d readers; 1 updater; nodes read: " \ in rcu_qtest()
351 int duration = 0, readers = 0; in main() local
371 readers = strtoul(argv[2], NULL, 0); in main()
373 if (duration && readers) { in main()
374 rcu_qtest(argv[0], duration, readers); in main()
