/openbmc/linux/mm/
  zpool.c
    101: * the requested module, if needed, but there is no guarantee the module will
    141: * Implementations must guarantee this to be thread-safe.
    192: * Implementations must guarantee this to be thread-safe,
    214: * Implementations must guarantee this to be thread-safe.
    230: * Implementations must guarantee this to be thread-safe.
    251: * Implementations must guarantee this to be thread-safe.
    266: * This frees previously allocated memory. This does not guarantee
    270: * Implementations must guarantee this to be thread-safe,
/openbmc/linux/rust/kernel/
  allocator.rs
    25: // The alignment requirement exceeds the slab guarantee, thus try to enlarge the size in krealloc_aligned()
    26: // to use the "power-of-two" size/alignment guarantee (see comments in `kmalloc()` for in krealloc_aligned()
    30: // `layout.align()`, so `next_power_of_two` gives enough alignment guarantee. in krealloc_aligned()
  build_assert.rs
    8: /// If the compiler or optimizer cannot guarantee that `build_error!` can never
    36: /// will panic. If the compiler or optimizer cannot guarantee the condition will
  types.rs
    184: // The type invariants guarantee that `unwrap` will succeed. in deref()
    191: // The type invariants guarantee that `unwrap` will succeed. in deref_mut()
    340: // INVARIANT: The safety requirements guarantee that the new instance now owns the in from_raw()
    361: // SAFETY: The type invariants guarantee that the object is valid. in deref()
    376: // SAFETY: The type invariants guarantee that the `ARef` owns the reference we're about to in drop()
/openbmc/linux/kernel/sched/
  membarrier.c
    20: * order to enforce the guarantee that any writes occurring on CPU0 before
    42: * and r2 == 0. This violates the guarantee that membarrier() is
    56: * order to enforce the guarantee that any writes occurring on CPU1 before
    77: * the guarantee that membarrier() is supposed to provide.
    181: * A sync_core() would provide this guarantee, but in ipi_sync_core()
    214: * guarantee that no memory access following registration is reordered in ipi_sync_rq_state()
    224: * guarantee that no memory access prior to exec is reordered after in membarrier_exec_mmap()
    443: * mm and in the current runqueue to guarantee that no memory in sync_runqueues_membarrier_state()
/openbmc/linux/include/linux/
  rbtree_latch.h
    9: * lockless lookups; we cannot guarantee they return a correct result.
    21: * However, while we have the guarantee that there is at all times one stable
    22: * copy, this does not guarantee an iteration will not observe modifications.
    61: * guarantee on which of the elements matching the key is found. See
  types.h
    222: * The alignment is required to guarantee that bit 0 of @next will be
    226: * This guarantee is important for few reasons:
    229: * which encode PageTail() in bit 0. The guarantee is needed to avoid
  u64_stats_sync.h
    26: * 4) If reader fetches several counters, there is no guarantee the whole values
    47: * snapshot for each variable (but no guarantee on several ones)
/openbmc/linux/arch/x86/include/asm/vdso/
  gettimeofday.h
    204: * Note: The kernel and hypervisor must guarantee that cpu ID in vread_pvclock()
    208: * preemption, it cannot guarantee that per-CPU pvclock time in vread_pvclock()
    214: * guarantee than we get with a normal seqlock. in vread_pvclock()
    216: * On Xen, we don't appear to have that guarantee, but Xen still in vread_pvclock()
/openbmc/linux/kernel/printk/
  printk_ringbuffer.c
    455: * Guarantee the state is loaded before copying the descriptor in desc_read()
    487: * 1. Guarantee the descriptor content is loaded before re-checking in desc_read()
    503: * 2. Guarantee the record data is loaded before re-checking the in desc_read()
    677: * 1. Guarantee the block ID loaded in in data_push_tail()
    704: * 2. Guarantee the descriptor state loaded in in data_push_tail()
    744: * Guarantee any descriptor states that have transitioned to in data_push_tail()
    829: * Guarantee any descriptor states that have transitioned to in desc_push_tail()
    839: * Guarantee the last state load from desc_read() is before in desc_push_tail()
    891: * Guarantee the head ID is read before reading the tail ID. in desc_reserve()
    925: * 1. Guarantee the tail ID is read before validating the in desc_reserve()
    [all …]
/openbmc/linux/Documentation/locking/
  spinlocks.rst
    19: spinlock itself will guarantee the global lock, so it will guarantee that
    117: guarantee the same kind of exclusive access, and it will be much faster.
/openbmc/linux/rust/alloc/vec/
  is_zero.rs
    120: // `Option<num::NonZeroU32>` and similar have a representation guarantee that
    121: // they're the same size as the corresponding `u32` type, as well as a guarantee
    189: // SAFETY: This is *not* a stable layout guarantee, but
/openbmc/linux/Documentation/core-api/
  refcount-vs-atomic.rst
    84: Memory ordering guarantee changes:
    97: Memory ordering guarantee changes:
    108: Memory ordering guarantee changes:
/openbmc/qemu/util/
  cpuinfo-i386.c
    66: * guarantee that the 16-byte memory operations performed in cpuinfo_init()
    76: * AMD has provided an even stronger guarantee that processors in cpuinfo_init()
/openbmc/linux/Documentation/driver-api/
  reset.rst
    87: Exclusive resets on the other hand guarantee direct control.
    99: is no guarantee that calling reset_control_assert() on a shared reset control
    152: The reset control API does not guarantee the order in which the individual
/openbmc/linux/tools/memory-model/Documentation/
  ordering.txt
    101: with void return types) do not guarantee any ordering whatsoever. Nor do
    106: operations such as atomic_read() do not guarantee full ordering, and
    130: such as atomic_inc() and atomic_dec() guarantee no ordering whatsoever.
    150: atomic_inc() implementations do not guarantee full ordering, thus
    278: from "x" instead of writing to it. Then an smp_wmb() could not guarantee
    501: and further do not guarantee "atomic" access. For example, the compiler
/openbmc/linux/Documentation/driver-api/usb/
  anchors.rst
    55: Therefore no guarantee is made that the URBs have been unlinked when
    82: destinations in one anchor you have no guarantee the chronologically
/openbmc/linux/rust/
  helpers.c
    12: * guarantee codegen will be performed for a non-inline function either.
    17: * All symbols are exported as GPL-only to guarantee no GPL-only feature is
/openbmc/linux/arch/arc/include/asm/
  futex.h
    82: preempt_disable(); /* to guarantee atomic r-m-w of futex op */ in arch_futex_atomic_op_inuser()
    131: preempt_disable(); /* to guarantee atomic r-m-w of futex op */ in futex_atomic_cmpxchg_inatomic()
/openbmc/linux/Documentation/
  memory-barriers.txt
    332: of the standard containing this guarantee is Section 3.14, which
    382: A write memory barrier gives a guarantee that all the STORE operations
    440: A read barrier is an address-dependency barrier plus a guarantee that all
    457: A general memory barrier gives a guarantee that all the LOAD and STORE
    528: There are certain things that the Linux kernel memory barriers do not guarantee:
    530: (*) There is no guarantee that any of the memory accesses specified before a
    535: (*) There is no guarantee that issuing a memory barrier on one CPU will have
    540: (*) There is no guarantee that a CPU will see the correct order of effects
    545: (*) There is no guarantee that some intervening piece of off-the-CPU
    890: However, they do -not- guarantee any other sort of ordering:
    [all …]
/openbmc/openbmc/meta-openembedded/meta-oe/licenses/
  Kilgard
    4: provided without guarantee or warrantee expressed or implied. This
/openbmc/openbmc/poky/meta/files/common-licenses/
  SAX-PD-2.0
    6: SAX comes with NO WARRANTY or guarantee of fitness for any
/openbmc/openbmc/poky/bitbake/lib/bs4/
  formatter.py
    19: * 'minimal' - Only make the substitutions necessary to guarantee
    26: * 'minimal' - Only make the substitutions necessary to guarantee
/openbmc/linux/Documentation/arch/sh/
  booting.rst
    8: guarantee any particular initial register state, kernels built to
/openbmc/linux/Documentation/filesystems/
  xfs-delayed-logging-design.rst
    17: guarantee forwards progress for long running transactions with finite initial
    51: followed to guarantee forwards progress and prevent deadlocks.
    119: there is no guarantee of how much of the operation reached stale storage. Hence
    121: the high level operation must use intents and deferred operations to guarantee
    130: xfs_trans_commit() does not guarantee that the modification has been committed
    154: provide a forwards progress guarantee so that no modification ever stalls
    160: A transaction reservation provides a guarantee that there is physical log space
    178: for the transaction that is calculated at mount time. We must guarantee that the
    416: It should be noted that this does not change the guarantee that log recovery
    892: guarantee which context the pin count is associated with. This is because of
    [all …]