 * See the COPYING file in the top-level directory.
#include "qemu/host-utils.h"
    MO_SIGN = 0x08, /* Sign-extended, otherwise zero-extended. */
 * do_unaligned_access hook if the guest address is not aligned.
 *
 * Some architectures (e.g. ARMv8) need the address which is aligned
 * to a size more than the size of the memory access.
 * Some architectures (e.g. SPARCv9) need an address which is aligned,
 * but less strictly than the natural alignment.
 * - unaligned access permitted (MO_UNALN).
 * - an alignment to the size of an access (MO_ALIGN);
 * - an alignment to a specified size, which may be more or less than
 *   the access size (MO_ALIGN_x where 'x' is a size in bytes);
 * MO_ATOM_IFALIGN: the operation must be single-copy atomic if it
 *   is aligned; if unaligned there is no atomicity.
 * MO_ATOM_IFALIGN_PAIR: the entire operation may be considered to
 *   be a pair of half-sized operations which are packed together
 *   for convenience, with single-copy atomicity on each half if
 *   the half is aligned.
 *   This is the atomicity e.g. of Arm pre-FEAT_LSE2 LDP.
 * MO_ATOM_WITHIN16: the operation is single-copy atomic, even if it
 *   is unaligned, so long as it does not cross a 16-byte boundary;
 *   if it crosses a 16-byte boundary there is no atomicity.
 * MO_ATOM_WITHIN16_PAIR: the entire operation is single-copy atomic,
 *   if it happens to be within a 16-byte boundary, otherwise it
 *   devolves to a pair of half-sized MO_ATOM_WITHIN16 operations.
 *   Depending on alignment, one or both will be single-copy atomic.
 * MO_ATOM_SUBALIGN: the operation is single-copy atomic by parts
 *   by the alignment. E.g. if the address is 0 mod 4, then each
 *   4-byte subobject is single-copy atomic.
 * Note the default (i.e. 0) value is single-copy atomic to the
 * size of the operation, if aligned. This retains the behaviour
 * from before these were introduced.
    /* In size_memop(): size must be a power of 2 between 1 and 8. */
    assert((size & (size - 1)) == 0 && size >= 1 && size <= 8);
    /* In memop_atomicity_bits(): pair modes halve the log2 size. */
    size = size ? size - 1 : 0;