9f4bf4ba | 03-Jun-2019 | Peter Xu <peterx@redhat.com>
kvm: Persistent per kvmslot dirty bitmap
When synchronizing the dirty bitmap from kernel KVM we do it in a per-kvmslot fashion, and we allocate the userspace bitmap for each ioctl. This patch instead makes the bitmap cache persistent, so that we don't need to g_malloc0() every time.
More importantly, the cached per-kvmslot dirty bitmap will be used further when we add support for KVM_CLEAR_DIRTY_LOG: it lets us guarantee that we never clear any unknown dirty bits, which would otherwise be a severe data-loss issue for the migration code.
Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20190603065056.25211-9-peterx@redhat.com>
Signed-off-by: Juan Quintela <quintela@redhat.com>
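A minimal standalone sketch of the idea described above, using hypothetical names (mem_slot, slot_init_dirty_bitmap, slot_sync_dirty_log) rather than QEMU's actual KVMSlot code: the user-space bitmap is allocated once when the slot is set up and reused on every sync, instead of being g_malloc0()'d per ioctl.

```c
/* Sketch: persistent per-slot dirty bitmap (hypothetical names, not QEMU code). */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096UL

struct mem_slot {
    uint64_t memory_size;       /* slot size in bytes */
    unsigned long *dirty_bmap;  /* allocated once, reused for every sync */
    size_t dirty_bmap_size;     /* bitmap size in bytes */
};

/* Allocate the bitmap once, when the slot is registered. */
void slot_init_dirty_bitmap(struct mem_slot *slot)
{
    size_t pages = slot->memory_size / PAGE_SIZE;
    /* Round up to 64-bit words, matching the KVM_GET_DIRTY_LOG layout. */
    slot->dirty_bmap_size = ((pages + 63) / 64) * sizeof(uint64_t);
    slot->dirty_bmap = calloc(1, slot->dirty_bmap_size);
}

/* On each sync, clear and reuse the cached bitmap instead of reallocating. */
void slot_sync_dirty_log(struct mem_slot *slot)
{
    memset(slot->dirty_bmap, 0, slot->dirty_bmap_size);
    /* ... the real code would now pass slot->dirty_bmap to the
     * KVM_GET_DIRTY_LOG ioctl and process the returned bits ... */
}
```

Keeping the bitmap attached to the slot is also what later allows a KVM_CLEAR_DIRTY_LOG implementation to clear only bits it has actually seen set.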
52ba13f0 | 09-Jul-2019 | Richard Henderson <richard.henderson@linaro.org>
tcg: Release mmap_lock on translation fault
Turn helper_retaddr into a multi-state flag that may now also indicate when we're performing a read on behalf of the translator. In this case, release the mmap_lock before the longjmp back to the main cpu loop, and thereby avoid a failing assert therein.
Fixes: https://bugs.launchpad.net/qemu/+bug/1832353
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
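A rough illustration of the pattern, in generic C with hypothetical names (helper_retaddr_flag, handle_guest_fault, mmap_lock_m) rather than QEMU's actual helper_retaddr machinery: one variable distinguishes "no guest access", "read on behalf of the translator", and "helper return address", and the fault path drops the lock before the longjmp so the assertion in the main loop cannot trip.

```c
/* Sketch of the pattern (hypothetical names): a multi-state flag plus a
 * fault path that releases a lock before longjmp'ing back to the loop. */
#include <pthread.h>
#include <setjmp.h>
#include <stdint.h>

static pthread_mutex_t mmap_lock_m = PTHREAD_MUTEX_INITIALIZER;
static sigjmp_buf cpu_loop_env;

/* 0: not in a guest access; 1: reading code for the translator;
 * any other value: return address of the helper doing a data access. */
#define GETPC_TRANSLATE ((uintptr_t)1)
static uintptr_t helper_retaddr_flag;

static void handle_guest_fault(void)
{
    if (helper_retaddr_flag == GETPC_TRANSLATE) {
        /* The translator holds the mmap lock while reading guest code;
         * release it before jumping back so the main loop's
         * "lock not held" assertion cannot fire. */
        pthread_mutex_unlock(&mmap_lock_m);
    }
    helper_retaddr_flag = 0;
    siglongjmp(cpu_loop_env, 1);
}

int main(void)
{
    pthread_mutex_lock(&mmap_lock_m);       /* translator takes the lock */
    helper_retaddr_flag = GETPC_TRANSLATE;  /* mark: reading for the translator */

    if (sigsetjmp(cpu_loop_env, 1) == 0) {
        handle_guest_fault();               /* simulate a translation fault */
    }
    /* back in the "main cpu loop": the lock has already been released */
    return 0;
}
```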
4601f8d1 | 25-Apr-2019 | Richard Henderson <richard.henderson@linaro.org>
cputlb: Do unaligned store recursion to outermost function
This is less tricky than for loads, because we always fall back to single byte stores to implement unaligned stores.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
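A small sketch of why the store case is simpler, using hypothetical names (store_helper, store_byte) rather than QEMU's cputlb code: the outermost helper can always split an unaligned store into single-byte stores, so there is no need to recurse back into an inlined fast path.

```c
/* Sketch (hypothetical names): an unaligned store falls back to single-byte
 * stores in the outermost helper, so no recursion into the fast path. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static void store_byte(uint8_t *ram, uintptr_t addr, uint8_t val)
{
    ram[addr] = val;
}

/* Outermost store helper: if the access is not naturally aligned,
 * emit it one byte at a time (little-endian ordering shown). */
void store_helper(uint8_t *ram, uintptr_t addr, uint64_t val, size_t size)
{
    if (addr & (size - 1)) {
        for (size_t i = 0; i < size; i++) {
            store_byte(ram, addr + i, (uint8_t)(val >> (i * 8)));
        }
        return;
    }
    /* Aligned fast path: a single host store of the full width. */
    memcpy(&ram[addr], &val, size);
}
```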
2dd92606 | 25-Apr-2019 | Richard Henderson <richard.henderson@linaro.org>
cputlb: Do unaligned load recursion to outermost function
If we attempt to recurse from load_helper back to load_helper, even via an intermediary, we do not get all of the constants expanded away as desired.
But if we recurse back to the original helper (or a shim that has a consistent function signature), the operands are folded away as desired.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
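A sketch of the shape this takes, with hypothetical names (load_helper, helper_ldub, helper_ldl) standing in for the real helpers: the generic inlined helper recurses through a fixed-signature, per-size entry point instead of calling itself, so the size operand stays a compile-time constant in every inlined instantiation.

```c
/* Sketch (hypothetical names): the slow path recurses via a per-size shim,
 * not via the inlined generic helper itself. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

static uint64_t helper_ldub(const uint8_t *ram, uintptr_t addr); /* shim below */

static inline __attribute__((always_inline))
uint64_t load_helper(const uint8_t *ram, uintptr_t addr, size_t size)
{
    if (size > 1 && (addr & (size - 1))) {
        /* Unaligned slow path: call the byte-sized shim rather than
         * load_helper directly, so 'size' remains a folded constant
         * inside each inlined instantiation. */
        uint64_t val = 0;
        for (size_t i = 0; i < size; i++) {
            val |= helper_ldub(ram, addr + i) << (i * 8);
        }
        return val;
    }
    /* Aligned (or single-byte) fast path; little-endian host assumed. */
    uint64_t val = 0;
    memcpy(&val, &ram[addr], size);
    return val;
}

/* Fixed-signature entry points: the constant size folds away in each one. */
static uint64_t helper_ldub(const uint8_t *ram, uintptr_t addr)
{
    return load_helper(ram, addr, 1);
}

uint64_t helper_ldl(const uint8_t *ram, uintptr_t addr)
{
    return load_helper(ram, addr, 4);
}
```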
fc1bc777 | 25-Apr-2019 | Richard Henderson <richard.henderson@linaro.org>
cputlb: Drop attribute flatten
We are going to approach this problem via __attribute__((always_inline)) instead, but the full conversion will take several steps.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
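For context, a generic C sketch (not QEMU's cputlb code) contrasting the two attributes: __attribute__((flatten)) on a caller asks the compiler to inline its entire call tree, whereas __attribute__((always_inline)) is applied selectively to only those callees that should be expanded.

```c
/* Sketch: flatten (on the caller) vs always_inline (on selected callees). */
#include <stdint.h>

static inline __attribute__((always_inline))
uint32_t fast_path(uint32_t x)
{
    return x * 2 + 1;               /* forced inline at every call site */
}

static uint32_t slow_path(uint32_t x)
{
    return x ^ 0xdeadbeefu;         /* ordinary function, may stay out of line */
}

/* Old approach: flatten inlines the *entire* call tree of this function. */
__attribute__((flatten))
uint32_t helper_with_flatten(uint32_t x)
{
    return (x & 1) ? fast_path(x) : slow_path(x);
}

/* New approach: only the always_inline callee is expanded; the cold
 * slow path keeps a normal call. */
uint32_t helper_with_always_inline(uint32_t x)
{
    return (x & 1) ? fast_path(x) : slow_path(x);
}
```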
f1be3696 | 25-Apr-2019 | Richard Henderson <richard.henderson@linaro.org>
cputlb: Move TLB_RECHECK handling into load/store_helper
Having this in io_readx/io_writex meant that we forgot to re-compute the index after tlb_fill. It also means we can now use the normal aligned memory load path. It also fixes a bug where we had cached a use of the index across a tlb_fill.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
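A simplified sketch of the bug class being fixed, using a toy TLB and hypothetical names (tlb_index, tlb_fill, translate) rather than QEMU's real data structures: any index computed before the fill must be recomputed afterwards, since the fill can modify the table underneath us.

```c
/* Sketch (hypothetical names): recompute the TLB index after a refill,
 * because the fill may have rewritten the table. */
#include <stdint.h>

#define TLB_BITS 8
#define TLB_SIZE (1u << TLB_BITS)

struct tlb_entry { uintptr_t tag; uintptr_t host_addr; };
static struct tlb_entry tlb[TLB_SIZE];

static unsigned tlb_index(uintptr_t addr)
{
    return (addr >> 12) & (TLB_SIZE - 1);
}

static void tlb_fill(uintptr_t addr)
{
    unsigned idx = tlb_index(addr);
    tlb[idx].tag = addr >> 12;
    tlb[idx].host_addr = addr;   /* identity mapping, for the sketch */
}

uintptr_t translate(uintptr_t addr)
{
    unsigned index = tlb_index(addr);

    if (tlb[index].tag != (addr >> 12)) {
        tlb_fill(addr);
        /* Do NOT reuse the 'index' cached before the fill: recompute it,
         * since the fill may have changed the entry we must look at. */
        index = tlb_index(addr);
    }
    return tlb[index].host_addr;
}
```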