59e96ac6 | 26-Aug-2019 |
David Hildenbrand <david@redhat.com> |
tcg: Factor out CONFIG_USER_ONLY probe_write() from s390x code
Factor it out into common code. As with the !CONFIG_USER_ONLY variant, don't allow the probe to cross page boundaries.
Signed-off-by: David Hildenbrand <david@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20190826075112.25637-4-david@redhat.com> [rth: Move cpu & cc variables inside if block.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
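A minimal sketch of the idea under illustrative assumptions (PAGE_SIZE, guest_page_writable, and the return convention are stand-ins, not QEMU's actual API):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define PAGE_MASK ((uintptr_t)~(PAGE_SIZE - 1))

    /* Hypothetical stand-in: is this guest page writable? */
    static int guest_page_writable(uintptr_t addr)
    {
        return addr < 0x10000;
    }

    /* Sketch of a user-only probe_write: verify the whole range is
     * writable before any store happens.  As in the !CONFIG_USER_ONLY
     * variant, page-crossing probes are rejected; callers split them. */
    static int probe_write(uintptr_t addr, size_t size)
    {
        if (size == 0) {
            return 0;
        }
        if ((addr & PAGE_MASK) != ((addr + size - 1) & PAGE_MASK)) {
            return -1;   /* crosses a page boundary: caller must split */
        }
        return guest_page_writable(addr) ? 0 : -1;
    }

    int main(void)
    {
        printf("%d\n", probe_write(0x1000, 16));   /* 0: single page */
        printf("%d\n", probe_write(0x1ff8, 16));   /* -1: crosses pages */
        return 0;
    }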
|
03a98189 | 23-Aug-2019 |
David Hildenbrand <david@redhat.com> |
tcg: Check for watchpoints in probe_write()
Let size > 0 indicate a promise to write to those bytes. Check for write watchpoints in the probed range.
Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20190823100741.9621-10-david@redhat.com> [rth: Recompute index after tlb_fill; check TLB_WATCHPOINT.] Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
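A rough sketch of the promised semantics (the structures and helper below are illustrative stand-ins; the real code uses QEMU's watchpoint machinery via cpu_check_watchpoint):

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct { uintptr_t addr; size_t len; } Watchpoint;

    static Watchpoint write_watch[] = { { 0x2000, 8 } };

    /* Hypothetical stand-in for the watchpoint range check. */
    static int range_hits_write_watchpoint(uintptr_t addr, size_t size)
    {
        for (size_t i = 0; i < sizeof(write_watch) / sizeof(write_watch[0]); i++) {
            if (addr < write_watch[i].addr + write_watch[i].len &&
                write_watch[i].addr < addr + size) {
                return 1;
            }
        }
        return 0;
    }

    /* size > 0 is a promise to write those bytes, so the probe must
     * trigger write watchpoints covering any part of the range. */
    static int probe_write(uintptr_t addr, size_t size)
    {
        if (size > 0 && range_hits_write_watchpoint(addr, size)) {
            return -1;   /* the real code raises a debug exception here */
        }
        return 0;
    }

    int main(void)
    {
        printf("%d\n", probe_write(0x1ff8, 4));   /* 0: misses the watchpoint */
        printf("%d\n", probe_write(0x1ffc, 8));   /* -1: overlaps [0x2000,0x2008) */
        return 0;
    }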
|
50b107c5 | 24-Aug-2019 |
Richard Henderson <richard.henderson@linaro.org> |
cputlb: Handle watchpoints via TLB_WATCHPOINT
The raising of exceptions from check_watchpoint, buried inside of the I/O subsystem, is fundamentally broken. We do not have the helper return address with which we can unwind guest state.
Replace PHYS_SECTION_WATCH and io_mem_watch with TLB_WATCHPOINT. Move the call to cpu_check_watchpoint into the cputlb helpers where we do have the helper return address.
This allows watchpoints on RAM to bypass the full i/o access path.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
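A sketch of the mechanism under illustrative assumptions (the bit position and structures are stand-ins for QEMU's real TLB layout):

    #include <stdint.h>
    #include <stdio.h>

    /* Low bits of a page-aligned TLB address word are free for flags. */
    #define TLB_WATCHPOINT (1u << 0)   /* illustrative bit position only */

    typedef struct { uint32_t addr_write; } TLBEntry;

    static void slow_path_store(uint32_t addr, uintptr_t retaddr)
    {
        /* Here the helper return address is in hand, so a watchpoint
         * exception can unwind guest state correctly. */
        printf("watchpoint check at %#x (ra=%#lx)\n", addr, (unsigned long)retaddr);
    }

    static void store_fast(TLBEntry *e, uint32_t addr, uintptr_t retaddr)
    {
        if (e->addr_write & TLB_WATCHPOINT) {
            slow_path_store(addr, retaddr);   /* forced out of the fast path */
            return;
        }
        /* ... normal RAM fast path ... */
    }

    int main(void)
    {
        TLBEntry e = { .addr_write = 0x1000 | TLB_WATCHPOINT };
        store_fast(&e, 0x1008, 0);
        return 0;
    }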
|
5787585d | 28-Aug-2019 |
Richard Henderson <richard.henderson@linaro.org> |
cputlb: Remove double-alignment in store_helper
We have already aligned page2 to the start of the next page. There is no reason to do that a second time.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
8f7cd2ad | 28-Aug-2019 |
Richard Henderson <richard.henderson@linaro.org> |
cputlb: Fix size operand for tlb_fill on unaligned store
We are currently passing the size of the full write to the tlb_fill for the second page. Instead pass the real size of the write to that page.
This argument is unused within all tlb_fill, except to be logged via tracing, so in practice this makes no difference.
But in a moment we'll need the value of size2 for watchpoints, and if we've computed the value we might as well use it.
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
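The arithmetic behind the fix, sketched with illustrative constants:

    #include <stdint.h>
    #include <stdio.h>

    #define TARGET_PAGE_SIZE 4096
    #define TARGET_PAGE_MASK (~((uint64_t)TARGET_PAGE_SIZE - 1))

    int main(void)
    {
        uint64_t addr = 0x1ffc;   /* 8-byte store straddling two pages */
        unsigned size = 8;

        /* Start of the second page, and the bytes that land on it. */
        uint64_t page2 = (addr + size) & TARGET_PAGE_MASK;
        unsigned size2 = (addr + size) & ~TARGET_PAGE_MASK;

        /* The fix: the second tlb_fill gets size2 (4 here), not size (8). */
        printf("page2=%#llx size2=%u\n", (unsigned long long)page2, size2);
        return 0;
    }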
|
30d7e098 | 23-Aug-2019 |
Richard Henderson <richard.henderson@linaro.org> |
cputlb: Fold TLB_RECHECK into TLB_INVALID_MASK
We had two different mechanisms to force a recheck of the tlb.
Before TLB_RECHECK was introduced, we had a PAGE_WRITE_INV bit that would immediately set TLB_INVALID_MASK, which automatically means that a second check of the tlb entry fails.
We can use the same mechanism to handle small pages. Conserve TLB_* bits by removing TLB_RECHECK.
Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
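A sketch of the recheck mechanism (the bit position and hit test are illustrative, not QEMU's actual TLB code):

    #include <stdint.h>
    #include <stdio.h>

    #define TLB_INVALID_MASK (1u << 0)   /* illustrative bit position only */
    #define PAGE_MASK (~4095u)

    typedef struct { uint32_t addr_write; } TLBEntry;

    /* Fast-path hit test: flag bits make the equality fail on purpose. */
    static int tlb_hit(const TLBEntry *e, uint32_t addr)
    {
        return e->addr_write == (addr & PAGE_MASK);
    }

    int main(void)
    {
        /* A small-page entry keeps TLB_INVALID_MASK set, so every access
         * misses the fast path and goes back through tlb_fill. */
        TLBEntry e = { .addr_write = (0x1000 & PAGE_MASK) | TLB_INVALID_MASK };
        printf("hit=%d\n", tlb_hit(&e, 0x1008));   /* 0: recheck forced */
        return 0;
    }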
|
a26fc6f5 | 23-Aug-2019 |
Tony Nguyen <tony.nguyen@bt.com> |
cputlb: Byte swap memory transaction attribute
Notice the new byte-swap attribute and force the transaction through the memory slow path.
This is required by architectures that can invert the endianness of memory transactions, e.g. SPARC64 with its Invert Endian TTE bit.
Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <2a10a1f1c00a894af1212c8f68ef09c2966023c1.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
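A sketch of the attribute, with an illustrative bitfield layout rather than QEMU's actual MemTxAttrs definition:

    #include <stdint.h>
    #include <stdio.h>

    /* Transaction-attribute bitfield in the spirit of MemTxAttrs. */
    typedef struct {
        unsigned int secure    : 1;
        unsigned int byte_swap : 1;   /* e.g. from SPARC64's Invert Endian TTE bit */
    } MemTxAttrs;

    static uint32_t slow_path_load(uint32_t val, MemTxAttrs attrs)
    {
        /* The slow path can honor the attribute with a single swap. */
        return attrs.byte_swap ? __builtin_bswap32(val) : val;
    }

    int main(void)
    {
        MemTxAttrs attrs = { .byte_swap = 1 };
        printf("%#x\n", slow_path_load(0x11223344u, attrs));   /* 0x44332211 */
        return 0;
    }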
|
9bf825bf | 23-Aug-2019 |
Tony Nguyen <tony.nguyen@bt.com> |
memory: Single byte swap along the I/O path
Now that MemOp has been pushed down into the memory API, and callers are encoding endianness, we can collapse byte swaps along the I/O path into the accelerator and target independent adjust_endianness.
Collapsing byte swaps along the I/O path enables additional endian inversion logic, e.g. SPARC64 Invert Endian TTE bit, with redundant byte swaps cancelling out.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Suggested-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Message-Id: <911ff31af11922a9afba9b7ce128af8b8b80f316.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
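The payoff is that back-to-back swaps cancel; a two-line demonstration:

    #include <stdint.h>
    #include <assert.h>

    int main(void)
    {
        uint32_t x = 0x11223344u;
        /* A guest endian inversion followed by a host swap cancels out;
         * collapsing both into one place lets that redundancy fold away. */
        assert(__builtin_bswap32(__builtin_bswap32(x)) == x);
        return 0;
    }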
|
be5c4787 | 23-Aug-2019 |
Tony Nguyen <tony.nguyen@bt.com> |
cputlb: Replace size and endian operands for MemOp
Preparation for collapsing the two byte swaps adjust_endianness and handle_bswap into the former.
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <755b7104410956b743e1f1e9c34ab87db113360f.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
d5d680ca | 23-Aug-2019 |
Tony Nguyen <tony.nguyen@bt.com> |
memory: Access MemoryRegion with endianness
Preparation for collapsing the two byte swaps adjust_endianness and handle_bswap into the former.
Call memory_region_dispatch_{read|write} with endianness encoded into the "MemOp op" operand.
This patch does not change any behaviour, as memory_region_dispatch_{read|write} does not yet handle the endianness.
Once it does handle endianness, callers with byte swaps can collapse them into adjust_endianness.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Message-Id: <8066ab3eb037c0388dfadfe53c5118429dd1de3a.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
4cbb198e | 23-Aug-2019 |
Tony Nguyen <tony.nguyen@bt.com> |
cputlb: Access MemoryRegion with MemOp
The memory_region_dispatch_{read|write} operand "unsigned size" is being converted into a "MemOp op".
Convert interfaces by using no-op size_memop.
After all interfaces are converted, size_memop will be implemented and the memory_region_dispatch_{read|write} operand "unsigned size" will be converted into a "MemOp op".
As size_memop is a no-op, this patch does not change any behaviour.
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <c4571c76467ade83660970f7ef9d7292297f1908.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
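A sketch of what size_memop becomes once implemented — a log2 size encoding — while during this series it remains a behaviour-preserving no-op (the assert and builtin here are illustrative):

    #include <assert.h>
    #include <stdio.h>

    typedef int MemOp;   /* stands in for QEMU's MemOp enum */

    /* Final form: MO_8..MO_64 encode log2 of the byte count, so a
     * power-of-two size maps via count-trailing-zeros. */
    static MemOp size_memop(unsigned size)
    {
        assert(size == 1 || size == 2 || size == 4 || size == 8);
        return __builtin_ctz(size);
    }

    int main(void)
    {
        printf("%d %d %d %d\n", size_memop(1), size_memop(2),
               size_memop(4), size_memop(8));   /* 0 1 2 3 */
        return 0;
    }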
|
14776ab5 | 23-Aug-2019 |
Tony Nguyen <tony.nguyen@bt.com> |
tcg: TCGMemOp is now accelerator independent MemOp
Preparation for collapsing the two byte swaps, adjust_endianness and handle_bswap, along the I/O path.
Target-dependent attributes are conditionalized upon NEED_CPU_H.
Signed-off-by: Tony Nguyen <tony.nguyen@bt.com> Acked-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Acked-by: Cornelia Huck <cohuck@redhat.com> Message-Id: <81d9cd7d7f5aaadfa772d6c48ecee834e9cf7882.1566466906.git.tony.nguyen@bt.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
358f6348 | 28-Aug-2019 |
Emilio G. Cota <cota@braap.org> |
atomic_template: fix indentation in GEN_ATOMIC_HELPER
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-id: 20190828165307.18321-8-alex.bennee@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
2e5b09fd | 09-Jul-2019 |
Markus Armbruster <armbru@redhat.com> |
hw/core: Move cpu.c, cpu.h from qom/ to hw/core/
Suggested-by: Daniel P. Berrangé <berrange@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190709152053.16670-2-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> [Rebased onto merge commit 95a9457fd44; missed instances of qom/cpu.h in comments replaced]
|
9e9b10c6 | 25-Jul-2019 |
Pavel Dovgalyuk <pavel.dovgaluk@gmail.com> |
icount: remove unnecessary gen_io_end calls
The prior patch resets the can_do_io flag at TB entry, so there is no need to reset it again at the end of the block. This patch removes the redundant gen_io_end calls.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru> Message-Id: <156404429499.18669.13404064982854123855.stgit@pasha-Precision-3630-Tower> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@gmail.com>
|
ba3e7926 | 25-Jul-2019 |
Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru> |
icount: clean up cpu_can_io at the entry to the block
Most I/O instructions can be executed only at the end of the block in icount mode. Therefore the translator can set the cpu_can_io flag when translating the last instruction. But when blocks are chained, this flag is not reset and may remain set at the beginning of the next block. This patch resets the flag at the entry of every translation block, making I/O operations impossible by default.
Signed-off-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru>
--
v2 changes: - reset can_do_io at the start of every TB (suggested by Paolo Bonzini) Message-Id: <156404428943.18669.15747009371169578935.stgit@pasha-Precision-3630-Tower>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
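A self-contained illustration of the chaining hazard and the fix; the flag and "blocks" below model the runtime behaviour, not QEMU's actual TCG codegen:

    #include <stdio.h>

    static int can_do_io;

    /* Each call models executing one chained translation block. */
    static void run_block(int ends_with_io_insn)
    {
        can_do_io = 0;              /* the fix: reset at every TB entry */
        /* ... block body: I/O is impossible by default ... */
        if (ends_with_io_insn) {
            can_do_io = 1;          /* only the final insn may do I/O */
        }
    }

    int main(void)
    {
        run_block(1);               /* block ends with an I/O instruction */
        run_block(0);               /* chained block: without the reset at
                                     * entry, it would inherit can_do_io=1 */
        printf("can_do_io=%d\n", can_do_io);   /* 0 */
        return 0;
    }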
|
54d31236 | 12-Aug-2019 |
Markus Armbruster <armbru@redhat.com> |
sysemu: Split sysemu/runstate.h off sysemu/sysemu.h
sysemu/sysemu.h is a rather unfocused dumping ground for stuff related to the system-emulator. Evidence:
* It's included widely: in my "build everything" tree, changing sysemu/sysemu.h still triggers a recompile of some 1100 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h, down from 5400 due to the previous two commits).
* It pulls in more than a dozen additional headers.
Split stuff related to run state management into its own header sysemu/runstate.h.
Touching sysemu/sysemu.h now recompiles some 850 objects. qemu/uuid.h also drops from 1100 to 850, and qapi/qapi-types-run-state.h from 4400 to 4200. Touching new sysemu/runstate.h recompiles some 500 objects.
Since I'm touching MAINTAINERS to add sysemu/runstate.h anyway, also add qemu/main-loop.h.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190812052359.30071-30-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> [Unbreak OS-X build]
|
46517dd4 | 12-Aug-2019 |
Markus Armbruster <armbru@redhat.com> |
Include sysemu/sysemu.h a lot less
In my "build everything" tree, changing sysemu/sysemu.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h).
hw/qdev-core.h includes sysemu/sysemu.h since recent commit e965ffa70a "qdev: add qdev_add_vm_change_state_handler()". This is a bad idea: hw/qdev-core.h is widely included.
Move the declaration of qdev_add_vm_change_state_handler() to sysemu/sysemu.h, and drop the problematic include from hw/qdev-core.h.
Touching sysemu/sysemu.h now recompiles some 1800 objects. qemu/uuid.h also drops from 5400 to 1800. A few more headers show smaller improvement: qemu/notify.h drops from 5600 to 5200, qemu/timer.h from 5600 to 4500, and qapi/qapi-types-run-state.h from 5500 to 5000.
Cc: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-Id: <20190812052359.30071-28-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
|
d5938f29 | 12-Aug-2019 |
Markus Armbruster <armbru@redhat.com> |
Clean up inclusion of sysemu/sysemu.h
In my "build everything" tree, changing sysemu/sysemu.h triggers a recompile of some 5400 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h).
Almost a third of its inclusions are actually superfluous. Delete them. Downgrade two more to qapi/qapi-types-run-state.h, and move one from char/serial.h to char/serial.c.
hw/semihosting/config.c, monitor/monitor.c, qdev-monitor.c, and stubs/semihost.c define variables declared in sysemu/sysemu.h without including it. The compiler is cool with that, but include it anyway.
This doesn't reduce actual use much, as it's still included into widely included headers. The next commit will tackle that.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-27-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
|
db725815 | 12-Aug-2019 |
Markus Armbruster <armbru@redhat.com> |
Include qemu/main-loop.h less
In my "build everything" tree, changing qemu/main-loop.h triggers a recompile of some 5600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h). It includes block/aio.h, which in turn includes qemu/event_notifier.h, qemu/notify.h, qemu/processor.h, qemu/qsp.h, qemu/queue.h, qemu/thread-posix.h, qemu/thread.h, qemu/timer.h, and a few more.
Include qemu/main-loop.h only where it's needed. Touching it now recompiles only some 1700 objects. For block/aio.h and qemu/event_notifier.h, these numbers drop from 5600 to 2800. For the others, they shrink only slightly.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Message-Id: <20190812052359.30071-21-armbru@redhat.com> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
|
650d103d | 12-Aug-2019 |
Markus Armbruster <armbru@redhat.com> |
Include hw/hw.h exactly where needed
In my "build everything" tree, changing hw/hw.h triggers a recompile of some 2600 out of 6600 objects (not counting tests and objects that don't depend on qemu/osdep.h).
The previous commits have left only the declaration of hw_error() in hw/hw.h. This permits dropping most of its inclusions. Touching it now recompiles less than 200 objects.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Alistair Francis <alistair.francis@wdc.com> Message-Id: <20190812052359.30071-19-armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com>
|
6a0acfff | 12-Aug-2019 |
Markus Armbruster <armbru@redhat.com> |
Clean up inclusion of exec/cpu-common.h
migration/qemu-file.h neglects to include it even though it needs ram_addr_t. Fix that. Drop a few superfluous inclusions elsewhere.
Signed-off-by: Markus Armbruster <armbru@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com> Tested-by: Philippe Mathieu-Daudé <philmd@redhat.com> Message-Id: <20190812052359.30071-14-armbru@redhat.com>
|
8072aae3 | 13-Jun-2019 |
Alexey Kardashevskiy <aik@ozlabs.ru> |
hmp: Print if memory section is registered with an accelerator
This adds an accelerator name to "info mtree -f" to tell the user whether a particular memory section is registered with the accelerator; the primary user for this is KVM, and such information is useful for debugging purposes.
This adds a has_memory() callback to the accelerator class allowing any accelerator to have a label in that memory tree dump.
Since memory sections are passed to memory listeners and get registered in accelerators (rather than memory regions), this only prints new labels for flatviews attached to the system address space.
An example:
Root memory region: system
 0000000000000000-0000002fffffffff (prio 0, ram): /objects/mem0 kvm
 0000003000000000-0000005fffffffff (prio 0, ram): /objects/mem1 kvm
 0000200000000020-000020000000003f (prio 1, i/o): virtio-pci
 0000200080000000-000020008000003f (prio 0, i/o): capabilities
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru> Message-Id: <20190614015237.82463-1-aik@ozlabs.ru> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
ff4aa114 | 03-Jun-2019 |
Peter Xu <peterx@redhat.com> |
kvm: Support KVM_CLEAR_DIRTY_LOG
First, detect the interface using KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2 and mark it. If enabling the new feature fails, we fall back to the old sync.
Provide the log_clear() hook for the memory listeners for both address spaces of KVM (normal system memory and SMM) and deliver the clear message to the kernel.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190603065056.25211-11-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
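A sketch of the kernel interface involved (Linux KVM API; compiles against a recent linux/kvm.h; the real code also enables the capability via KVM_ENABLE_CAP, and VM setup and error handling are omitted here):

    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <stdint.h>

    /* Clear the dirty state of [first_page, first_page + num_pages) in a
     * memslot, per the bits set in bitmap. */
    static int clear_dirty_range(int vm_fd, uint32_t slot, uint64_t first_page,
                                 uint32_t num_pages, void *bitmap)
    {
        if (ioctl(vm_fd, KVM_CHECK_EXTENSION,
                  KVM_CAP_MANUAL_DIRTY_LOG_PROTECT2) <= 0) {
            return -1;   /* fall back to the old full sync */
        }
        struct kvm_clear_dirty_log log = {
            .slot = slot,
            .first_page = first_page,
            .num_pages = num_pages,
            .dirty_bitmap = bitmap,
        };
        return ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &log);
    }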
|
36adac49 | 03-Jun-2019 |
Peter Xu <peterx@redhat.com> |
kvm: Introduce slots lock for memory listener
Introduce KVMMemoryListener.slots_lock to protect the slots inside the KVM memory listener. Currently it is close to useless, because all the KVM code paths are protected by the BQL. But it will start to make sense in follow-up patches, where we might do remote dirty bitmap clears and update the per-slot cached dirty bitmap even without the BQL. So let's prepare for it.
We could also use a per-slot lock, but that seems like overkill. Let's just use this bigger one, which covers all the slots of a single address space; it is still much narrower in scope than the BQL.
Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Signed-off-by: Peter Xu <peterx@redhat.com> Message-Id: <20190603065056.25211-10-peterx@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
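A sketch of the locking shape (pthread stands in for QEMU's QemuMutex; type name and sizes are illustrative):

    #include <pthread.h>
    #include <stdint.h>

    /* One lock per listener covers every slot of one address space:
     * coarser than per-slot locking, far narrower than the BQL. */
    typedef struct {
        pthread_mutex_t slots_lock;
        uint64_t *dirty_bitmap[32];   /* per-slot cached dirty bitmaps */
    } KVMMemoryListenerSketch;

    static void slot_mark_dirty(KVMMemoryListenerSketch *kml, int slot,
                                unsigned page)
    {
        pthread_mutex_lock(&kml->slots_lock);
        kml->dirty_bitmap[slot][page / 64] |= UINT64_C(1) << (page % 64);
        pthread_mutex_unlock(&kml->slots_lock);
    }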
|