a688e73b | 25-Jun-2018 | Emilio G. Cota <cota@braap.org>
translate-all: fix locking of TBs whose two pages share the same physical page
Commit 0b5c91f ("translate-all: use per-page locking in !user-mode", 2018-06-15) introduced per-page locking. It assumed that the physical pages corresponding to a TB (at most two pages) are always distinct, which is wrong. For instance, an xtensa test provided by Max Filippov is broken by the commit, since the test maps two virtual pages to the same physical page:
virt1: 7fff, virt2: 8000; phys1: 6000fff, phys2: 6000000
Fix it by removing the assumption from page_lock_pair. If the two physical page addresses are equal, we only lock the PageDesc once. Note that the two callers of page_lock_pair, namely page_unlock_tb and tb_link_page, are also updated so that we do not try to unlock the same PageDesc twice.
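To illustrate the locking rule outside of QEMU, here is a minimal, self-contained C sketch (the PageDescDemo type and helper names are invented, this is not the actual page_lock_pair code): lock the two page descriptors in ascending order, and take the lock only once when both virtual pages resolve to the same physical page.

    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>

    #define N_PAGES 16

    /* Stand-in for QEMU's per-page descriptor and its lock. */
    typedef struct {
        pthread_mutex_t lock;
    } PageDescDemo;

    static PageDescDemo pages[N_PAGES];

    /* Lock the descriptors of two physical page indices in ascending order,
     * taking the lock only once when both indices name the same page. */
    static void lock_page_pair(size_t p1, size_t p2)
    {
        if (p1 > p2) {
            size_t tmp = p1;
            p1 = p2;
            p2 = tmp;
        }
        pthread_mutex_lock(&pages[p1].lock);
        if (p1 != p2) {
            pthread_mutex_lock(&pages[p2].lock);
        }
    }

    static void unlock_page_pair(size_t p1, size_t p2)
    {
        pthread_mutex_unlock(&pages[p1].lock);
        if (p1 != p2) {
            pthread_mutex_unlock(&pages[p2].lock);
        }
    }

    int main(void)
    {
        for (size_t i = 0; i < N_PAGES; i++) {
            pthread_mutex_init(&pages[i].lock, NULL);
        }
        /* Two virtual pages backed by the same physical page: index 6 twice. */
        lock_page_pair(6, 6);
        puts("physical page 6 locked exactly once");
        unlock_page_pair(6, 6);
        return 0;
    }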
Fixes: 0b5c91f74f3c83a36f37740969df8c775c997e69 Reported-by: Max Filippov <jcmvbkbc@gmail.com> Tested-by: Max Filippov <jcmvbkbc@gmail.com> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1529944302-14186-1-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

c40d4792 | 02-Jul-2018 | Paolo Bonzini <pbonzini@redhat.com>
tcg: simplify !CONFIG_TCG handling of tb_invalidate_*
There is no need for a stub, since tb_invalidate_phys_addr can be excised altogether when TCG is disabled. This is a bit cleaner since it avoids using code that is clearly specific to user-mode emulation (it calls mmap_lock/unlock) for the !CONFIG_TCG case.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

646f34fa | 29-Jun-2018 | Philippe Mathieu-Daudé <f4bug@amsat.org>
tcg: Fix --disable-tcg build breakage
Fix the --disable-tcg breakage introduced by 8bca9a03ec60d:
$ configure --disable-tcg
[...]
$ make -C i386-softmmu exec.o
make: Entering directory 'i386-softmmu'
  CC      exec.o
In file included from source/qemu/exec.c:62:0:
source/qemu/include/exec/ram_addr.h:96:6: error: conflicting types for ‘tb_invalidate_phys_range’
 void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end);
      ^~~~~~~~~~~~~~~~~~~~~~~~
In file included from source/qemu/exec.c:24:0:
source/qemu/include/exec/exec-all.h:309:6: note: previous declaration of ‘tb_invalidate_phys_range’ was here
 void tb_invalidate_phys_range(target_ulong start, target_ulong end);
      ^~~~~~~~~~~~~~~~~~~~~~~~
source/qemu/exec.c:1043:6: error: conflicting types for ‘tb_invalidate_phys_addr’
 void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
      ^~~~~~~~~~~~~~~~~~~~~~~
In file included from source/qemu/exec.c:24:0:
source/qemu/include/exec/exec-all.h:308:6: note: previous declaration of ‘tb_invalidate_phys_addr’ was here
 void tb_invalidate_phys_addr(target_ulong addr);
      ^~~~~~~~~~~~~~~~~~~~~~~
make: *** [source/qemu/rules.mak:69: exec.o] Error 1
make: Leaving directory 'i386-softmmu'
Tested to build x86_64-softmmu and i386-softmmu targets.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180629200710.27626-1-f4bug@amsat.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

2266d443 | 22-Jun-2018 | Michael S. Tsirkin <mst@redhat.com>
i386/cpu: make -cpu host support monitor/mwait
When guest CPU PM is enabled, and with -cpu host, expose the host CPU MWAIT leaf in the CPUID so guest can make good PM decisions.
Note: the result is 100% CPU utilization reported by host as host no longer knows that the CPU is halted.
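For reference, the MONITOR/MWAIT information that -cpu host forwards lives in CPUID leaf 5; a small host-side sketch (x86 only, using GCC/Clang's <cpuid.h>, unrelated to QEMU's own code) that dumps the leaf being exposed:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 5 is the MONITOR/MWAIT leaf on x86. */
        if (!__get_cpuid(5, &eax, &ebx, &ecx, &edx)) {
            puts("CPUID leaf 5 not reported by this CPU");
            return 1;
        }
        printf("smallest monitor-line size: %u bytes\n", eax & 0xffff);
        printf("largest monitor-line size:  %u bytes\n", ebx & 0xffff);
        printf("MWAIT extensions (ECX):     0x%x\n", ecx);
        printf("sub C-state counts (EDX):   0x%x\n", edx);
        return 0;
    }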
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Message-Id: <20180622192148.178309-3-mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

6c090d4a | 16-May-2018 | Shannon Zhao <zhaoshenglong@huawei.com>
kvm: Delete the slot if and only if the KVM_MEM_READONLY flag is changed
According to KVM commit 75d61fbc, the slot needs to be deleted before the KVM_MEM_READONLY flag is changed. But QEMU commit 235e8982 only checks whether the KVM_MEM_READONLY flag is set, not whether it is being changed; there is no need to delete the slot if the KVM_MEM_READONLY flag does not change.
This fixes an issue seen when migrating a VM during the OVMF startup stage, while the VM is executing code in the ROM. Between deleting and re-adding the slot in kvm_set_user_memory_region, there is a window in which the guest can access the ROM and trap to KVM, which then cannot find the corresponding memslot. KVM (on ARM) injects an abort into the guest because of the broken hva, and the guest gets stuck.
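The check described above amounts to testing whether the KVM_MEM_READONLY bit differs between the old and new slot flags, rather than merely whether it is set. A minimal sketch of that condition (the flag values are the ones from the Linux KVM UAPI; the helper name is made up):

    #include <stdbool.h>
    #include <stdio.h>

    /* Flag values from the Linux KVM UAPI (linux/kvm.h). */
    #define KVM_MEM_LOG_DIRTY_PAGES (1UL << 0)
    #define KVM_MEM_READONLY        (1UL << 1)

    /* Delete-and-recreate is only required when the READONLY bit changes. */
    static bool slot_needs_delete(unsigned long old_flags, unsigned long new_flags)
    {
        return (old_flags ^ new_flags) & KVM_MEM_READONLY;
    }

    int main(void)
    {
        /* Toggling only dirty logging: no delete, so no window for guest aborts. */
        printf("%d\n", slot_needs_delete(KVM_MEM_READONLY,
                                         KVM_MEM_READONLY | KVM_MEM_LOG_DIRTY_PAGES));
        /* Flipping READONLY itself: the slot must be deleted first. */
        printf("%d\n", slot_needs_delete(0, KVM_MEM_READONLY));
        return 0;
    }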
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com> Message-Id: <1526462314-19720-1-git-send-email-zhaoshenglong@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

8bca9a03 | 30-May-2018 | Paolo Bonzini <pbonzini@redhat.com>
move public invalidate APIs out of translate-all.{c,h}, clean up
Place them in exec.c, exec-all.h and ram_addr.h. This removes knowledge of translate-all.h (which is an internal header) from several files outside accel/tcg and removes knowledge of AddressSpace from translate-all.c (as it only operates on ram_addr_t).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

f18793b0 | 14-Jun-2018 | Stefan Hajnoczi <stefanha@redhat.com>
compiler: add a sizeof_field() macro
Determining the size of a field is useful when you don't have a struct variable handy. Open-coding this is ugly.
This patch adds the sizeof_field() macro, which is similar to typeof_field(). Existing instances are updated to use the macro.
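The macro itself is a one-liner; a self-contained sketch of the idiom (the example struct is invented):

    #include <stdio.h>

    /* Size of a struct member without needing an instance of the struct. */
    #define sizeof_field(type, field) sizeof(((type *)0)->field)

    struct packet_header {
        unsigned char  flags;
        unsigned short length;
        unsigned int   crc;
    };

    int main(void)
    {
        printf("sizeof length field: %zu\n",
               sizeof_field(struct packet_header, length));
        printf("sizeof crc field:    %zu\n",
               sizeof_field(struct packet_header, crc));
        return 0;
    }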
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Message-id: 20180614164431.29305-1-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

d071f4cd | 22-May-2018 | Emilio G. Cota <cota@braap.org>
trace: enable tracing of TCG atomics
We do not trace guest atomic accesses. Fix it.
Tested with a modified atomic_add-bench so that it executes a deterministic number of instructions, i.e. fixed seeding, no threading and fixed number of loop iterations instead of running for a certain time.
Before:
- With parallel_cpus = false (no clone syscall so it is never set to true): 220070 memory accesses
- With parallel_cpus = true (hard-coded): 212105 memory accesses <-- we're not tracing the atomics!
After: 220070 memory accesses regardless of parallel_cpus.
Signed-off-by: Emilio G. Cota <cota@braap.org> Message-id: 1527028012-21888-6-git-send-email-cota@braap.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>

55df6fcf | 26-Jun-2018 | Peter Maydell <peter.maydell@linaro.org>
tcg: Support MMU protection regions smaller than TARGET_PAGE_SIZE
Add support for MMU protection regions that are smaller than TARGET_PAGE_SIZE. We do this by marking the TLB entry for those pages with a flag TLB_RECHECK. This flag causes us to always take the slow-path for accesses. In the slow path we can then special case them to always call tlb_fill() again, so we have the correct information for the exact address being accessed.
This change allows us to handle reading and writing from small regions; we cannot deal with execution from the small region.
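The mechanism relies on TLB entry addresses being page-aligned, so spare low bits can carry per-entry flags; any flag set in the entry makes the fast-path comparison fail and forces the slow path, which can then re-query the MMU for the exact address. A stripped-down sketch of that pattern (bit positions and types are illustrative, not QEMU's actual TLB layout):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12
    #define PAGE_MASK (~(uint64_t)((1u << PAGE_BITS) - 1))

    /* Flags kept in the low (sub-page) bits of a page-aligned TLB entry. */
    #define TLB_INVALID  (1u << 0)
    #define TLB_MMIO     (1u << 1)
    #define TLB_RECHECK  (1u << 2)   /* region smaller than a page: re-verify */

    typedef struct {
        uint64_t addr_write;   /* page address | flags */
    } TlbEntryDemo;

    /* The fast path only hits when the page matches and no flag is set. */
    static bool tlb_hit_fast(const TlbEntryDemo *e, uint64_t vaddr)
    {
        return (vaddr & PAGE_MASK) == (e->addr_write & PAGE_MASK)
               && (e->addr_write & ~PAGE_MASK) == 0;
    }

    int main(void)
    {
        TlbEntryDemo small_region = { .addr_write = 0x4000 | TLB_RECHECK };
        TlbEntryDemo normal_page  = { .addr_write = 0x5000 };

        printf("store to 0x4008: %s\n",
               tlb_hit_fast(&small_region, 0x4008) ? "fast path" : "slow path");
        printf("store to 0x5008: %s\n",
               tlb_hit_fast(&normal_page, 0x5008) ? "fast path" : "slow path");
        return 0;
    }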
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180620130619.11362-2-peter.maydell@linaro.org

f28d0dfd | 22-Jun-2018 | Emilio G. Cota <cota@braap.org>
tcg: fix --disable-tcg build breakage
Fix the --disable-tcg breakage introduced by tb_lock's removal by relying on the fact that tcg_enabled() is set to 0 at compile-time under --disable-tcg.
While at it, add further asserts to fix builds that enable both --disable-tcg and --enable-debug, which were broken even before tb_lock's removal.
Tested to build x86_64-softmmu and i386-softmmu targets.
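The idea is easy to show in isolation: when tcg_enabled() folds to the constant 0, TCG-only calls become dead code that an optimizing compiler removes, so their symbols never reach the linker and no stubs are needed. A generic single-file sketch (CONFIG_TCG_DEMO and tcg_flush_work are inventions of this example; QEMU's real tcg_enabled() is more involved):

    #include <stdio.h>

    /* "TCG enabled":  cc -O2 -DCONFIG_TCG_DEMO demo.c
     * "TCG disabled": cc -O2 demo.c   (no stub for tcg_flush_work is provided) */
    #ifdef CONFIG_TCG_DEMO
    #define tcg_enabled() 1
    void tcg_flush_work(void)
    {
        puts("flushing translated code");
    }
    #else
    #define tcg_enabled() 0
    void tcg_flush_work(void);   /* declared but intentionally never defined */
    #endif

    int main(void)
    {
        if (tcg_enabled()) {
            /* With tcg_enabled() == 0 this call is dead code: the optimizer
             * drops it, so the undefined symbol is never referenced at link
             * time and no stub has to be written. */
            tcg_flush_work();
        } else {
            puts("TCG disabled: nothing to do");
        }
        return 0;
    }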
Reported-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

0ac20318 | 04-Aug-2017 | Emilio G. Cota <cota@braap.org>
tcg: remove tb_lock
Use mmap_lock in user-mode to protect TCG state and the page descriptors. In !user-mode, each vCPU has its own TCG state, so no locks needed. Per-page locks are used to protect the page descriptors.
Per-TB locks are used in both modes to protect TB jumps.
Some notes:
- tb_lock is removed from notdirty_mem_write by passing a locked page_collection to tb_invalidate_phys_page_fast.
- tcg_tb_lookup/remove/insert/etc have their own internal lock(s), so there is no need to further serialize access to them.
- do_tb_flush is run in a safe async context, meaning no other vCPU threads are running. Therefore acquiring mmap_lock there is just to please tools such as thread sanitizer.
- Not visible in the diff, but tb_invalidate_phys_page already has an assert_memory_lock.
- cpu_io_recompile is !user-only, so no mmap_lock there.
- Added mmap_unlock()'s before all siglongjmp's that could be called in user-mode while mmap_lock is held. + Added an assert for !have_mmap_lock() after returning from the longjmp in cpu_exec, just like we do in cpu_exec_step_atomic.
Performance numbers before/after:
Host: AMD Opteron(tm) Processor 6376
ubuntu 17.04 ppc64 bootup+shutdown time
(ASCII plot: bootup+shutdown time vs. guest CPUs at 1, 8, 16, 48 and 64, comparing 'before' and 'tb lock removal'; png: https://imgur.com/HwmBHXe)
debian jessie aarch64 bootup+shutdown time
(ASCII plot: bootup+shutdown time vs. guest CPUs at 1, 8, 16, 48 and 64, comparing 'before' and 'tb lock removal'; png: https://imgur.com/iGpGFtv)
The gains are high for 4-8 CPUs. Beyond that point, however, unrelated lock contention significantly hurts scalability.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

705ad1ff | 05-Aug-2017 | Emilio G. Cota <cota@braap.org>
translate-all: remove tb_lock mention from cpu_restore_state_from_tb
tb_lock was needed when the function did retranslation. However, since fca8a500d519 ("tcg: Save insn data and use it in cpu_restore_state_from_tb") we don't do retranslation.
Get rid of the comment.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

b7542f7f | 04-Aug-2017 | Emilio G. Cota <cota@braap.org>
cputlb: remove tb_lock from tlb_flush functions
The acquisition of tb_lock was added when the async tlb_flush was introduced in e3b9ca810 ("cputlb: introduce tlb_flush_* async work.")
tb_lock was there to allow us to do memset() on the tb_jmp_cache's. However, since f3ced3c5928 ("tcg: consistently access cpu->tb_jmp_cache atomically") all accesses to tb_jmp_cache are atomic, so tb_lock is not needed here. Get rid of it.
Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

194125e3 | 02-Aug-2017 | Emilio G. Cota <cota@braap.org>
translate-all: protect TB jumps with a per-destination-TB lock
This applies to both user-mode and !user-mode emulation.
Instead of relying on a global lock, protect the list of incoming jumps with tb->jmp_lock. This lock also protects tb->cflags, so update all tb->cflags readers outside tb->jmp_lock to use atomic reads via tb_cflags().
In order to find the destination TB (and therefore its jmp_lock) from the origin TB, we introduce tb->jmp_dest[].
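The reader side of the cflags rule is a one-line wrapper; a standalone sketch using C11 atomics in place of QEMU's own atomic_read()/atomic_set() macros (the struct is trimmed down to the one field that matters here):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Trimmed-down stand-in for TranslationBlock. */
    typedef struct {
        _Atomic uint32_t cflags;   /* written under jmp_lock, read anywhere */
    } TBDemo;

    /* Readers outside jmp_lock go through an atomic load, which is what
     * QEMU's tb_cflags() helper does with its own atomic_read(). */
    static uint32_t tb_cflags_demo(TBDemo *tb)
    {
        return atomic_load_explicit(&tb->cflags, memory_order_relaxed);
    }

    int main(void)
    {
        TBDemo tb;

        atomic_store_explicit(&tb.cflags, 0x00010000u, memory_order_relaxed);
        printf("cflags = 0x%08x\n", tb_cflags_demo(&tb));
        return 0;
    }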
I considered not using a linked list of jumps, which simplifies code and makes the struct smaller. However, it unnecessarily increases memory usage, which results in a performance decrease. See for instance these numbers booting+shutting down debian-arm:

                         Time (s)  Rel. err (%)  Abs. err (s)  Rel. slowdown (%)
    ------------------------------------------------------------------------------
    before                  20.88          0.74      0.154512          0.
    after                   20.81          0.38      0.079078         -0.33524904
    GTree                   21.02          0.28      0.058856          0.67049808
    GHashTable + xxhash     21.63          1.08      0.233604          3.5919540
Using a hash table or a binary tree to keep track of the jumps doesn't really pay off, not only due to the increased memory usage, but also because most TBs have only 0 or 1 jumps to them. The maximum number of jumps when booting debian-arm that I measured is 35, but as we can see in the histogram below a TB with that many incoming jumps is extremely rare; the average TB has 0.80 incoming jumps.
n_jumps: 379208; avg jumps/tb: 0.801099 dist: [0.0,1.0)|▄█▁▁▁▁▁▁▁▁▁▁▁ ▁▁▁▁▁▁ ▁▁▁ ▁▁▁ ▁|[34.0,35.0]
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

95590e24 | 01-Aug-2017 | Emilio G. Cota <cota@braap.org>
translate-all: discard TB when tb_link_page returns an existing matching TB
Use the recently-gained QHT feature of returning the matching TB if it already exists. This allows us to get rid of the lookup we perform right after acquiring tb_lock.
Suggested-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

faa9372c | 22-Feb-2018 | Emilio G. Cota <cota@braap.org>
translate-all: introduce assert_no_pages_locked
The appended adds assertions to make sure we do not longjmp with page locks held. Note that user-mode has nothing to check, since page_locks are !user-mode only.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

6d9abf85 | 05-Apr-2018 | Emilio G. Cota <cota@braap.org>
translate-all: add page_locked assertions
This is only compiled under CONFIG_DEBUG_TCG to avoid bloating the binary.
In user-mode, assert_page_locked is equivalent to assert_mmap_lock.
Note: There are some tb_lock assertions left that will be removed by later patches.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Suggested-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

0b5c91f7 | 26-Jul-2017 | Emilio G. Cota <cota@braap.org>
translate-all: use per-page locking in !user-mode
Groundwork for supporting parallel TCG generation.
Instead of using a global lock (tb_lock) to protect changes to pages, use fine-grained, per-page locks in !user-mode. User-mode stays with mmap_lock.
Sometimes changes need to happen atomically on more than one page (e.g. when a TB that spans across two pages is added/invalidated, or when a range of pages is invalidated). We therefore introduce struct page_collection, which helps us keep track of a set of pages that have been locked in the appropriate locking order (i.e. by ascending page index).
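The deadlock-avoidance rule is simple to demonstrate on its own: gather the page indices that must change together, sort them, and take the per-page locks in ascending order, skipping duplicates. A toy sketch with plain arrays and pthread mutexes standing in for page_collection and the PageDesc locks:

    #include <pthread.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N_PAGES 32

    static pthread_mutex_t page_locks[N_PAGES];

    static int cmp_size_t(const void *a, const void *b)
    {
        size_t x = *(const size_t *)a, y = *(const size_t *)b;
        return (x > y) - (x < y);
    }

    /* Lock a set of page indices in ascending order, skipping duplicates, so
     * that two threads locking overlapping sets can never deadlock. */
    static void lock_pages(size_t *idx, size_t n)
    {
        qsort(idx, n, sizeof(idx[0]), cmp_size_t);
        for (size_t i = 0; i < n; i++) {
            if (i > 0 && idx[i] == idx[i - 1]) {
                continue;   /* already locked this page */
            }
            pthread_mutex_lock(&page_locks[idx[i]]);
        }
    }

    static void unlock_pages(const size_t *idx, size_t n)   /* idx still sorted */
    {
        for (size_t i = 0; i < n; i++) {
            if (i > 0 && idx[i] == idx[i - 1]) {
                continue;
            }
            pthread_mutex_unlock(&page_locks[idx[i]]);
        }
    }

    int main(void)
    {
        for (size_t i = 0; i < N_PAGES; i++) {
            pthread_mutex_init(&page_locks[i], NULL);
        }
        size_t pages[] = { 7, 3, 7, 12 };   /* e.g. a TB spanning pages 3 and 7, plus page 12 */
        lock_pages(pages, 4);
        puts("pages 3, 7, 12 locked in ascending order");
        unlock_pages(pages, 4);
        return 0;
    }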
This commit first introduces the structs and the function helpers, to then convert the calling code to use per-page locking. Note that tb_lock is not removed yet.
While at it, rename tb_alloc_page to tb_page_add, which pairs with tb_page_remove.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

45c73de5 | 05-Aug-2017 | Emilio G. Cota <cota@braap.org>
translate-all: move tb_invalidate_phys_page_range up in the file
This greatly simplifies the next commit's diff.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

ae5486e2 | 05-Aug-2017 | Emilio G. Cota <cota@braap.org>
translate-all: work page-by-page in tb_invalidate_phys_range_1
So that we pass a same-page range to tb_invalidate_phys_page_range, instead of always passing an end address that could be on a different page.
As discussed with Peter Maydell on the list [1], tb_invalidate_phys_page_range doesn't actually do much with 'end', which explains why we have never hit a bug despite going against what the comment on top of tb_invalidate_phys_page_range requires:
> * Invalidate all TBs which intersect with the target physical address range
> * [start;end[. NOTE: start and end must refer to the *same* physical page.
The appended honours the comment, which avoids confusion.
While at it, rework the loop into a for loop, which is less error prone (e.g. "continue" won't result in an infinite loop).
[1] https://lists.gnu.org/archive/html/qemu-devel/2017-07/msg09165.html
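The reworked iteration boils down to clamping each step to the end of the current page, so that every call sees a start and end on the same page. A simplified sketch of that for loop (page size and the per-page callback are placeholders):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS 12
    #define PAGE_SIZE (1u << PAGE_BITS)
    #define PAGE_MASK (~(uint64_t)(PAGE_SIZE - 1))

    /* Placeholder for tb_invalidate_phys_page_range(): start and end are
     * guaranteed to lie on the same page. */
    static void invalidate_one_page(uint64_t start, uint64_t end)
    {
        printf("invalidate [0x%" PRIx64 ", 0x%" PRIx64 ")\n", start, end);
    }

    /* Split [start, end) into same-page chunks with a for loop. */
    static void invalidate_range(uint64_t start, uint64_t end)
    {
        uint64_t next;

        for (; start < end; start = next) {
            next = (start & PAGE_MASK) + PAGE_SIZE;   /* start of the next page */
            if (next > end) {
                next = end;
            }
            invalidate_one_page(start, next);
        }
    }

    int main(void)
    {
        /* A range that starts mid-page and crosses two page boundaries. */
        invalidate_range(0x1800, 0x3200);
        return 0;
    }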
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

94da9aec | 29-Jul-2017 | Emilio G. Cota <cota@braap.org>
translate-all: remove hole in PageDesc
Groundwork for supporting parallel TCG generation.
Move the hole to the end of the struct, so that a u32 field can be added there without bloating the struct.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

78722ed0 | 26-Jul-2017 | Emilio G. Cota <cota@braap.org>
translate-all: make l1_map lockless
Groundwork for supporting parallel TCG generation.
We never remove entries from the radix tree, so we can use cmpxchg to implement lockless insertions.
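Since entries are only ever added, a compare-and-swap on the slot suffices: whoever installs an entry first wins, and a loser frees its allocation and adopts the winner's. A standalone sketch of that insertion step with C11 atomics (a single flat level of slots instead of QEMU's multi-level l1_map; error handling omitted):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N_SLOTS 64

    typedef struct {
        int value;
    } Level;   /* stand-in for a next-level table / PageDesc */

    static _Atomic(Level *) slots[N_SLOTS];

    /* Return the entry for 'idx', allocating it if needed.  Safe to call from
     * multiple threads without a lock: entries are never removed, so a failed
     * compare-exchange simply means another thread beat us to it. */
    static Level *slot_get_alloc(size_t idx)
    {
        Level *existing = atomic_load_explicit(&slots[idx], memory_order_acquire);
        if (existing) {
            return existing;
        }
        Level *fresh = calloc(1, sizeof(*fresh));
        Level *expected = NULL;
        if (atomic_compare_exchange_strong_explicit(&slots[idx], &expected, fresh,
                                                    memory_order_acq_rel,
                                                    memory_order_acquire)) {
            return fresh;        /* we installed it */
        }
        free(fresh);             /* somebody else did; use theirs */
        return expected;         /* cmpxchg wrote the current value here */
    }

    int main(void)
    {
        Level *a = slot_get_alloc(3);
        Level *b = slot_get_alloc(3);
        printf("same entry returned twice: %s\n", a == b ? "yes" : "no");
        return 0;
    }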
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

1e05197f | 03-Aug-2017 | Emilio G. Cota <cota@braap.org>
translate-all: iterate over TBs in a page with PAGE_FOR_EACH_TB
This commit does several things, but to avoid churn I merged them all into the same commit. To wit:
- Use uintptr_t instead of TranslationBlock * for the list of TBs in a page. Just like we did in (c37e6d7e "tcg: Use uintptr_t type for jmp_list_{next|first} fields of TB"), the rationale is the same: these are tagged pointers, not pointers. So use a more appropriate type.
- Only check the least significant bit of the tagged pointers. Masking with 3/~3 is unnecessary and confusing.
- Introduce the TB_FOR_EACH_TAGGED macro, and use it to define PAGE_FOR_EACH_TB, which improves readability (a simplified sketch of the tagged-pointer walk follows this list). Note that TB_FOR_EACH_TAGGED will gain another user in a subsequent patch.
- Update tb_page_remove to use PAGE_FOR_EACH_TB. In case there is a bug and we attempt to remove a TB that is not in the list, instead of segfaulting (since the list is NULL-terminated) we will reach g_assert_not_reached().
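A simplified, self-contained version of the tagged-pointer walk (generic names, not the actual TB_FOR_EACH_TAGGED/PAGE_FOR_EACH_TB macros): the low bit of each link records whether the node is linked through its first or second page, i.e. which of its two next fields continues the current page's list.

    #include <stdint.h>
    #include <stdio.h>

    /* A node (think: TB) can sit on the lists of up to two pages, so it
     * carries two "next" links.  Each link is a tagged pointer: the low bit
     * says which of the next node's two links continues *this* page's list. */
    typedef struct NodeDemo {
        int id;
        uintptr_t page_next[2];
    } NodeDemo;

    static uintptr_t tag_ptr(NodeDemo *n, unsigned which)
    {
        return (uintptr_t)n | which;
    }

    int main(void)
    {
        NodeDemo a = { .id = 1 }, b = { .id = 2 }, c = { .id = 3 };

        /* One page's list: a (via its slot 0) -> b (slot 1) -> c (slot 0). */
        uintptr_t head = tag_ptr(&a, 0);
        a.page_next[0] = tag_ptr(&b, 1);
        b.page_next[1] = tag_ptr(&c, 0);
        c.page_next[0] = 0;

        /* Walk the list: only the least significant bit is inspected. */
        for (uintptr_t cur = head; cur & ~(uintptr_t)1; ) {
            NodeDemo *n = (NodeDemo *)(cur & ~(uintptr_t)1);
            unsigned which = cur & 1;
            printf("node %d (linked as page %u)\n", n->id, which);
            cur = n->page_next[which];
        }
        return 0;
    }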
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

128ed227 | 01-Aug-2017 | Emilio G. Cota <cota@braap.org>
tcg: move tb_ctx.tb_phys_invalidate_count to tcg_ctx
Thereby making it per-TCGContext. Once we remove tb_lock, this will avoid an atomic increment every time a TB is invalidated.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

be2cdc5e | 26-Jul-2017 | Emilio G. Cota <cota@braap.org>
tcg: track TBs with per-region BST's
This paves the way for enabling scalable parallel generation of TCG code.
Instead of tracking TBs with a single binary search tree (BST), use a BST for each TCG region, protecting it with a lock. This is as scalable as it gets, since each TCG thread operates on a separate region.
The core of this change is the introduction of struct tcg_region_tree, which contains a pointer to a GTree and an associated lock to serialize accesses to it. We then allocate an array of tcg_region_tree's, adding the appropriate padding to avoid false sharing based on qemu_dcache_linesize.
Given a tc_ptr, we first find the corresponding region_tree. This is done by special-casing the first and last regions first, since they might be of size != region.size; otherwise we just divide the offset by region.stride. I was worried about this division (several dozen cycles of latency), but profiling shows that this is not a fast path. Note that region.stride is not required to be a power of two; it is only required to be a multiple of the host's page size.
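Once the first and last regions are special-cased, the lookup reduces to integer arithmetic. A simplified sketch with invented region bookkeeping (equal-stride regions only; not the actual tcg_region_* code):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified layout: every region has the same stride.  (The real code
     * additionally special-cases the first and last regions, whose sizes can
     * differ from region.size.) */
    typedef struct {
        uint8_t *start;
        size_t stride;
    } RegionsDemo;

    /* Map a host code pointer back to the index of the region -- and hence
     * the tree and lock -- that owns it: a subtraction and a division. */
    static size_t region_index(const RegionsDemo *r, const uint8_t *ptr)
    {
        return (size_t)(ptr - r->start) / r->stride;
    }

    int main(void)
    {
        enum { STRIDE = 4096, N_REGIONS = 8 };
        uint8_t *buf = malloc((size_t)STRIDE * N_REGIONS);
        RegionsDemo regions = { .start = buf, .stride = STRIDE };

        printf("offset 5000  -> region %zu\n", region_index(&regions, buf + 5000));
        printf("offset 12295 -> region %zu\n", region_index(&regions, buf + 12295));
        free(buf);
        return 0;
    }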
Note that with this design we can also provide consistent snapshots about all region trees at once; for instance, tcg_tb_foreach acquires/releases all region_tree locks before/after iterating over them. For this reason we now drop tb_lock in dump_exec_info().
As an alternative I considered implementing a concurrent BST, but this can be tricky to get right, offers no consistent snapshots of the BST, and performance and scalability-wise I don't think it could ever beat having separate GTrees, given that our workload is insert-mostly (all concurrent BST designs I've seen focus, understandably, on making lookups fast, which comes at the expense of convoluted, non-wait-free insertions/removals).
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>