61a3f8f6 | 11-Sep-2017 |
Philippe Mathieu-Daudé <f4bug@amsat.org> |
accel/tcg: move tcg-runtime to accel/tcg/
Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20170911213328.9701-4-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
58410666 | 11-Sep-2017 |
Philippe Mathieu-Daudé <f4bug@amsat.org> |
accel/tcg: move user-exec to accel/tcg/
Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20170911213328.9701-3-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
da1849c1 | 11-Sep-2017 |
Thomas Huth <thuth@redhat.com> |
accel/tcg: move softmmu_template.h to accel/tcg/
The header is only used by accel/tcg/cputlb.c so we can move it to the accel/tcg/ folder, too.
Signed-off-by: Thomas Huth <thuth@redhat.com> [PMD: reword commit title to match series] Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-Id: <20170911213328.9701-2-f4bug@amsat.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
57a26946 | 30-Jul-2017 |
Richard Henderson <rth@twiddle.net> |
tcg: Infrastructure for managing constant pools
A new shared header tcg-pool.inc.c adds new_pool_label, for registering a tcg_target_ulong to be emitted after the generated code, plus relocation data to install a pointer to the data.
A new pointer is added to the TCGContext, so that we dump the constant pool as data, not code.
Signed-off-by: Richard Henderson <rth@twiddle.net>
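For illustration, a minimal standalone sketch of the constant-pool mechanism described above; all names (PoolEntry, pool_register, pool_emit_and_patch) are hypothetical and do not mirror the actual tcg-pool.inc.c interface.

/*
 * Illustrative sketch only, not the real tcg-pool.inc.c API.  It shows the
 * general shape of the mechanism: remember constants during code generation,
 * emit them as data after the generated code, then patch each recorded use
 * site so it can find its pool slot.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct PoolEntry {
    uintptr_t data;           /* constant destined for the pool            */
    size_t patch_offset;      /* code offset that must reference the slot  */
} PoolEntry;

static uint8_t code_buf[256];     /* stands in for the code_gen_buffer */
static size_t code_len;
static PoolEntry entries[16];
static size_t nb_entries;

/* Record a constant and the code location that will later point at it. */
static void pool_register(uintptr_t data, size_t patch_offset)
{
    entries[nb_entries].data = data;
    entries[nb_entries].patch_offset = patch_offset;
    nb_entries++;
}

/*
 * After code generation: append the pool as data and patch the use sites.
 * A real backend would apply a relocation here; this sketch simply stores
 * the slot offset at the recorded patch site.
 */
static void pool_emit_and_patch(void)
{
    for (size_t i = 0; i < nb_entries; i++) {
        uint32_t slot = (uint32_t)code_len;
        memcpy(&code_buf[code_len], &entries[i].data, sizeof(uintptr_t));
        code_len += sizeof(uintptr_t);
        memcpy(&code_buf[entries[i].patch_offset], &slot, sizeof(slot));
    }
}

int main(void)
{
    code_len = 8;                    /* pretend 8 bytes of code were emitted */
    pool_register(0xdeadbeef, 0);    /* a "load from pool" sits at offset 0  */
    pool_emit_and_patch();

    uint32_t patched;
    memcpy(&patched, &code_buf[0], sizeof(patched));
    printf("use site at offset 0 now references pool slot %u\n", patched);
    return 0;
}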
|
a8583393 | 01-Aug-2017 |
Richard Henderson <rth@twiddle.net> |
tcg: Move USE_DIRECT_JUMP discriminator to tcg/cpu/tcg-target.h
Replace the USE_DIRECT_JUMP ifdef with a TCG_TARGET_HAS_direct_jump boolean test. Replace the tb_set_jmp_target1 ifdef with an unconditional function tb_target_set_jmp_target.
While we're touching all backends, add a parameter for tb->tc_ptr; we're going to need it shortly for some backends.
Move tb_set_jmp_target and tb_add_jump from exec-all.h to cpu-exec.c.
This opens the possibility for TCG_TARGET_HAS_direct_jump to be a runtime decision -- based on host cpu capabilities, the size of code_gen_buffer, or a future debugging switch.
Signed-off-by: Richard Henderson <rth@twiddle.net>
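A hypothetical sketch of the pattern this commit describes: a per-backend boolean feature macro plus one hook that common code calls unconditionally. The example_* names are stand-ins, not QEMU's real backend API.

/*
 * Hypothetical sketch: the scattered #ifdef USE_DIRECT_JUMP is replaced by a
 * boolean feature macro plus one hook that common code calls unconditionally.
 * The example_* names are made up; only the shape matches the commit.
 */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define EXAMPLE_TARGET_HAS_direct_jump 1    /* per-backend constant */

/*
 * Every backend defines this hook.  The extra tc_ptr parameter mirrors the
 * tb->tc_ptr argument the commit adds for backends that need it.
 */
static void example_target_set_jmp_target(uintptr_t tc_ptr,
                                          uintptr_t jmp_addr,
                                          uintptr_t dest)
{
    if (EXAMPLE_TARGET_HAS_direct_jump) {
        /* direct jump: rewrite the branch instruction at jmp_addr */
        printf("patch branch at 0x%" PRIxPTR " -> 0x%" PRIxPTR
               " (TB code at 0x%" PRIxPTR ")\n", jmp_addr, dest, tc_ptr);
    } else {
        /* indirect jump: store the destination into a jump address slot */
        printf("store 0x%" PRIxPTR " into slot at 0x%" PRIxPTR "\n",
               dest, jmp_addr);
    }
}

int main(void)
{
    example_target_set_jmp_target(0x1000, 0x1040, 0x2000);
    return 0;
}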
|
bb2e0039 | 14-Jul-2017 |
Lluís Vilanova <vilanova@ac.upc.edu> |
tcg: Add generic translation framework
Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Message-Id: <150002073981.22386.9870422422367410100.stgit@frigg.lan> [rth: Moved max_insns adjustment from tb_start to init_disas_context. Removed pc_next return from translate_insn. Removed tcg_check_temp_count from generic loop. Moved gen_io_end to exactly match gen_io_start. Use qemu_log instead of error_report for temporary leaks. Moved TB size/icount assignments before disas_log.] Signed-off-by: Richard Henderson <rth@twiddle.net>
|
04e3aabd | 04-Sep-2017 |
Peter Maydell <peter.maydell@linaro.org> |
cputlb: Support generating CPU exceptions on memory transaction failures
Call the new cpu_transaction_failed() hook at the places where CPU generated code interacts with the memory system: io_readx() io_writex() get_page_addr_code()
Any access from C code (eg via cpu_physical_memory_rw(), address_space_rw(), ld/st_*_phys()) will *not* trigger CPU exceptions via cpu_transaction_failed(). Handling for transactions failures for this kind of call should be done by using a function which returns a MemTxResult and treating the failure case appropriately in the calling code.
In an ideal world we would not generate CPU exceptions for instruction fetch failures in get_page_addr_code() but instead wait until the code translation process tried a load and it failed; however that change would require too great a restructuring and redesign to attempt at this point.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Edgar E. Iglesias <edgar.iglesias@xilinx.com>
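A minimal sketch of the recommended calling-code pattern, using toy stand-ins; example_space_read and the local MemTxResult enum below are illustrative, not QEMU's actual declarations.

/*
 * Minimal sketch of the recommended pattern for C-code accesses: call an
 * accessor that returns a MemTxResult and handle the failure in the caller
 * instead of expecting a CPU exception.  example_space_read and this local
 * MemTxResult enum are toy stand-ins, not QEMU's declarations.
 */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>
#include <string.h>

typedef enum {
    MEMTX_OK = 0,
    MEMTX_ERROR,            /* the device signalled an error      */
    MEMTX_DECODE_ERROR      /* nothing is mapped at that address  */
} MemTxResult;

/* stand-in for an address_space_read()-style helper */
static MemTxResult example_space_read(uint64_t addr, void *buf, size_t len)
{
    if (addr + len > 0x1000) {        /* toy map: only 4 KiB is populated */
        return MEMTX_DECODE_ERROR;
    }
    memset(buf, 0, len);
    return MEMTX_OK;
}

static int load_config_word(uint64_t addr, uint32_t *out)
{
    MemTxResult res = example_space_read(addr, out, sizeof(*out));
    if (res != MEMTX_OK) {
        fprintf(stderr, "config read at 0x%" PRIx64 " failed (%d)\n", addr, res);
        return -1;                    /* the caller decides how to report it */
    }
    return 0;
}

int main(void)
{
    uint32_t val;
    printf("in-range read:     %d\n", load_config_word(0x100, &val));
    printf("out-of-range read: %d\n", load_config_word(0x2000, &val));
    return 0;
}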
|
88c725c7 | 07-Jul-2017 |
Cornelia Huck <cohuck@redhat.com> |
kvm: remove hard dependency on pci
The msi routing code in kvm calls some pci functions: provide some stubs to enable builds without pci.
Also, to make this more obvious, guard them via a pci_available boolean (which also can be reused in other places).
Fixes: e1d4fb2de ("kvm-irqchip: x86: add msi route notify fn") Fixes: 767a554a0 ("kvm-all: Pass requester ID to MSI routing functions") Reviewed-by: Pierre Morel <pmorel@linux.vnet.ibm.com> Reviewed-by: Thomas Huth <thuth@redhat.com> Acked-by: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Cornelia Huck <cohuck@redhat.com>
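A small sketch of the stub-plus-availability-flag pattern, with made-up names (example_pci_available, example_pci_requester_id); it only illustrates the shape of the change, not the real pci stubs.

/*
 * Sketch of the stub-plus-availability-flag pattern, with made-up names.
 * A build without the optional subsystem links a stub file instead of the
 * real one, and callers consult a boolean before relying on real behaviour.
 */
#include <stdbool.h>
#include <stdio.h>

/* the real pci.c would define:   bool example_pci_available = true;  */
/* the stub pci-stub.c defines: */
bool example_pci_available = false;

/* stub for a helper the MSI routing code would normally call */
static int example_pci_requester_id(void)
{
    return -1;                        /* no PCI built in, no requester ID */
}

int main(void)
{
    if (example_pci_available) {
        printf("requester id: %d\n", example_pci_requester_id());
    } else {
        puts("PCI not built in; skipping MSI route setup");
    }
    return 0;
}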
|
d73f0247 | 17-Jul-2017 |
Laurent Vivier <lvivier@redhat.com> |
accel: cleanup error output
Only emit "XXX accelerator not found", if there are not further accelerators listed. eg
accel=kvm:tcg
doesn't print a "KVM accelerator not found" warning when it fal
accel: cleanup error output
Only emit "XXX accelerator not found", if there are not further accelerators listed. eg
accel=kvm:tcg
doesn't print a "KVM accelerator not found" warning when it falls back to tcg, but a
accel=kvm
prints a warning, since no fallback is given.
Suggested-by: Daniel P. Berrange <berrange@redhat.com> Suggested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Laurent Vivier <lvivier@redhat.com> Message-Id: <20170717144527.24534-1-lvivier@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
8908eb1a | 31-Jul-2017 |
Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> |
trace-events: fix code style: print 0x before hex numbers
The only exceptions are groups of numbers separated by the symbols '.', ' ', ':', '/', like 'ab.09.7d'.
This patch is made by the following:
> find . -name trace-events | xargs python script.py
where script.py is the following python script:
=========================
#!/usr/bin/env python

import sys
import re
import fileinput

rhex = '%[-+ *.0-9]*(?:[hljztL]|ll|hh)?(?:x|X|"\s*PRI[xX][^"]*"?)'
rgroup = re.compile('((?:' + rhex + '[.:/ ])+' + rhex + ')')
rbad = re.compile('(?<!0x)' + rhex)

files = sys.argv[1:]

for fname in files:
    for line in fileinput.input(fname, inplace=True):
        arr = re.split(rgroup, line)
        for i in range(0, len(arr), 2):
            arr[i] = re.sub(rbad, '0x\g<0>', arr[i])
        sys.stdout.write(''.join(arr))
=========================
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Acked-by: Cornelia Huck <cohuck@redhat.com> Message-id: 20170731160135.12101-5-vsementsov@virtuozzo.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
|
9c489ea6 | 14-Jul-2017 |
Lluís Vilanova <vilanova@ac.upc.edu> |
tcg: Pass generic CPUState to gen_intermediate_code()
Needed to implement a target-agnostic gen_intermediate_code() in the future.
Reviewed-by: David Gibson <david@gibson.dropbear.id.au> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Alex Benneé <alex.benee@linaro.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Message-Id: <150002025498.22386.18051908483085660588.stgit@frigg.lan> Signed-off-by: Richard Henderson <rth@twiddle.net>
|
61a67f71 | 04-Jul-2017 |
Lluís Vilanova <vilanova@ac.upc.edu> |
exec: [tcg] Use different TBs according to the vCPU's dynamic tracing state
Every vCPU now uses a separate set of TBs for each set of dynamic tracing event state values. Each set of TBs can be used by any number of vCPUs to maximize TB reuse when vCPUs have the same tracing state.
This feature is later used by tracetool to optimize tracing of guest code events.
The maximum number of TB sets is defined as 2^E, where E is the number of events that have the 'vcpu' property (their state is stored in CPUState->trace_dstate).
For this to work, a change on the dynamic tracing state of a vCPU will force it to flush its virtual TB cache (which is only indexed by address), and fall back to the physical TB cache (which now contains the vCPU's dynamic tracing state as part of the hashing function).
Signed-off-by: Lluís Vilanova <vilanova@ac.upc.edu> Reviewed-by: Richard Henderson <rth@twiddle.net> Reviewed-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-id: 149915775266.6295.10060144081246467690.stgit@frigg.lan Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
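A minimal sketch of the hashing idea, assuming a simple mixing function; this is not QEMU's actual tb_hash_func(), it only shows how folding trace_dstate into the physical hash separates TBs by tracing state.

/*
 * Illustrative only: fold the vCPU's trace_dstate into the physical TB hash
 * so vCPUs with identical tracing state share TBs while differing states
 * select different sets.  This is not QEMU's actual tb_hash_func().
 */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

static uint32_t tb_hash_sketch(uint64_t phys_pc, uint64_t pc,
                               uint32_t flags, uint32_t trace_dstate)
{
    uint64_t h = phys_pc ^ (pc << 1) ^ flags;
    h ^= (uint64_t)trace_dstate * UINT64_C(0x9e3779b97f4a7c15);  /* mix in dstate */
    h ^= h >> 33;
    return (uint32_t)h;
}

int main(void)
{
    /* same guest code location, different tracing state: different bucket */
    printf("dstate=0: 0x%08" PRIx32 "\n", tb_hash_sketch(0x40000, 0x40000, 0, 0));
    printf("dstate=1: 0x%08" PRIx32 "\n", tb_hash_sketch(0x40000, 0x40000, 0, 1));
    return 0;
}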
|
d40d3da0 | 09-Jul-2017 |
Emilio G. Cota <cota@braap.org> |
translate-all: remove redundant !tcg_enabled check in dump_exec_info
This check is redundant because it is already performed by the only caller of dump_exec_info -- the caller was updated by b7da97eef ("monitor: Check whether TCG is enabled before running the "info jit" code").
Checking twice wouldn't necessarily be too bad, but here the check also returns with tb_lock held. So we can either do the check before tb_lock is acquired, or just get rid of it. Given that it is redundant, I am going for the latter option.
Signed-off-by: Emilio G. Cota <cota@braap.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
5d721b78 | 11-Jul-2017 |
Alexander Graf <agraf@suse.de> |
ARM: KVM: Enable in-kernel timers with user space gic
When running with KVM enabled, you can choose between emulating the gic in kernel or user space. If the kernel supports in-kernel virtualization of the interrupt controller, it will default to that. If not, it will default to user space emulation.
Unfortunately when running in user mode gic emulation, we miss out on interrupt events which are only available from kernel space, such as the timer. This patch leverages the new kernel/user space pending line synchronization for timer events. It does not handle PMU events yet.
Signed-off-by: Alexander Graf <agraf@suse.de> Reviewed-by: Andrew Jones <drjones@redhat.com> Message-id: 1498577737-130264-1-git-send-email-agraf@suse.de Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
b68686bd | 30-Jun-2017 |
Pranith Kumar <bobby.prani@gmail.com> |
tcg/aarch64: Use ADRP+ADD to compute target address
We use ADRP+ADD to compute the target address for goto_tb. This patch introduces the NOP instruction which is used to align the above instruction pair so that we can use one atomic instruction to patch the destination offsets.
CC: Alex Bennée <alex.bennee@linaro.org> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Pranith Kumar <bobby.prani@gmail.com> Message-Id: <20170630143614.31059-2-bobby.prani@gmail.com> Signed-off-by: Richard Henderson <rth@twiddle.net>
|
e4b4b642 | 03-Jul-2017 |
Yang Zhong <yang.zhong@intel.com> |
tcg: add the CONFIG_TCG into Makefiles
Add CONFIG_TCG for the frontend and backend files in the related Makefiles.
Signed-off-by: Yang Zhong <yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
a574cf9b | 03-Jul-2017 |
Yang Zhong <yang.zhong@intel.com> |
tcg: add the tcg-stub.c file into accel/stubs/
If tcg is disabled, the functions in the tcg-stub.c file will be called. This is a target-independent file; do not include any platform-related stub functions in it.
Signed-off-by: Yang Zhong <yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
f0d14a95 | 17-Sep-2012 |
Paolo Bonzini <pbonzini@redhat.com> |
monitor: disable "info jit" and "info opcount" if !TCG
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
8e2b7299 | 03-Jul-2017 |
Yang Zhong <yang.zhong@intel.com> |
tcg: make tcg_allowed global
Change tcg_enabled() and make sure user-mode builds still enable tcg even when the x86 softmmu build disables it.
Signed-off-by: Yang Zhong <yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
290dae46 | 04-Jul-2017 |
Paolo Bonzini <pbonzini@redhat.com> |
cpu: move interrupt handling out of translate-common.c
translate-common.c will not be available anymore with --disable-tcg, so we cannot leave cpu_interrupt_handler there.
Move the TCG-specific handler to accel/tcg/tcg-all.c, and adopt KVM's handler as the default one, since it works just as well for Xen and qtest.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
a0be0c58 | 03-Jul-2017 |
Yang Zhong <yang.zhong@intel.com> |
tcg: move page_size_init() function
translate-all.c will be disabled if tcg is disabled in the build, so the page_size_init() function and related variables are moved to exec.c.
Signed-off-by: Yang Zhong <yang.zhong@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
8b3ae692 | 03-Jul-2017 |
Paolo Bonzini <pbonzini@redhat.com> |
vl: convert -tb-size to qemu_strtoul
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> |
2cd53943 | 26-Jun-2017 |
Thomas Huth <thuth@redhat.com> |
cpu: Introduce a wrapper for tlb_flush() that can be used in common code
Commit 1f5c00cfdb8114c ("qom/cpu: move tlb_flush to cpu_common_reset") moved the call to tlb_flush() from the target-specific reset handlers into the common code qom/cpu.c file, and protected the call with "#ifdef CONFIG_SOFTMMU" to avoid that it is called for linux-user only targets. But since qom/cpu.c is common code, CONFIG_SOFTMMU is *never* defined here, so the tlb_flush() was simply never executed anymore. Fix it by introducing a wrapper for tlb_flush() in a file that is re-compiled for each target, i.e. in translate-all.c.
Fixes: 1f5c00cfdb8114c1e3a13426588ceb64f82c9ddb Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Thomas Huth <thuth@redhat.com> Message-Id: <1498454578-18709-5-git-send-email-thuth@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
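A toy illustration of why compiling the wrapper per target makes the guard effective; the types and the wrapper name are stand-ins, not the helper this commit actually adds.

/*
 * Toy illustration with stand-in types and a hypothetical wrapper name.
 * This file plays the role of a per-target source such as translate-all.c,
 * where CONFIG_SOFTMMU really is defined for softmmu builds, so the guard
 * takes effect; in common code like qom/cpu.c it never would.
 * Build with -DCONFIG_SOFTMMU to exercise the softmmu path.
 */
#include <stdio.h>

typedef struct CPUState CPUState;       /* stand-in for QEMU's CPUState */

void tlb_flush(CPUState *cs)            /* stand-in for the real flush  */
{
    (void)cs;
    puts("TLB flushed");
}

void example_tlb_flush_for_common_code(CPUState *cs)
{
#ifdef CONFIG_SOFTMMU
    tlb_flush(cs);                      /* softmmu target: real flush       */
#else
    (void)cs;                           /* linux-user build: nothing to do  */
#endif
}

int main(void)
{
    example_tlb_flush_for_common_code(NULL);
    return 0;
}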
|
99f31832 | 18-Jun-2017 |
Sergio Andres Gomez Del Real <sergio.g.delreal@gmail.com> |
vcpu_dirty: share the same field in CPUState for all accelerators
This patch simply replaces the separate boolean field in CPUState that kvm, hax (and upcoming hvf) have for keeping track of vcpu dirtiness with a single shared field.
Signed-off-by: Sergio Andres Gomez Del Real <Sergio.G.DelReal@gmail.com> Message-Id: <20170618191101.3457-1-Sergio.G.DelReal@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
f3ced3c5 | 14-Jun-2017 |
Emilio G. Cota <cota@braap.org> |
tcg: consistently access cpu->tb_jmp_cache atomically
Some code paths can lead to atomic accesses racing with memset() on cpu->tb_jmp_cache, which can result in torn reads/writes and is undefined behaviour in C11.
These torn accesses are unlikely to show up as bugs, but from code inspection they seem possible. For example, tb_phys_invalidate does:
    /* remove the TB from the hash list */
    h = tb_jmp_cache_hash_func(tb->pc);
    CPU_FOREACH(cpu) {
        if (atomic_read(&cpu->tb_jmp_cache[h]) == tb) {
            atomic_set(&cpu->tb_jmp_cache[h], NULL);
        }
    }
Here atomic_set might race with a concurrent memset (such as the ones scheduled via "unsafe" async work, e.g. tlb_flush_page) and therefore we might end up with a torn pointer (or who knows what, because we are under undefined behaviour).
This patch converts parallel accesses to cpu->tb_jmp_cache to use atomic primitives, thereby bringing these accesses back to defined behaviour. The price to pay is to potentially execute more instructions when clearing cpu->tb_jmp_cache, but given how infrequently they happen and the small size of the cache, the performance impact I have measured is within noise range when booting debian-arm.
Note that under "safe async" work (e.g. do_tb_flush) we could use memset because no other vcpus are running. However I'm keeping these accesses atomic as well to keep things simple and to avoid confusing analysis tools such as ThreadSanitizer.
Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Richard Henderson <rth@twiddle.net> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1497486973-25845-1-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <rth@twiddle.net>
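A standalone sketch of the same access pattern using C11 atomics; QEMU uses its own atomic_read()/atomic_set() macros, and the cache size and names below are illustrative only.

/*
 * Standalone sketch of the access pattern using C11 atomics.  QEMU uses its
 * own atomic_read()/atomic_set() macros; the names and cache size here are
 * illustrative only.
 */
#include <stdatomic.h>
#include <stdio.h>

#define JMP_CACHE_SIZE 16

typedef struct TranslationBlock TranslationBlock;

static _Atomic(TranslationBlock *) tb_jmp_cache[JMP_CACHE_SIZE];

/* invalidation: drop one TB from its cache slot; racing readers are fine */
static void jmp_cache_remove(TranslationBlock *tb, unsigned hash)
{
    if (atomic_load_explicit(&tb_jmp_cache[hash], memory_order_relaxed) == tb) {
        atomic_store_explicit(&tb_jmp_cache[hash], NULL, memory_order_relaxed);
    }
}

/* clearing: element-wise atomic stores instead of a racy memset() */
static void jmp_cache_clear(void)
{
    for (unsigned i = 0; i < JMP_CACHE_SIZE; i++) {
        atomic_store_explicit(&tb_jmp_cache[i], NULL, memory_order_relaxed);
    }
}

int main(void)
{
    TranslationBlock *fake_tb = (TranslationBlock *)0x1;

    atomic_store_explicit(&tb_jmp_cache[3], fake_tb, memory_order_relaxed);
    jmp_cache_remove(fake_tb, 3);
    jmp_cache_clear();
    puts("cache cleared with per-element atomic stores");
    return 0;
}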
|