20cb6ae4 | 14-Aug-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Return -1 for execution from MMIO regions in get_page_addr_code()
Now that all the callers can handle get_page_addr_code() returning -1, remove all the code which tries to handle execution from MMIO regions or small-MMU-region RAM areas. This will mean that we can correctly execute from these areas, rather than ending up either aborting QEMU or delivering an incorrect guest exception.
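As an illustration of the resulting shape of get_page_addr_code(), here is a minimal sketch; the helper and field names below (tlb_entry_code(), tlb_fill_code(), host_addend) are simplified placeholders, not the actual accel/tcg/cputlb.c code:

    /* Illustrative sketch only: helpers and fields are simplified stand-ins. */
    tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
    {
        CPUTLBEntry *entry = tlb_entry_code(env, addr);

        if (!tlb_hit(entry->addr_code, addr)) {
            tlb_fill_code(env, addr);       /* may longjmp out with a guest fault */
            entry = tlb_entry_code(env, addr);
        }
        if (entry->addr_code & TLB_MMIO) {
            /* Not backed by RAM: tell the caller to build a one-shot TB. */
            return -1;
        }
        return (addr & TARGET_PAGE_MASK) + entry->host_addend;
    }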
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Cédric Le Goater <clg@kaod.org> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180710160013.26559-6-peter.maydell@linaro.org
9739e376 | 14-Aug-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: tb_gen_code(): Create single-insn TB for execution from non-RAM
If get_page_addr_code() returns -1, this indicates that there is no RAM page we can read a full TB from. Instead we must create a TB which contains a single instruction and which we do not cache, so it is executed only once.
Since this means we can now have TBs which are not in any page list, we also need to make tb_phys_invalidate() handle them (by not trying to remove them from a nonexistent page list).
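A rough sketch of the caller-side idea; the flag names are the ones commonly used in accel/tcg at this time, but treat the exact form as an assumption:

    /* Sketch: when the PC is not backed by RAM, request an uncached,
     * one-instruction TB. */
    phys_pc = get_page_addr_code(env, pc);
    if (phys_pc == -1) {
        cflags &= ~CF_COUNT_MASK;      /* limit the TB to a single insn */
        cflags |= CF_NOCACHE | 1;      /* and never insert it into the caches */
    }

On the invalidate side, the idea is simply that tb_phys_invalidate() skips the page-list removal for a TB that was never entered into any page list.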
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-5-peter.maydell@linaro.org
c360a0fd | 14-Aug-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Handle get_page_addr_code() returning -1 in tb_check_watchpoint()
When we support execution from non-RAM MMIO regions, get_page_addr_code() will return -1 to indicate that there is no RAM at the requested address. Handle this in tb_check_watchpoint() -- if the exception happened for a PC which doesn't correspond to RAM then there is no need to invalidate any TBs, because the one-instruction TB will not have been cached.
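The guard being described amounts to something like this (a sketch, with the surrounding watchpoint code omitted):

    tb_page_addr_t addr = get_page_addr_code(env, pc);
    if (addr != -1) {
        tb_invalidate_phys_range(addr, addr + 1);
    }
    /* addr == -1: the PC is not in RAM, so no TB was cached for it and
     * there is nothing to invalidate. */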
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-4-peter.maydell@linaro.org
7252f2de | 14-Aug-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Handle get_page_addr_code() returning -1 in hashtable lookups
When we support execution from non-RAM MMIO regions, get_page_addr_code() will return -1 to indicate that there is no RAM at the requested address. Handle this in the cpu-exec TB hashtable lookup code, treating it as "no match found".
Note that the call to get_page_addr_code() in tb_lookup_cmp() needs no changes -- a return of -1 will already correctly result in the function returning false.
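In the hash-table lookup path the handling reduces to an early-out, roughly as follows (a sketch of tb_htable_lookup()-style code, with the surrounding descriptor setup omitted):

    /* No RAM page means no cached TB can possibly match: report "not found". */
    phys_pc = get_page_addr_code(desc.env, pc);
    if (phys_pc == -1) {
        return NULL;
    }
    desc.phys_page1 = phys_pc & TARGET_PAGE_MASK;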
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Emilio G. Cota <cota@braap.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-3-peter.maydell@linaro.org
dbea78a4 | 14-Aug-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Pass read access type through to io_readx()
The io_readx() function needs to know whether the load it is doing is an MMU_DATA_LOAD or an MMU_INST_FETCH, so that it can pass the right value to the cpu_transaction_failed() function. Plumb this information through from the softmmu code.
This is currently not often going to give the wrong answer, because usually instruction fetches go via get_page_addr_code(). However once we switch over to handling execution from non-RAM by creating single-insn TBs, the path for an insn fetch to generate a bus error will be through cpu_ld*_code() and io_readx(), so without this change we will generate a d-side fault when we should generate an i-side fault.
We also have to pass the access type via a CPU struct global down to unassigned_mem_read(), for the benefit of the targets which still use the cpu_unassigned_access() hook (m68k, mips, sparc, xtensa).
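The plumbing being described is essentially an extra MMUAccessType parameter on the I/O read path; the prototype below is a simplified sketch, not the exact io_readx() signature:

    /* Sketch: the softmmu data-load path passes MMU_DATA_LOAD, while the
     * cpu_ld*_code() path passes MMU_INST_FETCH, so that a failed transaction
     * is reported to cpu_transaction_failed() as an instruction fetch rather
     * than a data load. */
    static uint64_t io_readx(CPUArchState *env, CPUIOTLBEntry *iotlbentry,
                             int mmu_idx, target_ulong addr, uintptr_t retaddr,
                             MMUAccessType access_type, int size);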
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Tested-by: Cédric Le Goater <clg@kaod.org> Message-id: 20180710160013.26559-2-peter.maydell@linaro.org
59b5552f | 17-Jul-2018 |
Peter Maydell <peter.maydell@linaro.org> |
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
Bug fixes.
# gpg: Signature made Tue 17 Jul 2018 16:06:07 BST # gpg: using RSA key BFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: Document command line options with single dash opts: remove redundant check for NULL parameter i386: only parse the initrd_filename once for multiboot modules i386: fix regression parsing multiboot initrd modules virtio-scsi: fix hotplug ->reset() vs event race qdev: add HotplugHandler->post_plug() callback hw/char/serial: retry write if EAGAIN PC Chipset: Improve serial divisor calculation vhost-user-test: added proper TestServer *dest initialization in test_migrate() hyperv: ensure VP index equal to QEMU cpu_index hyperv: rename vcpu_id to vp_index accel: Fix typo and grammar in comment dump: add kernel_gs_base to QEMU CPU state
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3474c98a | 13-Jul-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Assert that tlb fill gave us a valid TLB entry
In commit 4b1a3e1e34ad97 we added a check for whether the TLB entry we had following a tlb_fill had the INVALID bit set. This could happen in some circumstances because a stale or wrong TLB entry was pulled out of the victim cache. However, after commit 68fea038553039e (which prevents stale entries being in the victim cache) and the previous commit (which ensures we don't incorrectly hit in the victim cache), this should never be possible.
Drop the check on TLB_INVALID_MASK from the "is this a TLB_RECHECK?" condition, and instead assert that the tlb fill procedure has given us a valid TLB entry (or longjumped out with a guest exception).
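The resulting logic looks roughly like this (a sketch: the victim-TLB probe is omitted and the tlb_fill() argument list is simplified):

    if (!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr)) {
        tlb_fill(cpu, addr, 0, MMU_INST_FETCH, mmu_idx, 0);
        /* tlb_fill() either installs a valid entry or longjmps out with a
         * guest exception, so a miss here would be a QEMU bug. */
        assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
    }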
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180713141636.18665-3-peter.maydell@linaro.org
b493ccf1 | 13-Jul-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Use correct test when looking in victim TLB for code
In get_page_addr_code(), we were incorrectly looking in the victim TLB for an entry which matched the target address for reads, not for code accesses. This meant that we could hit on a victim TLB entry that indicated that the address was readable but not executable, and incorrectly bypass the call to tlb_fill() which should generate the guest MMU exception. Fix this bug.
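The nature of the fix can be sketched like this (victim_tlb_hit() arguments simplified):

    /* Probe the victim TLB against the *code* mapping of the page, not the
     * data-read mapping, so a readable-but-not-executable entry cannot
     * satisfy an instruction fetch. */
    if (!victim_tlb_hit(env, mmu_idx, index,
                        offsetof(CPUTLBEntry, addr_code),   /* was addr_read */
                        addr & TARGET_PAGE_MASK)) {
        /* miss in both TLBs: refill via tlb_fill(), which may raise the
         * guest MMU exception */
    }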
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20180713141636.18665-2-peter.maydell@linaro.org
696c7066 | 12-Jul-2018 |
Stefan Weil <sw@weilnetz.de> |
accel: Fix typo and grammar in comment
The typo was found by codespell.
Signed-off-by: Stefan Weil <sw@weilnetz.de> Message-Id: <20180712194454.26765-1-sw@weilnetz.de> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
ec7eb2ae | 05-Jul-2018 |
Emilio G. Cota <cota@braap.org> |
translate-all: honour CF_NOCACHE in tb_gen_code
This fixes a record-replay regression introduced by 95590e2 ("translate-all: discard TB when tb_link_page returns an existing matching TB", 2018-06-15). The problem is that code using CF_NOCACHE assumes that the TB returned from tb_gen_code is always a newly-generated one. This assumption, however, was broken in the aforementioned commit.
Fix it by honouring CF_NOCACHE, so that tb_gen_code always returns a newly-generated TB when CF_NOCACHE is passed to it. Do this by avoiding the TB hash table if CF_NOCACHE is set.
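Conceptually the change is a guard around the hash-table insertion; a sketch follows (the helper used to discard the duplicate TB is an illustrative name, not a real function):

    if (!(tb->cflags & CF_NOCACHE)) {
        existing_tb = tb_link_page(tb, phys_pc, phys_page2);
        if (existing_tb != tb) {
            /* Another thread beat us to it: reuse its TB and recycle ours. */
            discard_generated_tb(tb);     /* illustrative name */
            return existing_tb;
        }
    }
    return tb;   /* a CF_NOCACHE TB is always the freshly generated one */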
Reported-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru> Tested-by: Pavel Dovgalyuk <Pavel.Dovgaluk@ispras.ru> Signed-off-by: Emilio G. Cota <cota@braap.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Alistair Francis <alistair.francis@wdc.com> Message-id: 1530806837-5416-1-git-send-email-cota@braap.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
e8c85894 | 02-Jul-2018 |
Peter Maydell <peter.maydell@linaro.org> |
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* IEC units series (Philippe)
* Hyper-V PV TLB flush (Vitaly)
* git archive detection (Daniel)
* host serial passthrough fix (David)
* NPT support for SVM emulation (Jan)
* x86 "info mem" and "info tlb" fix (Doug)
# gpg: Signature made Mon 02 Jul 2018 16:18:21 BST # gpg: using RSA key BFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (50 commits) tcg: simplify !CONFIG_TCG handling of tb_invalidate_* i386/monitor.c: make addresses canonical for "info mem" and "info tlb" target-i386: Add NPT support serial: Open non-block bsd-user: Use the IEC binary prefix definitions linux-user: Use the IEC binary prefix definitions tests/crypto: Use the IEC binary prefix definitions vl: Use the IEC binary prefix definitions monitor: Use the IEC binary prefix definitions cutils: Do not include "qemu/units.h" directly hw/rdma: Use the IEC binary prefix definitions hw/virtio: Use the IEC binary prefix definitions hw/vfio: Use the IEC binary prefix definitions hw/sd: Use the IEC binary prefix definitions hw/usb: Use the IEC binary prefix definitions hw/net: Use the IEC binary prefix definitions hw/i386: Use the IEC binary prefix definitions hw/ppc: Use the IEC binary prefix definitions hw/mips: Use the IEC binary prefix definitions hw/mips/r4k: Constify params_size ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
68fea038 | 29-Jun-2018 |
Richard Henderson <richard.henderson@linaro.org> |
accel/tcg: Avoid caching overwritten tlb entries
When installing a TLB entry, remove any cached version of the same page in the VTLB. If the existing TLB entry matches, do not copy into the VTLB, but overwrite it.
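In pseudocode terms the install path becomes the following (a conceptual sketch with illustrative helper names, not the exact cputlb.c code):

    /* Make sure the VTLB holds no stale copy of the page being installed. */
    tlb_flush_vtlb_page(env, mmu_idx, vaddr_page);

    /* Save the outgoing entry to the VTLB only if it maps a different page;
     * if it maps the same page it is simply overwritten below. */
    if (!tlb_entry_matches(te, vaddr_page)) {        /* illustrative helper */
        env->tlb_v_table[mmu_idx][vidx] = *te;
    }
    *te = new_entry;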
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
4b1a3e1e | 29-Jun-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Don't treat invalid TLB entries as needing recheck
In get_page_addr_code() when we check whether the TLB entry is marked as TLB_RECHECK, we should not go down that code path if the TLB entry is not valid at all (ie the TLB_INVALID bit is set).
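The corrected predicate can be expressed as a single mask comparison; a sketch (flag names as used in accel/tcg at this time):

    tlb_addr = env->tlb_table[mmu_idx][index].addr_code;
    if ((tlb_addr & (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK) {
        /* A valid entry that merely needs rechecking (e.g. a small MMU
         * region): take the recheck path.  An invalid entry falls through
         * to the normal tlb_fill() refill instead. */
    }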
Tested-by: Laurent Vivier <laurent@vivier.eu> Reported-by: Laurent Vivier <laurent@vivier.eu> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20180629161731.16239-1-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
e4c967a7 | 29-Jun-2018 |
Peter Maydell <peter.maydell@linaro.org> |
accel/tcg: Correct "is this a TLB miss" check in get_page_addr_code()
In commit 71b9a45330fe220d1 we changed the condition we use to determine whether we need to refill the TLB in get_page_addr_code() to:

    if (unlikely(env->tlb_table[mmu_idx][index].addr_code !=
                 (addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK)))) {
This isn't the right check (it will falsely fail if the input addr happens to have the low bit corresponding to TLB_INVALID_MASK set, for instance). Replace it with a use of the new tlb_hit() function, which is the correct test.
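After the change the refill condition is simply the negation of the helper introduced in the previous commit:

    if (!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr)) {
        /* refill (victim-TLB probe and tlb_fill() call go here) */
    }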
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20180629162122.19376-3-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
334692bc | 29-Jun-2018 |
Peter Maydell <peter.maydell@linaro.org> |
tcg: Define and use new tlb_hit() and tlb_hit_page() functions
The condition to check whether an address has hit against a particular TLB entry is not completely trivial. We do this in various places, and in fact in one place (get_page_addr_code()) we have got the condition wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline functions (one for a known-page-aligned address and one for an arbitrary address), and use them in all the places where we had the condition correct.
This is a no-behaviour-change patch; we leave fixing the buggy code in get_page_addr_code() to a subsequent patch.
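The two helpers are small enough to quote in full; modulo naming details they are along these lines (cf. include/exec/cpu-all.h):

    /* Hit test for an address that is already page-aligned. */
    static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong page)
    {
        return page == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    /* Hit test for an arbitrary address. */
    static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }

Keeping TLB_INVALID_MASK in the mask means an entry with the invalid bit set can never hit, which is exactly the property the buggy get_page_addr_code() check was missing.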
Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-Id: <20180629162122.19376-2-peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
a688e73b | 25-Jun-2018 |
Emilio G. Cota <cota@braap.org> |
translate-all: fix locking of TBs whose two pages share the same physical page
Commit 0b5c91f ("translate-all: use per-page locking in !user-mode", 2018-06-15) introduced per-page locking. It assumed that the physical pages corresponding to a TB (at most two pages) are always distinct, which is wrong. For instance, an xtensa test provided by Max Filippov is broken by the commit, since the test maps two virtual pages to the same physical page:
    virt1: 7fff, virt2: 8000
    phys1: 6000fff, phys2: 6000000
Fix it by removing the assumption from page_lock_pair. If the two physical page addresses are equal, we only lock the PageDesc once. Note that the two callers of page_lock_pair, namely page_unlock_tb and tb_link_page, are also updated so that we do not try to unlock the same PageDesc twice.
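A conceptual sketch of the corrected locking (simplified; the real page_lock_pair() differs in detail):

    PageDesc *p1, *p2 = NULL;
    tb_page_addr_t page1 = phys1 >> TARGET_PAGE_BITS;
    tb_page_addr_t page2 = phys2 >> TARGET_PAGE_BITS;

    p1 = page_find_alloc(page1, alloc);
    if (phys2 == -1 || page1 == page2) {
        page_lock(p1);                       /* one PageDesc: lock it once */
    } else if (page1 < page2) {              /* fixed order avoids deadlock */
        page_lock(p1);
        page_lock(p2 = page_find_alloc(page2, alloc));
    } else {
        page_lock(p2 = page_find_alloc(page2, alloc));
        page_lock(p1);
    }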
Fixes: 0b5c91f74f3c83a36f37740969df8c775c997e69 Reported-by: Max Filippov <jcmvbkbc@gmail.com> Tested-by: Max Filippov <jcmvbkbc@gmail.com> Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Emilio G. Cota <cota@braap.org> Message-Id: <1529944302-14186-1-git-send-email-cota@braap.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
c40d4792 | 02-Jul-2018 |
Paolo Bonzini <pbonzini@redhat.com> |
tcg: simplify !CONFIG_TCG handling of tb_invalidate_*
There is no need for a stub, since tb_invalidate_phys_addr can be excised altogether when TCG is disabled. This is a bit cleaner since it avoids using code that is clearly specific to user-mode emulation (it calls mmap_lock/unlock) for the !CONFIG_TCG case.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
646f34fa | 29-Jun-2018 |
Philippe Mathieu-Daudé <f4bug@amsat.org> |
tcg: Fix --disable-tcg build breakage
Fix the --disable-tcg breakage introduced by 8bca9a03ec60d:
    $ configure --disable-tcg
    [...]
    $ make -C i386-softmmu exec.o
    make: Entering directory 'i386-softmmu'
      CC      exec.o
    In file included from source/qemu/exec.c:62:0:
    source/qemu/include/exec/ram_addr.h:96:6: error: conflicting types for ‘tb_invalidate_phys_range’
     void tb_invalidate_phys_range(ram_addr_t start, ram_addr_t end);
          ^~~~~~~~~~~~~~~~~~~~~~~~
    In file included from source/qemu/exec.c:24:0:
    source/qemu/include/exec/exec-all.h:309:6: note: previous declaration of ‘tb_invalidate_phys_range’ was here
     void tb_invalidate_phys_range(target_ulong start, target_ulong end);
          ^~~~~~~~~~~~~~~~~~~~~~~~
    source/qemu/exec.c:1043:6: error: conflicting types for ‘tb_invalidate_phys_addr’
     void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr, MemTxAttrs attrs)
          ^~~~~~~~~~~~~~~~~~~~~~~
    In file included from source/qemu/exec.c:24:0:
    source/qemu/include/exec/exec-all.h:308:6: note: previous declaration of ‘tb_invalidate_phys_addr’ was here
     void tb_invalidate_phys_addr(target_ulong addr);
          ^~~~~~~~~~~~~~~~~~~~~~~
    make: *** [source/qemu/rules.mak:69: exec.o] Error 1
    make: Leaving directory 'i386-softmmu'
Tested to build x86_64-softmmu and i386-softmmu targets.
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Message-id: 20180629200710.27626-1-f4bug@amsat.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
109b2504 | 29-Jun-2018 |
Peter Maydell <peter.maydell@linaro.org> |
Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging
* "info mtree" improvements (Alexey) * fake VPD block limits for SCSI passthrough (Daniel Barboza) * chardev and main loop fixes (Daniel Berrangé, Sergio, Stefan) * help fixes (Eduardo) * pc-dimm refactoring (David) * tests improvements and fixes (Emilio, Thomas) * SVM emulation fixes (Jan) * MemoryRegionCache fix (Eric) * WHPX improvements (Justin) * ESP cleanup (Mark) * -overcommit option (Michael) * qemu-pr-helper fixes (me) * "info pic" improvements for x86 (Peter) * x86 TCG emulation fixes (Richard) * KVM slot handling fix (Shannon) * Next round of deprecation (Thomas) * Windows dump format support (Viktor)
# gpg: Signature made Fri 29 Jun 2018 12:03:05 BST # gpg: using RSA key BFFBD25F78C7AE83 # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* remotes/bonzini/tags/for-upstream: (60 commits) tests/boot-serial: Do not delete the output file in case of errors hw/scsi: add VPD Block Limits emulation hw/scsi: centralize SG_IO calls into single function hw/scsi: cleanups before VPD BL emulation dump: add Windows live system dump dump: add fallback KDBG using in Windows dump dump: use system context in Windows dump dump: add Windows dump format to dump-guest-memory i386/cpu: make -cpu host support monitor/mwait kvm: support -overcommit cpu-pm=on|off hmp: obsolete "info ioapic" ioapic: support "info irq" ioapic: some proper indents when dump info ioapic: support "info pic" doc: another fix to "info pic" target-i386: Mark cpu_vmexit noreturn target-i386: Allow interrupt injection after STGI target-i386: Add NMI interception to SVM memory/hmp: Print owners/parents in "info mtree" WHPX: register for unrecognized MSR exits ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2266d443 | 22-Jun-2018 |
Michael S. Tsirkin <mst@redhat.com> |
i386/cpu: make -cpu host support monitor/mwait
When guest CPU PM is enabled, and with -cpu host, expose the host CPU MWAIT leaf in the CPUID so the guest can make good PM decisions.
Note: the result is 100% CPU utilization reported by the host, as the host no longer knows that the CPU is halted.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Eduardo Habkost <ehabkost@redhat.com> Message-Id: <20180622192148.178309-3-mst@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
6c090d4a | 16-May-2018 |
Shannon Zhao <zhaoshenglong@huawei.com> |
kvm: Delete the slot if and only if the KVM_MEM_READONLY flag is changed
According to KVM commit 75d61fbc, the slot needs to be deleted before the KVM_MEM_READONLY flag is changed. But QEMU commit 235e8982 only checks whether the KVM_MEM_READONLY flag is set, not whether it has changed; the slot does not need to be deleted if the KVM_MEM_READONLY flag is unchanged.
This fixes an issue seen when migrating a VM at the OVMF startup stage, while the VM is executing code in ROM. Between deleting and re-adding the slot in kvm_set_user_memory_region, there is a window in which the guest can access the ROM and trap to KVM, which then cannot find the corresponding memslot. KVM (on ARM) injects an abort into the guest because of the broken hva, and the guest gets stuck.
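The intended check amounts to testing whether the READONLY bit differs between old and new flags, rather than whether it is set; an illustrative sketch (the helper names here are hypothetical, not the actual kvm-all.c functions):

    /* Delete-and-recreate the slot only when KVM_MEM_READONLY actually
     * toggles; otherwise update the slot in place. */
    if ((old_flags ^ new_flags) & KVM_MEM_READONLY) {
        kvm_slot_delete(kml, mem);
        kvm_slot_add(kml, mem, new_flags);
    } else {
        kvm_slot_update_flags_in_place(kml, mem, new_flags);
    }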
Signed-off-by: Shannon Zhao <zhaoshenglong@huawei.com> Message-Id: <1526462314-19720-1-git-send-email-zhaoshenglong@huawei.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
8bca9a03 | 30-May-2018 |
Paolo Bonzini <pbonzini@redhat.com> |
move public invalidate APIs out of translate-all.{c,h}, clean up
Place them in exec.c, exec-all.h and ram_addr.h. This removes knowledge of translate-all.h (which is an internal header) from several files outside accel/tcg and removes knowledge of AddressSpace from translate-all.c (as it only operates on ram_addr_t).
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
7106a87d | 28-Jun-2018 |
Peter Maydell <peter.maydell@linaro.org> |
Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging
Pull request
* Gracefully handle Linux AIO init failure
# gpg: Signature made Wed 27 Jun 2018 15:48:28 BST # gpg: using RSA key 9CA4ABB381AB73C8 # gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>" # gpg: aka "Stefan Hajnoczi <stefanha@gmail.com>" # Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8
* remotes/stefanha/tags/block-pull-request: linux-aio: properly bubble up errors from initialization compiler: add a sizeof_field() macro
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
f18793b0 | 14-Jun-2018 |
Stefan Hajnoczi <stefanha@redhat.com> |
compiler: add a sizeof_field() macro
Determining the size of a field is useful when you don't have a struct variable handy. Open-coding this is ugly.
This patch adds the sizeof_field() macro, which is similar to typeof_field(). Existing instances are updated to use the macro.
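The macro itself is tiny; its classic form (and, as far as can be told from the description, what this patch adds) is the null-pointer member trick shown below, followed by a self-contained usage example:

    #include <stdint.h>

    /* Size of a struct member without needing an instance of the struct. */
    #define sizeof_field(type, field) sizeof(((type *)0)->field)

    /* Example usage: */
    struct Header { uint8_t magic[4]; uint32_t payload_len; };
    _Static_assert(sizeof_field(struct Header, payload_len) == 4,
                   "payload_len is a 32-bit field");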
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Message-id: 20180614164431.29305-1-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
d071f4cd | 22-May-2018 |
Emilio G. Cota <cota@braap.org> |
trace: enable tracing of TCG atomics
We do not trace guest atomic accesses. Fix it.
Tested with a modified atomic_add-bench so that it executes a deterministic number of instructions, i.e. fixed seeding, no threading and fixed number of loop iterations instead of running for a certain time.
Before:
- With parallel_cpus = false (no clone syscall so it is never set to true): 220070 memory accesses
- With parallel_cpus = true (hard-coded): 212105 memory accesses <-- we're not tracing the atomics!
After: 220070 memory accesses regardless of parallel_cpus.
Signed-off-by: Emilio G. Cota <cota@braap.org> Message-id: 1527028012-21888-6-git-send-email-cota@braap.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>