Revision tags: v9.2.0, v9.1.2

# f7ff24a6 | 06-Nov-2024 | Alexander Graf <graf@amazon.com>
target/i386: Fix legacy page table walk
Commit b56617bbcb4 ("target/i386: Walk NPT in guest real mode") added logic to run the page table walker even in real mode if we are in NPT mode. That function then determined whether real mode or paging is active based on whether the pg_mode variable was 0.
Unfortunately pg_mode is 0 in two situations:
1) Paging is disabled (real mode)
2) Paging is in 2-level paging mode (32bit without PAE)
That means the walker now assumed that 2-level paging mode was real mode, breaking NetBSD as well as Windows XP.
To fix that, this patch adds a new PG flag to pg_mode which indicates whether paging is active at all and uses that to determine whether we are in real mode or not.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2654
Fixes: b56617bbcb4 ("target/i386: Walk NPT in guest real mode")
Fixes: 01bfc2e2959 (commit b56617bbcb4 in stable-9.1.x series)
Signed-off-by: Alexander Graf <graf@amazon.com>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Link: https://lore.kernel.org/r/20241106154329.67218-1-graf@amazon.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 8fa11a4df344f58375eb26b3b65004345f21ef37)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
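To make the ambiguity concrete, here is a small standalone C sketch; the PG_MODE_* names and bit positions are invented for this illustration and are not QEMU's actual pg_mode definitions:

    /* Standalone sketch: why "pg_mode == 0" cannot distinguish real mode
     * from 2-level (non-PAE) paging, and why an explicit "paging enabled"
     * bit can.  Flag values are made up for this example. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PG_MODE_PG   (1u << 0)   /* CR0.PG: paging enabled at all */
    #define PG_MODE_PAE  (1u << 1)   /* CR4.PAE */
    #define PG_MODE_LMA  (1u << 2)   /* EFER.LMA: long mode active */

    static uint32_t make_pg_mode(bool cr0_pg, bool cr4_pae, bool efer_lma)
    {
        uint32_t mode = 0;
        if (cr0_pg) {
            mode |= PG_MODE_PG;
        }
        if (cr4_pae) {
            mode |= PG_MODE_PAE;
        }
        if (efer_lma) {
            mode |= PG_MODE_LMA;
        }
        return mode;
    }

    int main(void)
    {
        uint32_t real_mode = make_pg_mode(false, false, false);
        uint32_t two_level = make_pg_mode(true, false, false); /* 32bit, no PAE */

        /* Old check: no dedicated PG bit existed, so "is the mode word 0"
         * makes both configurations look identical -- the reported bug. */
        printf("old: real=%d two-level=%d\n",
               (real_mode & ~PG_MODE_PG) == 0, (two_level & ~PG_MODE_PG) == 0);

        /* New check: consult the explicit "paging enabled" bit. */
        printf("new: real=%d two-level=%d\n",
               !(real_mode & PG_MODE_PG), !(two_level & PG_MODE_PG));
        return 0;
    }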

# f0cfd067 | 09-Nov-2024 | Peter Maydell <peter.maydell@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* i386: fix -M isapc with ubsan
* i386: add sha512, sm3, sm4 feature bits
* eif: fix Coverity issues
* i386/hvf: x2APIC support
* i386/hvf: fixes
* i386/tcg: fix 2-stage page walk
* eif: fix coverity issues
* rust: fix subproject warnings with new rust, avoid useless cmake fallback
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmcvEHYUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroNn4AgAl+GaD/fHHU+9TCyKRg1Ux/iTSkqh # PBs76H2w879TDeuPkKZlnYqc7n85rlh1cJwQz01X79OFEeXP6oHiI9Q6qyflSxF0 # V+DrJhZc1CtZBChx9ZUMWUAWjYJFFjNwYA7/LLuLl6RfOm8bIJUWIhDjliJ4Bcea # 5VI13OtTvYvVurRLUBXWU0inh9KLHIw4RlNgi8Pmb2wNXkPxENpWjsGqWH0jlKS5 # ZUNgTPx/eY5MDwKoAyif2gsdfJlxGxgkpz3Mic4EGE9cw1cRASI3tKb3KH61hNTE # K21UI0+/+kv27cPnpZzYMDSkrJs7PEgVJ/70NRmAJySA76IG3XSsb5+xZg== # =pI4/ # -----END PGP SIGNATURE----- # gpg: Signature made Sat 09 Nov 2024 07:34:14 GMT # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  hw/i386/pc: Don't try to init PCI NICs if there is no PCI bus
  rust: qemu-api-macros: always process subprojects before dependencies
  i386/hvf: Removes duplicate/shadowed variables in hvf_vcpu_exec
  i386/hvf: Raise exception on error setting APICBASE
  i386/hvf: Fixes startup memory leak (vmcs caps)
  i386/hvf: Fix for UB in handling CPUID function 0xD
  i386/hvf: Integrates x2APIC support with hvf accel
  eif: cope with huge section sizes
  eif: cope with huge section offsets
  target/i386: Fix legacy page table walk
  rust: add meson_version to all subprojects
  target/i386/hvf: fix clang compilation warning
  target/i386: add sha512, sm3, sm4 feature bits
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# 8fa11a4d | 06-Nov-2024 | Alexander Graf <graf@amazon.com>
target/i386: Fix legacy page table walk
Commit b56617bbcb4 ("target/i386: Walk NPT in guest real mode") added logic to run the page table walker even in real mode if we are in NPT mode. That function then determined whether real mode or paging is active based on whether the pg_mode variable was 0.
Unfortunately pg_mode is 0 in two situations:
1) Paging is disabled (real mode)
2) Paging is in 2-level paging mode (32bit without PAE)
That means the walker now assumed that 2-level paging mode was real mode, breaking NetBSD as well as Windows XP.
To fix that, this patch adds a new PG flag to pg_mode which indicates whether paging is active at all and uses that to determine whether we are in real mode or not.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2654
Fixes: b56617bbcb4 ("target/i386: Walk NPT in guest real mode")
Signed-off-by: Alexander Graf <graf@amazon.com>
Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
Link: https://lore.kernel.org/r/20241106154329.67218-1-graf@amazon.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Revision tags: v9.1.1, v9.1.0

# 6ad00eb0 | 18-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: Use DPL-level accesses for interrupts and call gates
Stack accesses should be explicit and use the privilege level of the target stack. This ensures that SMAP is not applied when the target stack is in ring 3.
This fixes a bug wherein i386/tcg assumed that an interrupt return, or a far call using the CALL or JMP instruction, was always going from kernel or user mode to kernel mode when using a call gate. This assumption is violated if the call gate has a DPL that is greater than 0.
Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/249
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e136648c5c95ee4ea233cccf999c07e065bef26d)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
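As a rough sketch of the rule being applied (names invented; QEMU routes this through its MMU-index machinery rather than helpers like these): the push uses the privilege level of the stack it writes to, so a ring-3 target stack is accessed as a user access and SMAP does not apply.

    #include <stdbool.h>

    /* Illustrative only: privilege level used for the stack write when an
     * interrupt or call gate is delivered.  If the CPU switches to an
     * inner stack, the write targets the new stack and uses the target
     * privilege level (DPL); otherwise it stays on the current stack. */
    static int stack_access_pl(int cpl, int target_dpl, bool switches_stack)
    {
        return switches_stack ? target_dpl : cpl;
    }

    /* An access performed at privilege level 3 is a user access, so
     * SMAP's supervisor-access restrictions do not apply to it. */
    static bool access_is_user(int access_pl)
    {
        return access_pl == 3;
    }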

# f1dd6408 | 18-Oct-2024 | Peter Maydell <peter.maydell@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* tcg/s390x: Fix for TSTEQ/TSTNE
* target/i386: Fixes for IN and OUT with REX prefix
* target/i386: New CPUID features and logic fixes
* target/i386: Add support save/load HWCR MSR
* target/i386: Move more instructions to new decoder; separate decoding and IR generation
* target/i386/tcg: Use DPL-level accesses for interrupts and call gates
* accel/kvm: perform capability checks on VM file descriptor when necessary
* accel/kvm: dynamically sized kvm memslots array
* target/i386: fixes for Hyper-V
* docs/system: Add recommendations to Hyper-V enlightenments doc
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmcRTIoUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroMCewf8DnZbz7/0beql2YycrdPJZ3xnmfWW # JenWKIThKHGWRTW2ODsac21n0TNXE0vsOYjw/Z/dNLO+72sLcqvmEB18+dpHAD2J # ltb8OvuROc3nn64OEi08qIj7JYLmJ/osroI+6NnZrCOHo8nCirXoCHB7ZPqAE7/n # yDnownWaduXmXt3+Vs1mpqlBklcClxaURDDEQ8CGsxjC3jW03cno6opJPZpJqk0t # 6aX92vX+3lNhIlije3QESsDX0cP1CFnQmQlNNg/xzk+ZQO+vSRrPV+A/N9xf8m1b # HiaCrlBWYef/sLgOHziOSrJV5/N8W0GDEVYDmpEswHE81BZxrOTZLxqzWw== # =qwfc # -----END PGP SIGNATURE----- # gpg: Signature made Thu 17 Oct 2024 18:42:34 BST # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (26 commits)
  target/i386: Use only 16 and 32-bit operands for IN/OUT
  accel/kvm: check for KVM_CAP_MEMORY_ATTRIBUTES on vm
  accel/kvm: check for KVM_CAP_MULTI_ADDRESS_SPACE on vm
  accel/kvm: check for KVM_CAP_READONLY_MEM on VM
  target/i386/tcg: Use DPL-level accesses for interrupts and call gates
  KVM: Rename KVMState->nr_slots to nr_slots_max
  KVM: Rename KVMMemoryListener.nr_used_slots to nr_slots_used
  KVM: Define KVM_MEMSLOTS_NUM_MAX_DEFAULT
  KVM: Dynamic sized kvm memslots array
  target/i386: assert that cc_op* and pc_save are preserved
  target/i386: list instructions still in translate.c
  target/i386: do not check PREFIX_LOCK in old-style decoder
  target/i386: convert CMPXCHG8B/CMPXCHG16B to new decoder
  target/i386: decode address before going back to translate.c
  target/i386: convert bit test instructions to new decoder
  tcg/s390x: fix constraint for 32-bit TSTEQ/TSTNE
  docs/system: Add recommendations to Hyper-V enlightenments doc
  target/i386: Make sure SynIC state is really updated before KVM_RUN
  target/i386: Exclude 'hv-syndbg' from 'hv-passthrough'
  target/i386: Fix conditional CONFIG_SYNDBG enablement
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# e136648c | 18-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: Use DPL-level accesses for interrupts and call gates
Stack accesses should be explicit and use the privilege level of the target stack. This ensures that SMAP is not applied when the target stack is in ring 3.
This fixes a bug wherein i386/tcg assumed that an interrupt return, or a far call using the CALL or JMP instruction, was always going from kernel or user mode to kernel mode when using a call gate. This assumption is violated if the call gate has a DPL that is greater than 0.
Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/249
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

# f36538b8 | 20-Aug-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-misc-20240821' of https://gitlab.com/rth7680/qemu into staging
target/i386: Fix carry flag for BLSI
target/i386: Fix tss access size in switch_tss_ra
linux-user: Handle short reads in mmap_h_gt_g
bsd-user: Handle short reads in mmap_h_gt_g
# -----BEGIN PGP SIGNATURE----- # # iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmbFTzUdHHJpY2hhcmQu # aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/9+Qf9GiXgmZU51Rk9LaNz # zlaUPIJy/ER+lCpkaeIqMzJ3EysuWa5tZFOrg21rqmfMr19AIuPSRmCFXuwkF6s+ # DnCiToloM/EvczmVQALE/KhOOm0dwvoAwSFBFTCPfg/IKjb9OcOWHGJVSgFV/1u6 # vrTqUc6xny6QhMjTuVWziE/VAH0V9wRjToii2qN9k/5e2oF1hzDGjHx7T9d//4j5 # hbRyzH0luexvob7JCpxHDELlarkoyR5a7cJQHTj0VTfmR5g6yEMLn+z7ocBcUF09 # pJzcRu2BHUYjzQgV6wqdj5aw8N26c+e8pm1XIA8S1CwBnLRnkuuCKKD7I0tdYvFA # VgDntQ== # =XyeR # -----END PGP SIGNATURE----- # gpg: Signature made Wed 21 Aug 2024 12:21:41 PM AEST # gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F # gpg: issuer "richard.henderson@linaro.org" # gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [ultimate]
* tag 'pull-misc-20240821' of https://gitlab.com/rth7680/qemu:
  target/i386: Fix tss access size in switch_tss_ra
  target/i386: Fix carry flag for BLSI
  target/i386: Split out gen_prepare_val_nz
  bsd-user: Handle short reads in mmap_h_gt_g
  linux-user: Handle short reads in mmap_h_gt_g
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

# ded1db48 | 19-Aug-2024 | Richard Henderson <richard.henderson@linaro.org>
target/i386: Fix tss access size in switch_tss_ra
The two limit_max variables represent size - 1, just like the encoding in the GDT, thus the 'old' access was off by one. Access the minimal size of the new tss: the complete tss contains the iopb, which may be a larger block than the access api expects, and irrelevant because the iopb is not accessed during the switch itself.
Fixes: 8b131065080a ("target/i386/tcg: use X86Access for TSS access")
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2511
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-Id: <20240819074052.207783-1-richard.henderson@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
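The off-by-one is the usual descriptor-limit convention; a minimal standalone sketch (not the QEMU code):

    #include <assert.h>
    #include <stdint.h>

    /* A descriptor/TSS "limit" field encodes size - 1, so the byte count
     * that covers the whole structure is limit + 1, not limit. */
    static uint32_t tss_access_size(uint32_t limit_max)
    {
        return limit_max + 1;
    }

    int main(void)
    {
        /* A 32-bit TSS without an I/O permission bitmap is 104 bytes,
         * so its descriptor limit is 103. */
        assert(tss_access_size(103) == 104);
        return 0;
    }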

# da4f7b85 | 30-Jul-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-target-arm-20240730' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
* hw/char/bcm2835_aux: Fix assert when receive FIFO fills up
* hw/arm/smmuv3: Assert input to oas2bits() is valid
* target/arm/kvm: Set PMU for host only when available
* target/arm/kvm: Do not silently remove PMU
* hvf: arm: Properly disable PMU
* hvf: arm: Do not advance PC when raising an exception
* hw/misc/bcm2835_property: several minor bugfixes
* target/arm: Don't assert for 128-bit tile accesses when SVL is 128
* target/arm: Fix UMOPA/UMOPS of 16-bit values
* target/arm: Ignore SMCR_EL2.LEN and SVCR_EL2.LEN if EL2 is not enabled
* system/physmem: Where we assume we have a RAM MR, assert it
* sh4, i386, m68k, xtensa, tricore, arm: fix minor Coverity issues
# -----BEGIN PGP SIGNATURE----- # # iQJNBAABCAA3FiEE4aXFk81BneKOgxXPPCUl7RQ2DN4FAmaotMAZHHBldGVyLm1h # eWRlbGxAbGluYXJvLm9yZwAKCRA8JSXtFDYM3rsAEACIzQDAMKWy8DlB8o4W+a/l # yqGijQ5e0JdAifEA2rsDbnaIs/kqDzVxBc0dgIXDxETe5LVZHB742q4vMbaSpSb2 # P8xuL0Q7NRpcIN4THPwLxW0wED+asaJc2TeyImPQRTRhLgk6yn+/4hpqQRkT0mxe # oxxN8bnx9RssqHZ6pQCv5HYNLex3a7dljXlbjWr4KpRRFSMls1cxPSphsK1aZ1xV # 3NXh/vgHcM0LquwxdF0uaPdPGQ1SyZb5KZ9khd0o4cpDivkns/hXQpyJ45nHsypK # kG/TbFQsXPorprWCqBDOXY9rCM6eBDuK89mClKA34EzukIFlSMfIgxfezCzNIXaU # o/cJCGpSzZnCdvZagEWDzkdryE3QFmmpBFRs8mcS3sb+/gm0O8YyMoCrdV87O3c5 # Y/NY1adOKTVf8FLlT3jR93k4pT6wiqIQND13fN3EbnUqfrGpocSyMD0VsYBj/gij # PHPBFHAwCEDKVZSq6SViXdkS15arqL2V2mnOogeY1v0jTj2YRG3FyjrPOatg6tF5 # 3MoUBjTAp9ENtYHAY6mCr2vAYw6l1xZTKUwkXiO/i8rc4XQQ+A3AHhQLtWdu2K5+ # dv1E7QKur5O6FDmJxB5s/vGppfnkSUD6EEvViNSCj+hX0U9SyT80e/KClMehgJqQ # +oME+fRoBHj1DUw4qasWsg== # =NNxN # -----END PGP SIGNATURE----- # gpg: Signature made Tue 30 Jul 2024 07:39:12 PM AEST # gpg: using RSA key E1A5C593CD419DE28E8315CF3C2525ED14360CDE # gpg: issuer "peter.maydell@linaro.org" # gpg: Good signature from "Peter Maydell <peter.maydell@linaro.org>" [full] # gpg: aka "Peter Maydell <pmaydell@gmail.com>" [full] # gpg: aka "Peter Maydell <pmaydell@chiark.greenend.org.uk>" [full] # gpg: aka "Peter Maydell <peter@archaic.org.uk>" [unknown]
* tag 'pull-target-arm-20240730' of https://git.linaro.org/people/pmaydell/qemu-arm: (21 commits)
  system/physmem: Where we assume we have a RAM MR, assert it
  target/sh4: Avoid shift into sign bit in update_itlb_use()
  target/i386: Remove dead assignment to ss in do_interrupt64()
  target/m68k: avoid shift into sign bit in dump_address_map()
  target/xtensa: Make use of 'segment' in pptlb helper less confusing
  target/tricore: Use unsigned types for bitops in helper_eq_b()
  target/arm: Ignore SMCR_EL2.LEN and SVCR_EL2.LEN if EL2 is not enabled
  target/arm: Avoid shifts by -1 in tszimm_shr() and tszimm_shl()
  target/arm: Fix UMOPA/UMOPS of 16-bit values
  target/arm: Don't assert for 128-bit tile accesses when SVL is 128
  hw/misc/bcm2835_property: Reduce scope of variables in mbox push function
  hw/misc/bcm2835_property: Restrict scope of start_num, number, otp_row
  hw/misc/bcm2835_property: Avoid overflow in OTP access properties
  hw/misc/bcm2835_property: Fix handling of FRAMEBUFFER_SET_PALETTE
  hvf: arm: Do not advance PC when raising an exception
  hvf: arm: Properly disable PMU
  hvf: arm: Raise an exception for sysreg by default
  target/arm/kvm: Do not silently remove PMU
  target/arm/kvm: Set PMU for host only when available
  hw/arm/smmuv3: Assert input to oas2bits() is valid
  ...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

# bde8adb8 | 23-Jul-2024 | Peter Maydell <peter.maydell@linaro.org>
target/i386: Remove dead assignment to ss in do_interrupt64()
Coverity points out that in do_interrupt64() in the "to inner privilege" codepath we set "ss = 0", but because we also set "new_stack = 1" there, later in the function we will always override that value of ss with "ss = 0 | dpl".
Remove the unnecessary initialization of ss, which allows us to reduce the scope of the variable to only where it is used. Borrow a comment from helper_lcall_protected() that explains what "0 | dpl" means here.
Resolves: Coverity CID 1527395
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Message-id: 20240723162525.1585743-1-peter.maydell@linaro.org

# 58ee924b | 17-Jul-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* target/i386/tcg: fixes for seg_helper.c
* SEV: Don't allow automatic fallback to legacy KVM_SEV_INIT, but also don't use it by default
* scsi: honor bootindex again for legacy drives
* hpet, utils, scsi, build, cpu: miscellaneous bugfixes
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmaWoP0UHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroOqfggAg3jxUp6B8dFTEid5aV6qvT4M6nwD # TAYcAl5kRqTOklEmXiPCoA5PeS0rbr+5xzWLAKgkumjCVXbxMoYSr0xJHVuDwQWv # XunUm4kpxJBLKK3uTGAIW9A21thOaA5eAoLIcqu2smBMU953TBevMqA7T67h22rp # y8NnZWWdyQRH0RAaWsCBaHVkkf+DuHSG5LHMYhkdyxzno+UWkTADFppVhaDO78Ba # Egk49oMO+G6of4+dY//p1OtAkAf4bEHePKgxnbZePInJrkgHzr0TJWf9gERWFzdK # JiM0q6DeqopZm+vENxS+WOx7AyDzdN0qOrf6t9bziXMg0Rr2Z8bu01yBCQ== # =cZhV # -----END PGP SIGNATURE----- # gpg: Signature made Wed 17 Jul 2024 02:34:05 AM AEST # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  target/i386/tcg: save current task state before loading new one
  target/i386/tcg: use X86Access for TSS access
  target/i386/tcg: check for correct busy state before switching to a new task
  target/i386/tcg: Compute MMU index once
  target/i386/tcg: Introduce x86_mmu_index_{kernel_,}pl
  target/i386/tcg: Reorg push/pop within seg_helper.c
  target/i386/tcg: use PUSHL/PUSHW for error code
  target/i386/tcg: Allow IRET from user mode to user mode with SMAP
  target/i386/tcg: Remove SEG_ADDL
  target/i386/tcg: fix POP to memory in long mode
  hpet: fix HPET_TN_SETVAL for high 32-bits of the comparator
  hpet: fix clamping of period
  docs: Update description of 'user=username' for '-run-with'
  qemu/timer: Add host ticks function for LoongArch
  scsi: fix regression and honor bootindex again for legacy drives
  hw/scsi/lsi53c895a: bump instruction limit in scripts processing to fix regression
  disas: Fix build against Capstone v6
  cpu: Free queued CPU work
  Revert "qemu-char: do not operate on sources from finalize callbacks"
  i386/sev: Don't allow automatic fallback to legacy KVM_SEV*_INIT
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

# 6a079f2e | 19-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: save current task state before loading new one
This is how the steps are ordered in the manual. EFLAGS.NT is overwritten after the fact in the saved image.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

# 8b131065 | 18-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: use X86Access for TSS access
This takes care of probing the vaddr range in advance, and is also faster because it avoids repeated TLB lookups. It also matches the Intel manual better, as it says "Checks that the current (old) TSS, new TSS, and all segment descriptors used in the task switch are paged into system memory"; note however that it's not clear how the processor checks for segment descriptors, and this check is not included in the AMD manual.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
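The pattern being introduced, as a hedged sketch with invented names (QEMU's actual X86Access type differs in detail): validate and translate the full range once, then read individual fields without further per-access TLB lookups.

    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative only: a "probed range" object.  The host pointer is
     * assumed to come from a one-time address translation plus permission
     * check over the whole guest range. */
    typedef struct ProbedRange {
        void *host;            /* host pointer for the validated range */
        uint64_t guest_base;   /* guest virtual address it starts at */
        unsigned size;
    } ProbedRange;

    static uint32_t range_ldl(const ProbedRange *r, unsigned offset)
    {
        uint32_t val;
        assert(offset + sizeof(val) <= r->size);  /* range already checked */
        memcpy(&val, (const char *)r->host + offset, sizeof(val));
        return val;
    }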

# 05d41bbc | 19-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: check for correct busy state before switching to a new task
This step is listed in the Intel manual: "Checks that the new task is available (call, jump, exception, or interrupt) or busy (IRET return)".
The AMD manual lists the same operation under the "Preventing recursion" paragraph of "12.3.4 Nesting Tasks", though it is not clear if the processor checks the busy bit in the IRET case.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
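For reference, the check can be sketched as follows (illustrative, not QEMU's code); 32-bit TSS descriptor types are 0x9 (available) and 0xB (busy), with bit 1 acting as the busy bit:

    #include <stdbool.h>
    #include <stdint.h>

    static bool tss_desc_busy(uint32_t type)
    {
        return (type & 0x2) != 0;      /* 0x9 -> available, 0xB -> busy */
    }

    /* CALL/JMP/INT/exception require an available (non-busy) TSS;
     * IRET returns to a task that must still be marked busy. */
    static bool busy_state_ok(uint32_t type, bool is_iret)
    {
        return tss_desc_busy(type) == is_iret;
    }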

# 8053862a | 18-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: Compute MMU index once
Add the MMU index to the StackAccess struct, so that it can be cached or (in the next patch) computed from information that is not in CPUX86State.
Co-developed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

# 059368bc | 17-Jun-2024 | Richard Henderson <richard.henderson@linaro.org>
target/i386/tcg: Reorg push/pop within seg_helper.c
Interrupts and call gates should use accesses with the DPL as the privilege level. While computing the applicable MMU index is easy, the harder thing is how to plumb it in the code.
One possibility could be to add a single argument to the PUSH* macros for the privilege level, but this is repetitive and risks confusion between the involved privilege levels.
Another possibility is to pass both CPL and DPL, and adjusting both PUSH* and POP* to use specific privilege levels (instead of using cpu_{ld,st}*_data). This makes the code more symmetric.
However, a more complicated but much nicer approach is to use a structure to contain the stack parameters, env, unwind return address, and rewrite the macros into functions. The struct provides an easy home for the MMU index as well.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-4-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
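A sketch of the structure-based approach described above; the field names are illustrative rather than the exact upstream layout, and it assumes QEMU's CPUX86State/target_ulong types and the cpu_stl_mmuidx_ra store helper for the guest write:

    /* Sketch only: gather the parameters of a stack walk once, then let
     * push/pop functions use them, including the MMU index that encodes
     * the privilege level of the accesses. */
    typedef struct StackAccess {
        CPUX86State *env;
        uintptr_t ra;             /* unwind return address for faults */
        target_ulong ss_base;
        target_ulong sp;
        target_ulong sp_mask;
        int mmu_index;
    } StackAccess;

    static void pushl(StackAccess *sa, uint32_t val)
    {
        sa->sp -= 4;
        cpu_stl_mmuidx_ra(sa->env, sa->ss_base + (sa->sp & sa->sp_mask),
                          val, sa->mmu_index, sa->ra);
    }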

# 312ef324 | 18-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: use PUSHL/PUSHW for error code
Do not pre-decrement esp, let the macros subtract the appropriate operand size.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

# 0bd385e7 | 11-Jun-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386/tcg: Allow IRET from user mode to user mode with SMAP
This fixes a bug wherein i386/tcg assumed an interrupt return using the IRET instruction was always returning from kernel mode to either kernel mode or user mode. This assumption is violated when IRET is used as a clever way to restore thread state, as for example in the dotnet runtime. There, IRET returns from user mode to user mode.
This bug is that stack accesses from IRET and RETF, as well as accesses to the parameters in a call gate, are normal data accesses using the current CPL. This manifested itself as a page fault in the guest Linux kernel due to SMAP preventing the access.
This bug appears to have been in QEMU since the beginning.
Analyzed-by: Robert R. Henry <rrh.henry@gmail.com>
Co-developed-by: Robert R. Henry <rrh.henry@gmail.com>
Signed-off-by: Robert R. Henry <rrh.henry@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
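The SMAP interaction can be reduced to a sketch (simplified: EFLAGS.AC and implicit supervisor accesses are ignored): the fault only triggers for supervisor accesses to user-accessible pages, so performing the IRET stack pops as CPL-3 accesses avoids it.

    #include <stdbool.h>

    /* Simplified SMAP rule, for illustration only. */
    static bool smap_faults(bool cr4_smap, bool supervisor_access,
                            bool page_user_accessible)
    {
        return cr4_smap && supervisor_access && page_user_accessible;
    }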

# a7cf4949 | 17-Jun-2024 | Richard Henderson <richard.henderson@linaro.org>
target/i386/tcg: Remove SEG_ADDL
This truncation is now handled by MMU_*32_IDX. The introduction of MMU_*32_IDX in fact applied correct 32-bit wraparound to 16-bit accesses with a high segment base (e.g. big real mode or vm86 mode), which did not use SEG_ADDL.
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Link: https://lore.kernel.org/r/20240617161210.4639-3-richard.henderson@linaro.org
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
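The wraparound behaviour referred to above, shown as a standalone sketch (the addresses are only examples):

    #include <stdint.h>
    #include <stdio.h>

    /* With a 32-bit MMU index, segment base + offset wraps modulo 2^32
     * instead of carrying into bit 32. */
    static uint32_t linear_addr32(uint32_t seg_base, uint32_t offset)
    {
        return seg_base + offset;   /* uint32_t arithmetic wraps */
    }

    int main(void)
    {
        /* e.g. a high segment base, as in big real mode */
        printf("0x%08x\n", linear_addr32(0xfffff000u, 0x2000u)); /* 0x00001000 */
        return 0;
    }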

# 85743f54 | 17-Jun-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* i386: fix issue with cache topology passthrough
* scsi-disk: migrate emulated requests
* i386/sev: fix Coverity issues
* i386/tcg: more conversions to new decoder
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmZv6kMUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroOn4Af/evnpsae1fm8may1NQmmezKiks/4X # cR0GaQ7w75Oas05jKsG7Xnrq3Vn6p5wllf3Wf00p7F1iJX18azY9rQgIsUVUgVem # /EIZk1eM6+mDxuIG0taPxc5Aw3cfIBWAjUmzsXrSr55e/wyiIxZCeUo2zk8Il+iL # Z4ceNzY5PZzc2Fl10D3cGs/+ynfiDM53ucwe3ve2T6NrxEVfKQPp5jkIUkBUba6z # zM5O4Q5KTEZYVth1gbDTB/uUJLUFjQ12kCQfRCNX+bEPDHwARr0UWr/Oxtz0jZSd # FvXohz7tI+v+ph0xHyE4tEFqryvLCII1td2ohTAYZZXNGkjK6XZildngBw== # =m4BE # -----END PGP SIGNATURE----- # gpg: Signature made Mon 17 Jun 2024 12:48:19 AM PDT # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (25 commits)
  target/i386: SEV: do not assume machine->cgs is SEV
  target/i386: convert CMPXCHG to new decoder
  target/i386: convert XADD to new decoder
  target/i386: convert LZCNT/TZCNT/BSF/BSR/POPCNT to new decoder
  target/i386: convert SHLD/SHRD to new decoder
  target/i386: adapt gen_shift_count for SHLD/SHRD
  target/i386: pull load/writeback out of gen_shiftd_rm_T1
  target/i386: convert non-grouped, helper-based 2-byte opcodes
  target/i386: split X86_CHECK_prot into PE and VM86 checks
  target/i386: finish converting 0F AE to the new decoder
  target/i386: fix bad sorting of entries in the 0F table
  target/i386: replace read_crN helper with read_cr8
  target/i386: convert MOV from/to CR and DR to new decoder
  target/i386: fix processing of intercept 0 (read CR0)
  target/i386: replace NoSeg special with NoLoadEA
  target/i386: change X86_ENTRYwr to use T0, use it for moves
  target/i386: change X86_ENTRYr to use T0
  target/i386: put BLS* input in T1, use generic flag writeback
  target/i386: rewrite flags writeback for ADCX/ADOX
  target/i386: remove CPUX86State argument from generator functions
  ...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

# ae541c0e | 25-May-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386: convert non-grouped, helper-based 2-byte opcodes
These have very simple generators and no need for complex group decoding. Apart from LAR/LSL which are simplified to use gen_op_deposit_reg_v and movcond, the code is generally lifted from translate.c into the generators.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

# 3e246da2 | 08-Jun-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* scsi-disk: Don't silently truncate serial number
* backends/hostmem: Report error on unavailable qemu_madvise() features or unaligned memory sizes
* target/i386: fixes and documentation for INHIBIT_IRQ/TF/RF and debugging
* i386/hvf: Adds support for INVTSC cpuid bit
* i386/hvf: Fixes for dirty memory tracking
* i386/hvf: Use hv_vcpu_interrupt() and hv_vcpu_run_until()
* hvf: Cleanups
* stubs: fixes for --disable-system build
* i386/kvm: support for FRED
* i386/kvm: fix MCE handling on AMD hosts
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmZkF2oUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroPNlQf+N9y6Eh0nMEEQ69twtV8ytglTY+uX # FsogvnsXHNMVubOWmmeItM6kFXTAkR9cmFaL8dqI1Gs03xEQdQXbF1KejJZOAZVl # RQMOW8Fg2Afr+0lwqCXHvhsmZ4hr5yUkRndyucA/E9AO2uGrtgwsWGDBGaHJOZIA # lAsEMOZgKjXHZnefXjhMrvpk/QNovjEV6f1RHX3oKZjKSI5/G4IqGSmwNYToot8p # 2fgs4Qti4+1gNyM2oBLq7cCMjMS61tSxOMH4uqVoIisjyckPlAFRvc+DXtKsUAAs # 9AgM++pNgpB0IXv67czRUNdRoK7OI8I0ULhI4qHXi6Yg2QYAHqpQ6WL4Lg== # =RP7U # -----END PGP SIGNATURE----- # gpg: Signature made Sat 08 Jun 2024 01:33:46 AM PDT # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (42 commits)
  python: mkvenv: remove ensure command
  Revert "python: use vendored tomli"
  i386: Add support for overflow recovery
  i386: Add support for SUCCOR feature
  i386: Fix MCE support for AMD hosts
  docs: i386: pc: Avoid mentioning limit of maximum vCPUs
  target/i386: Add get/set/migrate support for FRED MSRs
  target/i386: enumerate VMX nested-exception support
  vmxcap: add support for VMX FRED controls
  target/i386: mark CR4.FRED not reserved
  target/i386: add support for FRED in CPUID enumeration
  hvf: Makes assert_hvf_ok report failed expression
  i386/hvf: Updates API usage to use modern vCPU run function
  i386/hvf: In kick_vcpu use hv_vcpu_interrupt to force exit
  i386/hvf: Fixes dirty memory tracking by page granularity RX->RWX change
  hvf: Consistent types for vCPU handles
  i386/hvf: Fixes some compilation warnings
  i386/hvf: Adds support for INVTSC cpuid bit
  stubs/meson: Fix qemuutil build when --disable-system
  scsi-disk: Don't silently truncate serial number
  ...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

# 69cb498c | 29-May-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386: fix pushed value of EFLAGS.RF
When preparing an exception stack frame for a fault exception, the value pushed for RF is 1. Take that into account. The same should be true of interrupts for repeated string instructions, but the situation there is complicated.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
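The relevant bit, as a small sketch (illustrative, not the QEMU helper): RF is bit 16 of EFLAGS, and the image pushed for a fault has it set so that instruction breakpoints are suppressed when the faulting instruction is restarted.

    #include <stdint.h>

    #define EFLAGS_RF (1u << 16)    /* resume flag */

    /* Illustrative: flags image pushed on the exception frame of a fault. */
    static uint32_t fault_frame_eflags(uint32_t eflags)
    {
        return eflags | EFLAGS_RF;
    }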

# 78ef97c0 | 25-May-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
Build system and target/i386/translate.c cleanups
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmZRy1gUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroMTtQf/ZQskuqZyTrDhB/uVUT8oT5JNKQNS # GbFSgDK7jDdBeU3UmoYrlx9vfFR/mH5cA88MlusUy0SjQBNo4onD725o6Vvum/LW # DPe5ZyE34wvOasM7KXqJsD+2SttjaVjCXN4ip+E9WL5By2TWJgrk6IgTtvAhT9cd # LWb5OEIInaq7ZiWz3EpjmGvZd0M4mxqXi5OeDvmoFyf38xElfbWZWbfhJv+H5L1X # stivPBtUbXOzh63NL491hUYQtiAWlow8Qcnn7CYRflb6Vdd4QPK+6W8FX5KyU2eC # bXRXloW7wjEAC9pyiVky1SCvtNg7AVFL+9kxwiGreoZfo+/IMA+NP6pGOg== # =hpWy # -----END PGP SIGNATURE----- # gpg: Signature made Sat 25 May 2024 04:28:24 AM PDT # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full]
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu: (24 commits)
  migration: remove unnecessary zlib dependency
  meson: do not query modules before they are processed
  tcg: include dependencies in static_library()
  meson: remove unnecessary dependency
  meson: remove unnecessary reference to libm
  target/i386: remove aflag argument of gen_lea_v_seg
  target/i386: clean up repeated string operations
  target/i386: introduce gen_lea_ss_ofs
  target/i386: use mo_stacksize more
  target/i386: inline gen_add_A0_ds_seg
  target/i386: split gen_ldst_modrm for load and store
  target/i386: reg in gen_ldst_modrm is always OR_TMP0
  target/i386: raze the gen_eob* jungle
  target/i386: assert that gen_update_eip_cur and gen_update_eip_next are the same in tb_stop
  target/i386: avoid calling gen_eob_inhibit_irq before tb_stop
  target/i386: avoid calling gen_eob_syscall before tb_stop
  target/i386: document and group DISAS_* constants
  target/i386: set CC_OP in helpers if they want CC_OP_EFLAGS
  target/i386: cpu_load_eflags already sets cc_op
  target/i386: remove unnecessary gen_update_cc_op before gen_eob*
  ...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

# abdcc5c8 | 16-May-2024 | Paolo Bonzini <pbonzini@redhat.com>
target/i386: set CC_OP in helpers if they want CC_OP_EFLAGS
Mark cc_op as clean and do not spill it at the end of the translation block. Technically this is a tiny bit less efficient, but:
* it results in translations that are a tiny bit smaller
* for most of these instructions, it is not unlikely that they are close to the end of the basic block, in which case cc_op would not be overwritten
* anyway the cost is probably dwarfed by that of computing flags.
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>