Revision tags: v9.2.0, v9.1.2 |
|
#
de6c4c82 |
| 25-Oct-2024 |
Pierrick Bouvier <pierrick.bouvier@linaro.org> |
target/i386: fix hang when using slow path for ptw_setl
When instrumenting memory accesses for plugins, we force memory accesses to use the slow path for the mmu [1]. This creates a situation where we end up calling ptw_setl_slow. This was fixed recently in [2], but the issue could still appear outside of the plugins use case.
Since this function gets called during a cpu_exec, start_exclusive then hangs. This exclusive section was introduced initially for security reasons [3].
I suspect this code path was never triggered, because ptw_setl_slow would always be called transitively from cpu_exec, resulting in a hang.
[1] https://gitlab.com/qemu-project/qemu/-/commit/6d03226b42247b68ab2f0b3663e0f624335a4055 [2] https://gitlab.com/qemu-project/qemu/-/commit/115ade42d50144c15b74368d32dc734ea277d853 [2] https://gitlab.com/qemu-project/qemu/-/commit/3a41aa8226bdaa709121515faea6e0e5ad1efa39 in 9.1.x series [3] https://gitlab.com/qemu-project/qemu/-/issues/279
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/2566 Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20241025175857.2554252-2-pierrick.bouvier@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit 7ba055b49b74c4d2f4a338c5198485bdff373fb1) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: mention [2] in 9.1.x series)
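To illustrate the mechanism behind this fix, here is a minimal hand-written sketch (not QEMU's actual ptw_setl code): the fast path updates accessed/dirty-style bits in a PTE with a plain compare-and-swap, and only when that CAS cannot be used does the slow path need an exclusive section, i.e. every other vCPU stopped — which is exactly what must not be requested from a thread that is itself still inside cpu_exec.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Fast path: set bits in a 32-bit PTE with a compare-and-swap.
     * Returns false if another vCPU changed the PTE concurrently; only
     * then would a slow path (run with all other vCPUs stopped) be needed. */
    static bool pte_set_bits_fast(_Atomic uint32_t *pte, uint32_t old, uint32_t set)
    {
        uint32_t new_val = old | set;
        return atomic_compare_exchange_strong(pte, &old, new_val);
    }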
|
#
f7ff24a6 |
| 06-Nov-2024 |
Alexander Graf <graf@amazon.com> |
target/i386: Fix legacy page table walk
Commit b56617bbcb4 ("target/i386: Walk NPT in guest real mode") added logic to run the page table walker even in real mode if we are in NPT mode. That function then determined whether real mode or paging is active based on whether the pg_mode variable was 0.
Unfortunately pg_mode is 0 in two situations:
1) Paging is disabled (real mode) 2) Paging is in 2-level paging mode (32bit without PAE)
That means the walker now assumed that 2-level paging mode was real mode, breaking NetBSD as well as Windows XP.
To fix that, this patch adds a new PG flag to pg_mode which indicates whether paging is active at all and uses that to determine whether we are in real mode or not.
Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2654 Fixes: b56617bbcb4 ("target/i386: Walk NPT in guest real mode") Fixes: 01bfc2e2959 (commit b56617bbcb4 in stable-9.1.x series) Signed-off-by: Alexander Graf <graf@amazon.com> Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Link: https://lore.kernel.org/r/20241106154329.67218-1-graf@amazon.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 8fa11a4df344f58375eb26b3b65004345f21ef37) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
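A minimal sketch of the idea (flag names and values here are illustrative, not QEMU's actual PG_MODE_* definitions): encode "paging enabled" as its own bit so that a pg_mode of zero no longer doubles as "real mode".

    /* Illustrative flags only -- not QEMU's real PG_MODE_* values. */
    #define PG_MODE_PG   (1 << 0)   /* CR0.PG: paging enabled at all */
    #define PG_MODE_PAE  (1 << 1)
    #define PG_MODE_LMA  (1 << 2)

    static int compute_pg_mode(int cr0_pg, int cr4_pae, int efer_lma)
    {
        int pg_mode = 0;
        if (cr0_pg) {
            pg_mode |= PG_MODE_PG;
        }
        if (cr4_pae) {
            pg_mode |= PG_MODE_PAE;
        }
        if (efer_lma) {
            pg_mode |= PG_MODE_LMA;
        }
        return pg_mode;
    }

    /* Real mode is now "!(pg_mode & PG_MODE_PG)"; legacy 2-level paging
     * (CR0.PG=1, CR4.PAE=0) yields a non-zero pg_mode and is walked. */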
|
#
abb1565d |
| 16-Nov-2024 |
Peter Maydell <peter.maydell@linaro.org> |
Merge tag 'pull-tcg-20241116' of https://gitlab.com/rth7680/qemu into staging
cpu: ensure we don't call start_exclusive from cpu_exec
tcg: Allow top bit of SIMD_DATA_BITS to be set in simd_desc()
accel/tcg: Fix user-only probe_access_internal plugin check
linux-user: Fix setreuid and setregid to use direct syscalls
linux-user: Tolerate CONFIG_LSM_MMAP_MIN_ADDR
linux-user: Honor elf alignment when placing images
linux-user/*: Reduce vdso alignment to 4k
linux-user/arm: Select vdso for be8 and be32 modes
# -----BEGIN PGP SIGNATURE----- # # iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmc4z/8dHHJpY2hhcmQu # aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV/vWgf5Af8105enuWEdJ9c+ # KiyTsOWQEOKXTUSlSUxPs9FEeEr2l/mccvqUhiD7ptZq7P5/40+3tB18KXc5YuiE # 45CZGRAr/tjALGT5LidSYzm6RgljWXYlvWVShqKlQpOD2L0GP5k8a7KEKsT3SLtS # 9l+SVvjNOE+Jv23FWSOVYq0K0e5dPKzS1gtviCg+obA56dsiSKiEwwg+a5ca6oRe # 9SUKoRnudpUv3fiYo8yZaHPW0ADhsITAB20ncN+cI9t4li9q5AWUbPZ+ADP113+2 # pWlco1VqR4pONK2UgbSmxDtjQf1GBi7E2MBFBjBMxTaiw/jXAZcZGIK4geZYKdHT # NJj/0Q== # =oKCm # -----END PGP SIGNATURE----- # gpg: Signature made Sat 16 Nov 2024 17:01:51 GMT # gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F # gpg: issuer "richard.henderson@linaro.org" # gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full] # Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* tag 'pull-tcg-20241116' of https://gitlab.com/rth7680/qemu:
  tcg: Allow top bit of SIMD_DATA_BITS to be set in simd_desc()
  linux-user/arm: Select vdso for be8 and be32 modes
  linux-user/ppc: Reduce vdso alignment to 4k
  linux-user/loongarch64: Reduce vdso alignment to 4k
  linux-user/arm: Reduce vdso alignment to 4k
  linux-user/aarch64: Reduce vdso alignment to 4k
  linux-user: Drop image_info.alignment
  linux-user: Honor elf alignment when placing images
  cpu: ensure we don't call start_exclusive from cpu_exec
  target/i386: fix hang when using slow path for ptw_setl
  tests/tcg: Test that sigreturn() does not corrupt the signal mask
  linux-user: Tolerate CONFIG_LSM_MMAP_MIN_ADDR
  accel/tcg: Fix user-only probe_access_internal plugin check
  target/arm: Drop user-only special case in sve_stN_r
  linux-user: Fix setreuid and setregid to use direct syscalls
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
#
7ba055b4 |
| 25-Oct-2024 |
Pierrick Bouvier <pierrick.bouvier@linaro.org> |
target/i386: fix hang when using slow path for ptw_setl
When instrumenting memory accesses for plugins, we force memory accesses to use the slow path for the mmu [1]. This creates a situation where we end up calling ptw_setl_slow. This was fixed recently in [2], but the issue could still appear outside of the plugins use case.
Since this function gets called during a cpu_exec, start_exclusive then hangs. This exclusive section was introduced initially for security reasons [3].
I suspect this code path was never triggered, because ptw_setl_slow would always be called transitively from cpu_exec, resulting in a hang.
[1] https://gitlab.com/qemu-project/qemu/-/commit/6d03226b42247b68ab2f0b3663e0f624335a4055 [2] https://gitlab.com/qemu-project/qemu/-/commit/115ade42d50144c15b74368d32dc734ea277d853 [3] https://gitlab.com/qemu-project/qemu/-/issues/279
Fixes: https://gitlab.com/qemu-project/qemu/-/issues/2566 Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20241025175857.2554252-2-pierrick.bouvier@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
#
f0cfd067 |
| 09-Nov-2024 |
Peter Maydell <peter.maydell@linaro.org> |
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* i386: fix -M isapc with ubsan * i386: add sha512, sm3, sm4 feature bits * eif: fix Coverity issues * i386/hvf: x2APIC support * i386/hvf: fixes * i386/tcg: fix 2-stage page walk * eif: fix coverity issues * rust: fix subproject warnings with new rust, avoid useless cmake fallback
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmcvEHYUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroNn4AgAl+GaD/fHHU+9TCyKRg1Ux/iTSkqh # PBs76H2w879TDeuPkKZlnYqc7n85rlh1cJwQz01X79OFEeXP6oHiI9Q6qyflSxF0 # V+DrJhZc1CtZBChx9ZUMWUAWjYJFFjNwYA7/LLuLl6RfOm8bIJUWIhDjliJ4Bcea # 5VI13OtTvYvVurRLUBXWU0inh9KLHIw4RlNgi8Pmb2wNXkPxENpWjsGqWH0jlKS5 # ZUNgTPx/eY5MDwKoAyif2gsdfJlxGxgkpz3Mic4EGE9cw1cRASI3tKb3KH61hNTE # K21UI0+/+kv27cPnpZzYMDSkrJs7PEgVJ/70NRmAJySA76IG3XSsb5+xZg== # =pI4/ # -----END PGP SIGNATURE----- # gpg: Signature made Sat 09 Nov 2024 07:34:14 GMT # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  hw/i386/pc: Don't try to init PCI NICs if there is no PCI bus
  rust: qemu-api-macros: always process subprojects before dependencies
  i386/hvf: Removes duplicate/shadowed variables in hvf_vcpu_exec
  i386/hvf: Raise exception on error setting APICBASE
  i386/hvf: Fixes startup memory leak (vmcs caps)
  i386/hvf: Fix for UB in handling CPUID function 0xD
  i386/hvf: Integrates x2APIC support with hvf accel
  eif: cope with huge section sizes
  eif: cope with huge section offsets
  target/i386: Fix legacy page table walk
  rust: add meson_version to all subprojects
  target/i386/hvf: fix clang compilation warning
  target/i386: add sha512, sm3, sm4 feature bits
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
#
8fa11a4d |
| 06-Nov-2024 |
Alexander Graf <graf@amazon.com> |
target/i386: Fix legacy page table walk
Commit b56617bbcb4 ("target/i386: Walk NPT in guest real mode") added logic to run the page table walker even in real mode if we are in NPT mode. That function then determined whether real mode or paging is active based on whether the pg_mode variable was 0.
Unfortunately pg_mode is 0 in two situations:
1) Paging is disabled (real mode) 2) Paging is in 2-level paging mode (32bit without PAE)
That means the walker now assumed that 2-level paging mode was real mode, breaking NetBSD as well as Windows XP.
To fix that, this patch adds a new PG flag to pg_mode which indicates whether paging is active at all and uses that to determine whether we are in real mode or not.
Cc: qemu-stable@nongnu.org Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2654 Fixes: b56617bbcb4 ("target/i386: Walk NPT in guest real mode") Signed-off-by: Alexander Graf <graf@amazon.com> Reported-by: Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk> Link: https://lore.kernel.org/r/20241106154329.67218-1-graf@amazon.com Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
Revision tags: v9.1.1 |
|
#
3a41aa82 |
| 13-Oct-2024 |
Richard Henderson <richard.henderson@linaro.org> |
target/i386: Use probe_access_full_mmu in ptw_translate
The probe_access_full_mmu function was designed for this purpose, and does not report the memory operation event to plugins.
Cc: qemu-stable@nongnu.org Fixes: 6d03226b422 ("plugins: force slow path when plugins instrument memory ops") Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20241013184733.1423747-3-richard.henderson@linaro.org> (cherry picked from commit 115ade42d50144c15b74368d32dc734ea277d853) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
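Conceptually (a hedged sketch with made-up names, not QEMU's plugin API): a page-table-walk load is internal bookkeeping, so the translation-only probe skips the instrumentation hook that a normal guest access would trigger.

    #include <stdbool.h>
    #include <stdint.h>

    typedef void (*mem_cb)(uint64_t vaddr);

    /* Translate an address; 'guest_visible' distinguishes a real guest
     * load/store (instrumented) from a walker-internal probe. */
    static uint64_t translate_addr(uint64_t vaddr, bool guest_visible, mem_cb cb)
    {
        uint64_t paddr = vaddr;      /* stand-in for the actual lookup */
        if (guest_visible && cb) {
            cb(vaddr);               /* plugins only see guest-visible accesses */
        }
        return paddr;
    }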
|
#
01bfc2e2 |
| 21-Sep-2024 |
Alexander Graf <graf@amazon.com> |
target/i386: Walk NPT in guest real mode
When translating virtual to physical address with a guest CPU that supports nested paging (NPT), we need to perform every page table walk access indirectly through the NPT, which we correctly do.
However, we treat real mode (no page table walk) specially: in that case, we currently just skip any walks and translate VA -> PA directly. With NPT enabled, we then also need to perform an NPT walk to do GVA -> GPA -> HPA, which we fail to do so far.
The net result of that is that TCG VMs with NPT enabled that execute real mode code (like SeaBIOS) end up with GPA==HPA mappings which means the guest accesses host code and data. This typically shows as failure to boot guests.
This patch changes the page walk logic for NPT enabled guests so that we always perform a GVA -> GPA translation and then skip any logic that requires an actual PTE.
That way, all remaining logic to walk the NPT stays and we successfully walk the NPT in real mode.
Cc: qemu-stable@nongnu.org Fixes: fe441054bb3f0 ("target-i386: Add NPT support") Signed-off-by: Alexander Graf <graf@amazon.com> Reported-by: Eduard Vlad <evlad@amazon.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20240921085712.28902-1-graf@amazon.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> (cherry picked from commit b56617bbcb473c25815d1bf475e326f84563b1de) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
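A condensed sketch of the intended flow (hypothetical helper names; QEMU's real walker is considerably more involved): even when guest paging is off, an NPT guest still needs the GVA -> GPA -> HPA second stage.

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t guest_walk(uint64_t gva)
    {
        return gva;    /* placeholder: a real walker reads guest page tables */
    }

    static uint64_t npt_walk(uint64_t gpa)
    {
        return gpa;    /* placeholder: a real walker reads the nested page tables */
    }

    static uint64_t translate(uint64_t gva, bool guest_paging, bool npt_enabled)
    {
        /* Stage 1: guest page tables; skipped in real mode, where GVA == GPA. */
        uint64_t gpa = guest_paging ? guest_walk(gva) : gva;

        /* Stage 2: with NPT the nested walk must run in either case; the bug
         * was returning gpa directly whenever stage 1 had been skipped. */
        return npt_enabled ? npt_walk(gpa) : gpa;
    }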
|
#
6b375650 |
| 24-Oct-2024 |
Peter Maydell <peter.maydell@linaro.org> |
Merge tag 'pull-tcg-20241022' of https://gitlab.com/rth7680/qemu into staging
tcg: Reset data_gen_ptr correctly
tcg/riscv: Implement host vector support
tcg/ppc: Fix tcg_out_rlw_rc
target/i386: Walk NPT in guest real mode
target/i386: Use probe_access_full_mmu in ptw_translate
linux-user: Fix build failure caused by missing __u64 on musl
linux-user: Emulate /proc/self/maps under mmap_lock
linux-user/riscv: Fix definition of RISCV_HWPROBE_EXT_ZVFHMIN
linux-user/ppc: Fix sigmask endianness issue in sigreturn
# -----BEGIN PGP SIGNATURE----- # # iQFRBAABCgA7FiEEekgeeIaLTbaoWgXAZN846K9+IV8FAmcYbccdHHJpY2hhcmQu # aGVuZGVyc29uQGxpbmFyby5vcmcACgkQZN846K9+IV97TwgAmg27QFCdiTrqZgs2 # P1AO40zTgyTAwWx2gykaEuDWNhz/uSWvlBRN0/636wqGPkbJtrRHYM26og4BAThh # o172/IwiZqfKOR1ndHl9j3BrtmrlIlaEEjiikqy1MTZF127irV6JWoJE1mSUrAxy # 3Cm1K4gnK/e1+LdWf4Lj+K2lE6PpAK/ppKggzOXhtEgKiH1l4bUCl/Fq54wqphUn # YS+cpmgQDCkXFfmPbQqie0HDpe3bhb75qIDQrbC5JcZdHqV73rTwSZvfUOmS/5Re # 18K6nfAXXT+Zm0IrJMey/7b1jUWF3nMUVCTuLvmhSOwBAkIvTVYHko9CjvLtM6YH # UHu3yA== # =V393 # -----END PGP SIGNATURE----- # gpg: Signature made Wed 23 Oct 2024 04:30:15 BST # gpg: using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F # gpg: issuer "richard.henderson@linaro.org" # gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full] # Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A 05C0 64DF 38E8 AF7E 215F
* tag 'pull-tcg-20241022' of https://gitlab.com/rth7680/qemu: (24 commits)
  linux-user/riscv: Fix definition of RISCV_HWPROBE_EXT_ZVFHMIN
  linux-user: Fix build failure caused by missing __u64 on musl
  linux-user: Trace rt_sigprocmask's sigsets
  linux-user/ppc: Fix sigmask endianness issue in sigreturn
  linux-user: Emulate /proc/self/maps under mmap_lock
  target/i386: Remove ra parameter from ptw_translate
  target/i386: Use probe_access_full_mmu in ptw_translate
  target/i386: Walk NPT in guest real mode
  include/exec: Improve probe_access_full{, _mmu} documentation
  tcg/ppc: Fix tcg_out_rlw_rc
  tcg/riscv: Enable native vector support for TCG host
  tcg/riscv: Implement vector roti/v/x ops
  tcg/riscv: Implement vector shi/s/v ops
  tcg/riscv: Implement vector min/max ops
  tcg/riscv: Implement vector sat/mul ops
  tcg/riscv: Accept constant first argument to sub_vec
  tcg/riscv: Implement vector neg ops
  tcg/riscv: Implement vector cmp/cmpsel ops
  tcg/riscv: Add support for basic vector opcodes
  tcg/riscv: Implement vector mov/dup{m/i}
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
#
e46fbc7d |
| 13-Oct-2024 |
Richard Henderson <richard.henderson@linaro.org> |
target/i386: Remove ra parameter from ptw_translate
This argument is no longer used.
Suggested-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20241013184733.1423747-4-richard.henderson@linaro.org>
|
#
115ade42 |
| 13-Oct-2024 |
Richard Henderson <richard.henderson@linaro.org> |
target/i386: Use probe_access_full_mmu in ptw_translate
The probe_access_full_mmu function was designed for this purpose, and does not report the memory operation event to plugins.
Cc: qemu-stable@nongnu.org Fixes: 6d03226b422 ("plugins: force slow path when plugins instrument memory ops") Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Alex Bennée <alex.bennee@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20241013184733.1423747-3-richard.henderson@linaro.org>
|
#
b56617bb |
| 21-Sep-2024 |
Alexander Graf <graf@amazon.com> |
target/i386: Walk NPT in guest real mode
When translating virtual to physical address with a guest CPU that supports nested paging (NPT), we need to perform every page table walk access indirectly through the NPT, which we correctly do.
However, we treat real mode (no page table walk) specially: in that case, we currently just skip any walks and translate VA -> PA directly. With NPT enabled, we then also need to perform an NPT walk to do GVA -> GPA -> HPA, which we fail to do so far.
The net result of that is that TCG VMs with NPT enabled that execute real mode code (like SeaBIOS) end up with GPA==HPA mappings which means the guest accesses host code and data. This typically shows as failure to boot guests.
This patch changes the page walk logic for NPT enabled guests so that we always perform a GVA -> GPA translation and then skip any logic that requires an actual PTE.
That way, all remaining logic to walk the NPT stays and we successfully walk the NPT in real mode.
Cc: qemu-stable@nongnu.org Fixes: fe441054bb3f0 ("target-i386: Add NPT support") Signed-off-by: Alexander Graf <graf@amazon.com> Reported-by: Eduard Vlad <evlad@amazon.de> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-ID: <20240921085712.28902-1-graf@amazon.com> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
Revision tags: v9.1.0 |
|
#
873f9ca3 |
| 06-May-2024 |
Richard Henderson <richard.henderson@linaro.org> |
Merge tag 'accel-20240506' of https://github.com/philmd/qemu into staging
Accelerator patches
- Extract page-protection definitions to page-protection.h - Rework in accel/tcg in preparation of extracting TCG fields from CPUState - More uses of get_task_state() in user emulation - Xen refactors in preparation for adding multiple map caches (Juergen & Edgar) - MAINTAINERS updates (Aleksandar and Bin)
# -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmY40CAACgkQ4+MsLN6t # wN5drxAA1oIsuUzpAJmlMIxZwlzbICiuexgn/HH9DwWNlrarKo7V1l4YB8jd9WOg # IKuj7c39kJKsDEB8BXApYwcly+l7DYdnAAI8Z7a+eN+ffKNl/0XBaLjsGf58RNwY # fb39/cXWI9ZxKxsHMSyjpiu68gOGvZ5JJqa30Fr+eOGuug9Fn/fOe1zC6l/dMagy # Dnym72stpD+hcsN5sVwohTBIk+7g9og1O/ctRx6Q3ZCOPz4p0+JNf8VUu43/reaR # 294yRK++JrSMhOVFRzP+FH1G25NxiOrVCFXZsUTYU+qPDtdiKtjH1keI/sk7rwZ7 # U573lesl7ewQFf1PvMdaVf0TrQyOe6kUGr9Mn2k8+KgjYRAjTAQk8V4Ric/+xXSU # 0rd7Cz7lyQ8jm0DoOElROv+lTDQs4dvm3BopF3Bojo4xHLHd3SFhROVPG4tvGQ3H # 72Q5UPR2Jr2QZKiImvPceUOg0z5XxoN6KRUkSEpMFOiTRkbwnrH59z/qPijUpe6v # 8l5IlI9GjwkL7pcRensp1VC6e9KC7F5Od1J/2RLDw3UQllMQXqVw2bxD3CEtDRJL # QSZoS4d1jUCW4iAYdqh/8+2cOIPiCJ4ai5u7lSdjrIJkRErm32FV/pQLZauoHlT5 # eTPUgzDoRXVgI1X1slTpVXlEEvRNbhZqSkYLkXr80MLn5hTafo0= # =3Qkg # -----END PGP SIGNATURE----- # gpg: Signature made Mon 06 May 2024 05:42:08 AM PDT # gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE # gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
* tag 'accel-20240506' of https://github.com/philmd/qemu: (28 commits)
  MAINTAINERS: Update my email address
  MAINTAINERS: Update Aleksandar Rikalo email
  system: Pass RAM MemoryRegion and is_write in xen_map_cache()
  xen: mapcache: Break out xen_map_cache_init_single()
  xen: mapcache: Break out xen_invalidate_map_cache_single()
  xen: mapcache: Refactor xen_invalidate_map_cache_entry_unlocked
  xen: mapcache: Refactor xen_replace_cache_entry_unlocked
  xen: mapcache: Break out xen_ram_addr_from_mapcache_single
  xen: mapcache: Refactor xen_remap_bucket for multi-instance
  xen: mapcache: Refactor xen_map_cache for multi-instance
  xen: mapcache: Refactor lock functions for multi-instance
  xen: let xen_ram_addr_from_mapcache() return -1 in case of not found entry
  system: let qemu_map_ram_ptr() use qemu_ram_ptr_length()
  user: Use get_task_state() helper
  user: Declare get_task_state() once in 'accel/tcg/vcpu-state.h'
  user: Forward declare TaskState type definition
  accel/tcg: Move @plugin_mem_cbs from CPUState to CPUNegativeOffsetState
  accel/tcg: Restrict cpu_plugin_mem_cbs_enabled() to TCG
  accel/tcg: Restrict qemu_plugin_vcpu_exit_hook() to TCG plugins
  accel/tcg: Update CPUNegativeOffsetState::can_do_io field documentation
  ...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
|
#
74781c08 |
| 06-Dec-2023 |
Philippe Mathieu-Daudé <philmd@linaro.org> |
exec/cpu: Extract page-protection definitions to page-protection.h
Extract page-protection definitions from "exec/cpu-all.h" to "exec/page-protection.h".
The list of files requiring the new header was generated using:
    $ git grep -wE \
      'PAGE_(READ|WRITE|EXEC|RWX|VALID|ANON|RESERVED|TARGET_.|PASSTHROUGH)'
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org> Acked-by: Nicholas Piggin <npiggin@gmail.com> Acked-by: Richard Henderson <richard.henderson@linaro.org> Message-Id: <20240427155714.53669-3-philmd@linaro.org>
|
#
38a23eb3 |
| 26-Mar-2024 |
Peter Maydell <peter.maydell@linaro.org> |
Merge tag 'hw-misc-20240326' of https://github.com/philmd/qemu into staging
Misc HW patch queue
[hw] - Do not silently overwrite 'io_timeout' property in scsi-generic (Lorenz) - Propagate period when enabling a clock in stm32l4x5 mux (Arnaud, Phil) - Add missing smbios_get_table_legacy() stub (Igor) - Append a space in gpa2hva() HMP error message (Yao) - Fix compiler warning in 'execlog' plugin (Yao)
[target] - i386: Enable page walking from MMIO memory (Gregory, Jonathan) - tricore: Use correct string format in cpu_tlb_fill (Phil)
[docs] - Fix formatting in amigang.rst (Zoltan)
[ui] - Fix cocoa regression in platform fullscreen toggling (Akihiko)
# -----BEGIN PGP SIGNATURE----- # # iQIzBAABCAAdFiEE+qvnXhKRciHc/Wuy4+MsLN6twN4FAmYC7QoACgkQ4+MsLN6t # wN4WIw//cLw1caDa8ki3htGWGVI2P/QFdsvId7ah4Iul7znf6NWUORDBjvIvLpaF # sesPF7BRQ/qJFT5ttB8DsKc1IHw3IASAGL/NK3i7v9GkRiBJNJvQRO2rVfNmXvN8 # ++AZ/J+Y1OZ4Y1hcxXGUVIpwKzndR5Oz9zNXQ+C0CQqYljwxC3huB3m6C7vKOgeq # SNKVw/hrTBYLzyvooKqLb6Xual2+olSwc2/BwqUOOCP6Y1HmgQeWy5ckJqsuVS2T # 5q5VtkduBCsUhgmflsLF3LCKrNTQUw+jOAGH2gyRvXMjmvwCmNy5xo8eOD0iTwXb # +Ffp/kfqm2N1QwNWcBi39+BU+Plti03mnL7C9jNzaEBaQ9Q7wMNqASN0daHSk3vh # 4Vw/FsaUJAohInKYghCgO0fUVpeLis+8p5lDD7QwAL9tiYk7/tgrbtyNLb+m/3P9 # pPNGt9Fnijg+/zDDfjVYwtDMRbL0df7SqTjhJW3TIQ+d83tmoVuCDmBysEXywzSt # 5e4yyjDf8q1C23yipK9I84wuvWjfIDYIPSUzCKaZYf4xbdx7GyNaOoOqWZYpordD # ua/4hRuQ4AcDuCe3XBKsmAex6wpYodjnfEi5Y5uV8vyPfeiVQodY/07pok/NZjEL # tUNy3EkQFqXxT1ctT7FhN2eh6WjSo0SJFtIjVDarJ0mUkS4VXgM= # =ccz/ # -----END PGP SIGNATURE----- # gpg: Signature made Tue 26 Mar 2024 15:43:06 GMT # gpg: using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE # gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full] # Primary key fingerprint: FAAB E75E 1291 7221 DCFD 6BB2 E3E3 2C2C DEAD C0DE
* tag 'hw-misc-20240326' of https://github.com/philmd/qemu:
  ui/cocoa: Use NSTrackingInVisibleRect
  ui/cocoa: Resize window after toggling zoom-to-fit
  ui/cocoa: Fix aspect ratio
  hw/smbios: add stub for smbios_get_table_legacy()
  contrib/plugins/execlog: Fix compiler warning
  docs/system/ppc/amigang.rst: Fix formatting
  hw/misc/stm32l4x5_rcc: Propagate period when enabling a clock
  hw/misc/stm32l4x5_rcc: Inline clock_update() in clock_mux_update()
  hw/clock: Let clock_set_mul_div() return a boolean value
  target/tricore/helper: Use correct string format in cpu_tlb_fill()
  monitor/hmp-cmds-target: Append a space in error message in gpa2hva()
  hw/scsi/scsi-generic: Fix io_timeout property not applying
  target/i386/tcg: Enable page walking from MMIO memory
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
#
9dab7bbb |
| 07-Mar-2024 |
Gregory Price <gregory.price@memverge.com> |
target/i386/tcg: Enable page walking from MMIO memory
CXL emulation of interleave requires read and write hooks due to requirement for subpage granularity. The Linux kernel stack now enables using this memory as conventional memory in a separate NUMA node. If a process is deliberately forced to run from that node

    $ numactl --membind=1 ls

the page table walk on i386 fails.
Useful part of backtrace:
(cpu=cpu@entry=0x555556fd9000, fmt=fmt@entry=0x555555fe3378 "cpu_io_recompile: could not find TB for pc=%p") at ../../cpu-target.c:359
(retaddr=0, addr=19595792376, attrs=..., xlat=<optimized out>, cpu=0x555556fd9000, out_offset=<synthetic pointer>) at ../../accel/tcg/cputlb.c:1339
(cpu=0x555556fd9000, full=0x7fffee0d96e0, ret_be=ret_be@entry=0, addr=19595792376, size=size@entry=8, mmu_idx=4, type=MMU_DATA_LOAD, ra=0) at ../../accel/tcg/cputlb.c:2030
(cpu=cpu@entry=0x555556fd9000, p=p@entry=0x7ffff56fddc0, mmu_idx=<optimized out>, type=type@entry=MMU_DATA_LOAD, memop=<optimized out>, ra=ra@entry=0) at ../../accel/tcg/cputlb.c:2356
(cpu=cpu@entry=0x555556fd9000, addr=addr@entry=19595792376, oi=oi@entry=52, ra=ra@entry=0, access_type=access_type@entry=MMU_DATA_LOAD) at ../../accel/tcg/cputlb.c:2439
at ../../accel/tcg/ldst_common.c.inc:301
at ../../target/i386/tcg/sysemu/excp_helper.c:173
(err=0x7ffff56fdf80, out=0x7ffff56fdf70, mmu_idx=0, access_type=MMU_INST_FETCH, addr=18446744072116178925, env=0x555556fdb7c0) at ../../target/i386/tcg/sysemu/excp_helper.c:578
(cs=0x555556fd9000, addr=18446744072116178925, size=<optimized out>, access_type=MMU_INST_FETCH, mmu_idx=0, probe=<optimized out>, retaddr=0) at ../../target/i386/tcg/sysemu/excp_helper.c:604
Avoid this by plumbing the address all the way down from x86_cpu_tlb_fill() where is available as retaddr to the actual accessors which provide it to probe_access_full() which already handles MMIO accesses.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2180 Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2220 Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Suggested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Gregory Price <gregory.price@memverge.com> Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Message-ID: <20240307155304.31241-2-Jonathan.Cameron@huawei.com> Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
|
#
decafac4 |
| 22-Dec-2023 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: leave the A20 bit set in the final NPT walk
The A20 mask is only applied to the final memory access. Nested page tables are always walked with the raw guest-physical address.
Unlike the previous patch, in this one the masking must be kept, but it was done too early.
Cc: qemu-stable@nongnu.org Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18) Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit b5a9de3259f4c791bde2faff086dd5737625e41e) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
|
#
6801a20e |
| 22-Dec-2023 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: remove unnecessary/wrong application of the A20 mask
If ptw_translate() does a MMU_PHYS_IDX access, the A20 mask is already applied in get_physical_address(), which is called via probe_access_full() and x86_cpu_tlb_fill().
If ptw_translate() on the other hand does a MMU_NESTED_IDX access, the A20 mask must not be applied to the address that is looked up in the nested page tables; it must be applied only to the addresses that hold the NPT entries (which is achieved via MMU_PHYS_IDX, per the previous paragraph).
Therefore, we can remove A20 masking from the computation of the page table entry's address, and let get_physical_address() or mmu_translate() apply it when they know they are returning a host-physical address.
Cc: qemu-stable@nongnu.org Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18) Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit a28fe7dc1939333c81b895cdced81c69eb7c5ad0) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
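A compact sketch of where the A20 mask belongs in this scheme (simplified, hand-written helpers, not QEMU's code): the gate clears address bit 20 only when the result is a host-physical address, never on the guest-physical addresses fed to the nested walk or used to compute PTE locations.

    #include <stdbool.h>
    #include <stdint.h>

    /* The A20 gate, when disabled, forces physical address bit 20 to 0. */
    static uint64_t apply_a20(uint64_t paddr, bool a20_enabled)
    {
        return a20_enabled ? paddr : (paddr & ~(UINT64_C(1) << 20));
    }

    /* Simplified final step of a walk: the nested (NPT) lookup sees the
     * raw guest-physical address; only the host-physical result passes
     * through the A20 gate. */
    static uint64_t finish_walk(uint64_t gpa, bool nested, bool a20_enabled,
                                uint64_t (*nested_walk)(uint64_t))
    {
        uint64_t hpa = nested ? nested_walk(gpa) : gpa;
        return apply_a20(hpa, a20_enabled);
    }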
|
#
a28b6b4e |
| 22-Dec-2023 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: Fix physical address truncation
The address translation logic in get_physical_address() will currently truncate physical addresses to 32 bits unless long mode is enabled. This is incorrect when using physical address extensions (PAE) outside of long mode, with the result that a 32-bit operating system using PAE to access memory above 4G will experience undefined behaviour.
The truncation code was originally introduced in commit 33dfdb5 ("x86: only allow real mode to access 32bit without LMA"), where it applied only to translations performed while paging is disabled (and so cannot affect guests using PAE).
Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX") rearranged the code such that the truncation also applied to the use of MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use atomic operations for pte updates") brought this truncation into scope for page table entry accesses, and is the first commit for which a Windows 10 32-bit guest will reliably fail to boot if memory above 4G is present.
The truncation code however is not completely redundant. Even though the maximum address size for any executed instruction is 32 bits, helpers for operations such as BOUND, FSAVE or XSAVE may ask get_physical_address() to translate an address outside of the 32-bit range, if invoked with an argument that is close to the 4G boundary. Likewise for processor accesses, for example TSS or IDT accesses, when EFER.LMA==0.
So, move the address truncation in get_physical_address() so that it applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040 Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18) Cc: qemu-stable@nongnu.org Co-developed-by: Michael Brown <mcb30@ipxe.org> Signed-off-by: Michael Brown <mcb30@ipxe.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit b1661801c184119a10ad6cbc3b80330fc22e7b2c) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru> (Mjt: drop unrelated change in target/i386/cpu.c)
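A boiled-down version of the resulting rule (illustrative names, not QEMU's MMU index constants): addresses translated through a 32-bit MMU index are truncated to 32 bits, while physical and nested indexes, which may legitimately carry full guest-physical addresses (e.g. PAE above 4G), are not.

    #include <stdint.h>

    /* Illustrative MMU "kinds"; QEMU uses numeric MMU index constants. */
    typedef enum { MMU_32BIT, MMU_64BIT, MMU_PHYS, MMU_NESTED } mmu_kind;

    static uint64_t clamp_addr(uint64_t addr, mmu_kind kind)
    {
        /* Truncate only for 32-bit guest-visible accesses; PTE and NPT
         * accesses must keep the full (possibly >4G) address. */
        if (kind == MMU_32BIT) {
            return (uint32_t)addr;
        }
        return addr;
    }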
|
#
6ed82113 |
| 22-Dec-2023 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: mask high bits of CR3 in 32-bit mode
CR3 bits 63:32 are ignored in 32-bit mode (either legacy 2-level paging or PAE paging). Do this in mmu_translate() to remove the last case where get_physical_address() meaningfully drops the high bits of the address.
Cc: qemu-stable@nongnu.org Suggested-by: Richard Henderson <richard.henderson@linaro.org> Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18) Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> (cherry picked from commit 68fb78d7d5723066ec2cacee7d25d67a4143b42f) Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
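In other words (a tiny sketch with a hypothetical helper name): outside long mode, the walker derives its root from CR3's low 32 bits, whether 2-level or PAE paging is in use.

    #include <stdbool.h>
    #include <stdint.h>

    static uint64_t walk_root_from_cr3(uint64_t cr3, bool lma)
    {
        /* Bits 63:32 of CR3 are ignored outside long mode. */
        return lma ? cr3 : (cr3 & UINT64_C(0xffffffff));
    }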
|
#
bfe8020c |
| 28-Feb-2024 |
Peter Maydell <peter.maydell@linaro.org> |
Merge tag 'for-upstream' of https://gitlab.com/bonzini/qemu into staging
* target/i386: Fix physical address truncation on 32-bit PAE * Remove globals for options -no-fd-bootchk and -win2k-hack
# -----BEGIN PGP SIGNATURE----- # # iQFIBAABCAAyFiEE8TM4V0tmI4mGbHaCv/vSX3jHroMFAmXebwQUHHBib256aW5p # QHJlZGhhdC5jb20ACgkQv/vSX3jHroPozAf/Vgc9u6C+8PcPDrol6qxjI+EOHLNy # 7M3/OFpUkwLXuOSawb6syYxHpLS38fKRcsb2ninngUmbRWA6p+KNUizlAFMj7op5 # wJmtdamCwCwXXaw20SfWxx2Ih0JS7FQsRsU94HTOdaDB17C9+hBcYwcggsOAXCmq # gyVenEF1mov2A4jLMhdVIRX784AAoEP+QAuhBKQBrQwRLCTTyNdHl7jXdB9w+2sh # KafokoFLcozJHz/tN3AhRKy6zjPugJyQmJwBRuj9tstCILtXpvf/ZE/3pUq5l3ZY # A6dCI0zWAlGNTkpKRXsMFozNIVP2htnyidy29XHptlY5acfjtQ++rMu3BQ== # =WY4H # -----END PGP SIGNATURE----- # gpg: Signature made Tue 27 Feb 2024 23:23:48 GMT # gpg: using RSA key F13338574B662389866C7682BFFBD25F78C7AE83 # gpg: issuer "pbonzini@redhat.com" # gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>" [full] # gpg: aka "Paolo Bonzini <pbonzini@redhat.com>" [full] # Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1 # Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83
* tag 'for-upstream' of https://gitlab.com/bonzini/qemu:
  ide, vl: turn -win2k-hack into a property on IDE devices
  ide: collapse parameters to ide_init_drive
  target/i386: leave the A20 bit set in the final NPT walk
  target/i386: remove unnecessary/wrong application of the A20 mask
  target/i386: Fix physical address truncation
  target/i386: use separate MMU indexes for 32-bit accesses
  target/i386: introduce function to query MMU indices
  target/i386: check validity of VMCB addresses
  target/i386: mask high bits of CR3 in 32-bit mode
  vl, pc: turn -no-fd-bootchk into a machine property
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
|
#
b5a9de32 |
| 22-Dec-2023 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: leave the A20 bit set in the final NPT walk
The A20 mask is only applied to the final memory access. Nested page tables are always walked with the raw guest-physical address.
Unlike the previous patch, in this one the masking must be kept, but it was done too early.
Cc: qemu-stable@nongnu.org Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18) Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
a28fe7dc |
| 22-Dec-2023 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: remove unnecessary/wrong application of the A20 mask
If ptw_translate() does a MMU_PHYS_IDX access, the A20 mask is already applied in get_physical_address(), which is called via probe_access_full() and x86_cpu_tlb_fill().
If ptw_translate() on the other hand does a MMU_NESTED_IDX access, the A20 mask must not be applied to the address that is looked up in the nested page tables; it must be applied only to the addresses that hold the NPT entries (which is achieved via MMU_PHYS_IDX, per the previous paragraph).
Therefore, we can remove A20 masking from the computation of the page table entry's address, and let get_physical_address() or mmu_translate() apply it when they know they are returning a host-physical address.
Cc: qemu-stable@nongnu.org Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18) Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
b1661801 |
| 22-Dec-2023 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: Fix physical address truncation
The address translation logic in get_physical_address() will currently truncate physical addresses to 32 bits unless long mode is enabled. This is incorrect when using physical address extensions (PAE) outside of long mode, with the result that a 32-bit operating system using PAE to access memory above 4G will experience undefined behaviour.
The truncation code was originally introduced in commit 33dfdb5 ("x86: only allow real mode to access 32bit without LMA"), where it applied only to translations performed while paging is disabled (and so cannot affect guests using PAE).
Commit 9828198 ("target/i386: Add MMU_PHYS_IDX and MMU_NESTED_IDX") rearranged the code such that the truncation also applied to the use of MMU_PHYS_IDX and MMU_NESTED_IDX. Commit 4a1e9d4 ("target/i386: Use atomic operations for pte updates") brought this truncation into scope for page table entry accesses, and is the first commit for which a Windows 10 32-bit guest will reliably fail to boot if memory above 4G is present.
The truncation code however is not completely redundant. Even though the maximum address size for any executed instruction is 32 bits, helpers for operations such as BOUND, FSAVE or XSAVE may ask get_physical_address() to translate an address outside of the 32-bit range, if invoked with an argument that is close to the 4G boundary. Likewise for processor accesses, for example TSS or IDT accesses, when EFER.LMA==0.
So, move the address truncation in get_physical_address() so that it applies to 32-bit MMU indexes, but not to MMU_PHYS_IDX and MMU_NESTED_IDX.
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2040 Fixes: 4a1e9d4d11c ("target/i386: Use atomic operations for pte updates", 2022-10-18) Cc: qemu-stable@nongnu.org Co-developed-by: Michael Brown <mcb30@ipxe.org> Signed-off-by: Michael Brown <mcb30@ipxe.org> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
|
#
90f64153 |
| 02-Jan-2024 |
Paolo Bonzini <pbonzini@redhat.com> |
target/i386: use separate MMU indexes for 32-bit accesses
Accesses from a 32-bit environment (32-bit code segment for instruction accesses, EFER.LMA==0 for processor accesses) have to mask away the upper 32 bits of the address. While a bit wasteful, the easiest way to do so is to use separate MMU indexes. These days, QEMU anyway is compiled with a fixed value for NB_MMU_MODES. Split MMU_USER_IDX, MMU_KSMAP_IDX and MMU_KNOSMAP_IDX in two.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
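A schematic of the trade-off (hand-written, not QEMU's actual index layout): with split indexes the translation code can key the "mask to 32 bits" decision purely off the MMU index, at the cost of doubling the number of indexes.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative layout: each logical mode gets a 32-bit and a 64-bit
     * variant, so the index alone says whether to mask the address. */
    enum {
        IDX_USER64, IDX_USER32,
        IDX_KERNEL64, IDX_KERNEL32,
    };

    static bool idx_is_32bit(int mmu_idx)
    {
        return mmu_idx == IDX_USER32 || mmu_idx == IDX_KERNEL32;
    }

    static uint64_t canonical_addr(uint64_t addr, int mmu_idx)
    {
        return idx_is_32bit(mmu_idx) ? (uint32_t)addr : addr;
    }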
|