Revision tags: v9.2.0, v9.1.2

# 6d62f309 | 05-Nov-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Add new MMU indexes for AArch32 Secure PL1&0
Our current usage of MMU indexes when EL3 is AArch32 is confused. Architecturally, when EL3 is AArch32, all Secure code runs under the Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the other privileged modes (PL1)
 * code at EL0 (Secure PL0)
This is different from when EL3 is AArch64, in which case EL3 is its own translation regime, and EL1 and EL0 (whether AArch32 or AArch64) have their own regime.
We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't do anything special about Secure PL0, which meant it used the same ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug where arm_sctlr() incorrectly picked the NonSecure SCTLR as the controlling register when in Secure PL0, which meant we were spuriously generating alignment faults because we were looking at the wrong SCTLR control bits.
The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that we wouldn't honour the PAN bit for Secure PL1, because there's no equivalent _PAN mmu index for it.
Fix this by adding two new MMU indexes:
 * ARMMMUIdx_E30_0 is for Secure PL0
 * ARMMMUIdx_E30_3_PAN is for Secure PL1 when PAN is enabled
The existing ARMMMUIdx_E3 is used to mean "Secure PL1 without PAN" (and would be named ARMMMUIdx_E30_3 in an AArch32-centric scheme).
These extra two indexes bring us up to the maximum of 16 that the core code can currently support.
This commit:
 * adds the new MMU index handling to the various places where we deal in MMU index values
 * adds assertions that we aren't AArch32 EL3 in a couple of places that currently use the E10 indexes, to document why they don't also need to handle the E30 indexes
 * documents in a comment why regime_has_2_ranges() doesn't need updating
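For readers unfamiliar with the index scheme, the mapping described above can be pictured with a short standalone sketch. This is illustrative only, not QEMU's implementation: the three index names come from the commit text, while the helper, its parameters and the main() harness are invented.

```c
/* Illustrative sketch only -- not QEMU's implementation. The three
 * enum names come from the commit text; the helper, its parameters
 * and main() are invented for this example. */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    ARMMMUIdx_E30_0,     /* Secure PL1&0 regime, running at PL0 */
    ARMMMUIdx_E3,        /* Secure PL1&0 regime, running at PL1, PAN off */
    ARMMMUIdx_E30_3_PAN, /* Secure PL1&0 regime, running at PL1, PAN on */
} SecurePL10Idx;

static SecurePL10Idx secure_pl10_mmu_idx(bool at_pl0, bool pan)
{
    if (at_pl0) {
        return ARMMMUIdx_E30_0;
    }
    return pan ? ARMMMUIdx_E30_3_PAN : ARMMMUIdx_E3;
}

int main(void)
{
    printf("Secure PL0     -> %d\n", secure_pl10_mmu_idx(true, false));
    printf("Secure PL1     -> %d\n", secure_pl10_mmu_idx(false, false));
    printf("Secure PL1+PAN -> %d\n", secure_pl10_mmu_idx(false, true));
    return 0;
}
```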
Notes for backporting: this commit depends on the preceding revert of 4c2c04746932; that revert and this commit should probably be backported to everywhere that we originally backported 4c2c04746932.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2588
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241101142845.1712482-3-peter.maydell@linaro.org
(cherry picked from commit efbe180ad2ed75d4cc64dfc6fb46a015eef713d1)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
# f147ed37 | 05-Nov-2024 | Peter Maydell <peter.maydell@linaro.org>
Revert "target/arm: Fix usage of MMU indexes when EL3 is AArch32"
This reverts commit 4c2c0474693229c1f533239bb983495c5427784d.
This commit tried to fix a problem with our usage of MMU indexes when EL3 is AArch32, using what it described as a "more complicated approach" where we share the same MMU index values for Secure PL1&0 and NonSecure PL1&0. In theory this should work, but the change didn't account for (at least) two things:
(1) The design change means we need to flush the TLBs at any point where the CPU state flips from one to the other. We already flush the TLB when SCR.NS is changed, but we don't flush the TLB when we take an exception from NS PL1&0 into Mon or when we return from Mon to NS PL1&0, and the commit didn't add any code to do that.
(2) The ATS12NS* address translate instructions allow Mon code (which is Secure) to do a stage 1+2 page table walk for NS. I thought this was OK because do_ats_write() does a page table walk which doesn't use the TLBs, so because it can pass both the MMU index and also an ARMSecuritySpace argument we can tell the table walk that we want NS stage1+2, not S. But that means that all the code within the ptw that needs to find e.g. the regime EL cannot do so only with an mmu_idx -- all these functions like regime_sctlr(), regime_el(), etc would need to pass both an mmu_idx and the security_space, so they can tell whether this is a translation regime controlled by EL1 or EL3 (and so whether to look at SCTLR.S or SCTLR.NS, etc).
In particular, because regime_el() wasn't updated to look at the ARMSecuritySpace it would return 1 even when the CPU was in Monitor mode (and the controlling EL is 3). This meant that page table walks in Monitor mode would look at the wrong SCTLR, TCR, etc and would generally fault when they should not.
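To make the regime_el() point concrete, here is a minimal sketch (all names are invented; only the idea is taken from the commit) of what such a lookup would have to consider once Secure and NonSecure PL1&0 share index values:

```c
/* Sketch only: with shared PL1&0 index values, the mmu_idx by itself
 * can only say "EL1 regime", so a regime_el()-style lookup would also
 * need the security space / CPU mode to report EL3 for Monitor mode.
 * All names here are invented. */
#include <stdbool.h>

typedef enum { SS_NONSECURE, SS_SECURE } SecSpaceSketch;

int regime_el_sketch(int shared_pl10_idx, SecSpaceSketch ss, bool monitor_mode)
{
    (void)shared_pl10_idx;  /* the index alone cannot distinguish Mon from EL1 */
    return (ss == SS_SECURE && monitor_mode) ? 3 : 1;
}
```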
Rather than trying to make the complicated changes needed to rescue the design of 4c2c04746932, we revert it in order to instead take the route that that commit describes as "the most straightforward" fix, where we add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN".
This revert will re-expose the "spurious alignment faults in Secure PL0" issue #2326; we'll fix it again in the next commit.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20241101142845.1712482-2-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
(cherry picked from commit 056c5c90c171c4895b407af0cf3d198e1d44b40f)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
# f15f7273 | 05-Nov-2024 | Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-target-arm-20241105' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
 * Fix MMU indexes for AArch32 Secure PL1&0 in a less complex and buggy way
 * Fix SVE SDOT/UDOT/USDOT (4-way, indexed)
 * softfloat: set 2-operand NaN propagation rule at runtime
 * disas: Fix build against Capstone v6 (again)
 * hw/rtc/ds1338: Trace send and receive operations
 * hw/timer/imx_gpt: Convert DPRINTF to trace events
 * hw/watchdog/wdt_imx2: Remove redundant assignment
 * hw/sensor/tmp105: Convert printf() to trace event, add tracing for read/write access
 * hw/net/npcm_gmac: Change error log to trace event
 * target/arm: Enable FEAT_CMOW for -cpu max
* tag 'pull-target-arm-20241105' of https://git.linaro.org/people/pmaydell/qemu-arm: (31 commits)
  target/arm: Enable FEAT_CMOW for -cpu max
  hw/net/npcm_gmac: Change error log to trace event
  hw/sensor/tmp105: Convert printf() to trace event, add tracing for read/write access
  hw/watchdog/wdt_imx2: Remove redundant assignment
  hw/timer/imx_gpt: Convert DPRINTF to trace events
  hw/rtc/ds1338: Trace send and receive operations
  disas: Fix build against Capstone v6 (again)
  target/arm: Fix SVE SDOT/UDOT/USDOT (4-way, indexed)
  target/arm: Add new MMU indexes for AArch32 Secure PL1&0
  Revert "target/arm: Fix usage of MMU indexes when EL3 is AArch32"
  softfloat: Remove fallback rule from pickNaN()
  target/rx: Explicitly set 2-NaN propagation rule
  target/openrisc: Explicitly set 2-NaN propagation rule
  target/microblaze: Explicitly set 2-NaN propagation rule
  target/microblaze: Move setting of float rounding mode to reset
  target/alpha: Explicitly set 2-NaN propagation rule
  target/i386: Set 2-NaN propagation rule explicitly
  target/xtensa: Explicitly set 2-NaN propagation rule
  target/xtensa: Factor out calls to set_use_first_nan()
  target/sparc: Explicitly set 2-NaN propagation rule
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# efbe180a | 05-Nov-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Add new MMU indexes for AArch32 Secure PL1&0
Our current usage of MMU indexes when EL3 is AArch32 is confused. Architecturally, when EL3 is AArch32, all Secure code runs under the Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the other privileged modes (PL1)
 * code at EL0 (Secure PL0)
This is different from when EL3 is AArch64, in which case EL3 is its own translation regime, and EL1 and EL0 (whether AArch32 or AArch64) have their own regime.
We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't do anything special about Secure PL0, which meant it used the same ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug where arm_sctlr() incorrectly picked the NonSecure SCTLR as the controlling register when in Secure PL0, which meant we were spuriously generating alignment faults because we were looking at the wrong SCTLR control bits.
The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that we wouldn't honour the PAN bit for Secure PL1, because there's no equivalent _PAN mmu index for it.
Fix this by adding two new MMU indexes:
 * ARMMMUIdx_E30_0 is for Secure PL0
 * ARMMMUIdx_E30_3_PAN is for Secure PL1 when PAN is enabled
The existing ARMMMUIdx_E3 is used to mean "Secure PL1 without PAN" (and would be named ARMMMUIdx_E30_3 in an AArch32-centric scheme).
These extra two indexes bring us up to the maximum of 16 that the core code can currently support.
This commit:
 * adds the new MMU index handling to the various places where we deal in MMU index values
 * adds assertions that we aren't AArch32 EL3 in a couple of places that currently use the E10 indexes, to document why they don't also need to handle the E30 indexes
 * documents in a comment why regime_has_2_ranges() doesn't need updating
Notes for backporting: this commit depends on the preceding revert of 4c2c04746932; that revert and this commit should probably be backported to everywhere that we originally backported 4c2c04746932.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2588
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20241101142845.1712482-3-peter.maydell@linaro.org
# 056c5c90 | 05-Nov-2024 | Peter Maydell <peter.maydell@linaro.org>
Revert "target/arm: Fix usage of MMU indexes when EL3 is AArch32"
This reverts commit 4c2c0474693229c1f533239bb983495c5427784d.
This commit tried to fix a problem with our usage of MMU indexes when EL3 is AArch32, using what it described as a "more complicated approach" where we share the same MMU index values for Secure PL1&0 and NonSecure PL1&0. In theory this should work, but the change didn't account for (at least) two things:
(1) The design change means we need to flush the TLBs at any point where the CPU state flips from one to the other. We already flush the TLB when SCR.NS is changed, but we don't flush the TLB when we take an exception from NS PL1&0 into Mon or when we return from Mon to NS PL1&0, and the commit didn't add any code to do that.
(2) The ATS12NS* address translate instructions allow Mon code (which is Secure) to do a stage 1+2 page table walk for NS. I thought this was OK because do_ats_write() does a page table walk which doesn't use the TLBs, so because it can pass both the MMU index and also an ARMSecuritySpace argument we can tell the table walk that we want NS stage1+2, not S. But that means that all the code within the ptw that needs to find e.g. the regime EL cannot do so only with an mmu_idx -- all these functions like regime_sctlr(), regime_el(), etc would need to pass both an mmu_idx and the security_space, so they can tell whether this is a translation regime controlled by EL1 or EL3 (and so whether to look at SCTLR.S or SCTLR.NS, etc).
In particular, because regime_el() wasn't updated to look at the ARMSecuritySpace it would return 1 even when the CPU was in Monitor mode (and the controlling EL is 3). This meant that page table walks in Monitor mode would look at the wrong SCTLR, TCR, etc and would generally fault when they should not.
Rather than trying to make the complicated changes needed to rescue the design of 4c2c04746932, we revert it in order to instead take the route that that commit describes as "the most straightforward" fix, where we add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN".
This revert will re-expose the "spurious alignment faults in Secure PL0" issue #2326; we'll fix it again in the next commit.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-id: 20241101142845.1712482-2-peter.maydell@linaro.org
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
# ae0a9ccf | 29-Oct-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Don't assert in regime_is_user() for E10 mmuidx values
In regime_is_user() we assert if we're passed an ARMMMUIdx_E10_* mmuidx value. This used to make sense because we only used this function in ptw.c and would never use it on this kind of stage 1+2 mmuidx, only for an individual stage 1 or stage 2 mmuidx.
However, when we implemented FEAT_E0PD we added a callsite in aa64_va_parameters(), which means this can now be called for stage 1+2 mmuidx values if the guest sets the TCR_ELx.{E0PD0,E0PD1} bits to enable use of the feature. This will then result in an assertion failure later, for instance on a TLBI operation:
#6  0x00007ffff6d0e70f in g_assertion_message_expr (domain=0x0, file=0x55555676eeba "../../target/arm/internals.h", line=978, func=0x555556771d48 <__func__.5> "regime_is_user", expr=<optimised out>) at ../../../glib/gtestutils.c:3279
#7  0x0000555555f286d2 in regime_is_user (env=0x555557f2fe00, mmu_idx=ARMMMUIdx_E10_0) at ../../target/arm/internals.h:978
#8  0x0000555555f3e31c in aa64_va_parameters (env=0x555557f2fe00, va=18446744073709551615, mmu_idx=ARMMMUIdx_E10_0, data=true, el1_is_aa32=false) at ../../target/arm/helper.c:12048
#9  0x0000555555f3163b in tlbi_aa64_get_range (env=0x555557f2fe00, mmuidx=ARMMMUIdx_E10_0, value=106721347371041) at ../../target/arm/helper.c:5214
#10 0x0000555555f317e8 in do_rvae_write (env=0x555557f2fe00, value=106721347371041, idxmap=21, synced=true) at ../../target/arm/helper.c:5260
#11 0x0000555555f31925 in tlbi_aa64_rvae1is_write (env=0x555557f2fe00, ri=0x555557fbeae0, value=106721347371041) at ../../target/arm/helper.c:5302
#12 0x0000555556036f8f in helper_set_cp_reg64 (env=0x555557f2fe00, rip=0x555557fbeae0, value=106721347371041) at ../../target/arm/tcg/op_helper.c:965
Since we do know whether these mmuidx values are for usermode or not, we can easily make regime_is_user() handle them: ARMMMUIdx_E10_0 is user, and the other two are not.
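A standalone sketch of that extra handling is below; it is illustrative only (the real check lives in target/arm/internals.h and covers many more index values), with the enum trimmed to the three cases the commit discusses and a _sketch suffix on the names.

```c
/* Sketch of extending a regime_is_user()-style check to accept the
 * stage 1+2 indexes instead of asserting. The enum is trimmed to the
 * values discussed in the commit; the real function covers many more. */
#include <stdbool.h>

typedef enum {
    ARMMMUIdx_E10_0,     /* EL1&0 regime, running at EL0 */
    ARMMMUIdx_E10_1,     /* EL1&0 regime, running at EL1 */
    ARMMMUIdx_E10_1_PAN, /* EL1&0 regime, running at EL1, PAN enabled */
} E10IdxSketch;

bool regime_is_user_sketch(E10IdxSketch idx)
{
    switch (idx) {
    case ARMMMUIdx_E10_0:
        return true;   /* only the EL0 variant is a user regime */
    case ARMMMUIdx_E10_1:
    case ARMMMUIdx_E10_1_PAN:
    default:
        return false;
    }
}
```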
Cc: qemu-stable@nongnu.org
Fixes: e4c93e44ab103f ("target/arm: Implement FEAT_E0PD")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20241017172331.822587-1-peter.maydell@linaro.org
(cherry picked from commit 1505b651fdbd9af59a4a90876a62ae7ea2d4cd39)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
# ea8ae47b | 31-Oct-2024 | Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-target-arm-20241029' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
 * arm/kvm: add support for MTE
 * docs/system/cpu-hotplug: Update example's socket-id/core-id
 * target/arm: Store FPSR cumulative exception bits in env->vfp.fpsr
 * target/arm: Don't assert in regime_is_user() for E10 mmuidx values
 * hw/sd/omap_mmc: Fix breakage of OMAP MMC controller
 * tests/functional: Add functional tests for collie, sx1
 * scripts/symlink-install-tree.py: Fix MESONINTROSPECT parsing
 * docs/system/arm: Document remaining undocumented boards
 * target/arm: Fix arithmetic underflow in SETM instruction
 * docs/devel/reset: Fix minor grammatical error
 * target/arm: kvm: require KVM_CAP_DEVICE_CTRL
* tag 'pull-target-arm-20241029' of https://git.linaro.org/people/pmaydell/qemu-arm:
  target/arm: kvm: require KVM_CAP_DEVICE_CTRL
  docs/devel/reset: Fix minor grammatical error
  target/arm: Fix arithmetic underflow in SETM instruction
  docs/system/target-arm.rst: Remove "many boards are undocumented" note
  docs/system/arm: Add placeholder docs for mcimx6ul-evk and mcimx7d-sabre
  docs/system/arm: Add placeholder doc for xlnx-zcu102 board
  docs/system/arm: Add placeholder doc for exynos4 boards
  docs/system/arm: Split fby35 out from aspeed.rst
  docs/system/arm: Don't use wildcard '*-bmc' in doc titles
  docs/system/arm/stm32: List olimex-stm32-h405 in document title
  scripts/symlink-install-tree.py: Fix MESONINTROSPECT parsing
  tests/functional: Add a functional test for the sx1 board
  tests/functional: Add a functional test for the collie board
  hw/sd/omap_mmc: Don't use sd_cmd_type_t
  target/arm: Don't assert in regime_is_user() for E10 mmuidx values
  target/arm: Store FPSR cumulative exception bits in env->vfp.fpsr
  docs/system/cpu-hotplug: Update example's socket-id/core-id
  arm/kvm: add support for MTE
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# 1505b651 | 29-Oct-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Don't assert in regime_is_user() for E10 mmuidx values
In regime_is_user() we assert if we're passed an ARMMMUIdx_E10_* mmuidx value. This used to make sense because we only used this function in ptw.c and would never use it on this kind of stage 1+2 mmuidx, only for an individual stage 1 or stage 2 mmuidx.
However, when we implemented FEAT_E0PD we added a callsite in aa64_va_parameters(), which means this can now be called for stage 1+2 mmuidx values if the guest sets the TCR_ELx.{E0PD0,E0PD1} bits to enable use of the feature. This will then result in an assertion failure later, for instance on a TLBI operation:
#6  0x00007ffff6d0e70f in g_assertion_message_expr (domain=0x0, file=0x55555676eeba "../../target/arm/internals.h", line=978, func=0x555556771d48 <__func__.5> "regime_is_user", expr=<optimised out>) at ../../../glib/gtestutils.c:3279
#7  0x0000555555f286d2 in regime_is_user (env=0x555557f2fe00, mmu_idx=ARMMMUIdx_E10_0) at ../../target/arm/internals.h:978
#8  0x0000555555f3e31c in aa64_va_parameters (env=0x555557f2fe00, va=18446744073709551615, mmu_idx=ARMMMUIdx_E10_0, data=true, el1_is_aa32=false) at ../../target/arm/helper.c:12048
#9  0x0000555555f3163b in tlbi_aa64_get_range (env=0x555557f2fe00, mmuidx=ARMMMUIdx_E10_0, value=106721347371041) at ../../target/arm/helper.c:5214
#10 0x0000555555f317e8 in do_rvae_write (env=0x555557f2fe00, value=106721347371041, idxmap=21, synced=true) at ../../target/arm/helper.c:5260
#11 0x0000555555f31925 in tlbi_aa64_rvae1is_write (env=0x555557f2fe00, ri=0x555557fbeae0, value=106721347371041) at ../../target/arm/helper.c:5302
#12 0x0000555556036f8f in helper_set_cp_reg64 (env=0x555557f2fe00, rip=0x555557fbeae0, value=106721347371041) at ../../target/arm/tcg/op_helper.c:965
Since we do know whether these mmuidx values are for usermode or not, we can easily make regime_is_user() handle them: ARMMMUIdx_E10_0 is user, and the other two are not.
Cc: qemu-stable@nongnu.org
Fixes: e4c93e44ab103f ("target/arm: Implement FEAT_E0PD")
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Tested-by: Alex Bennée <alex.bennee@linaro.org>
Message-id: 20241017172331.822587-1-peter.maydell@linaro.org
Revision tags: v9.1.1

# 3860a2a8 | 14-Oct-2024 | Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-tcg-20241013' of https://gitlab.com/rth7680/qemu into staging
linux-user/i386: Emulate orig_ax
linux-user/vm86: Fix compilation with Clang
tcg: remove singlestep_enabled from DisasContextBase
accel/tcg: Add TCGCPUOps.tlb_fill_align
target/hppa: Handle alignment faults in hppa_get_physical_address
target/arm: Fix alignment fault priority in get_phys_addr_lpae
* tag 'pull-tcg-20241013' of https://gitlab.com/rth7680/qemu: (27 commits)
  target/arm: Fix alignment fault priority in get_phys_addr_lpae
  target/arm: Implement TCGCPUOps.tlb_fill_align
  target/arm: Move device detection earlier in get_phys_addr_lpae
  target/arm: Pass MemOp to get_phys_addr_lpae
  target/arm: Pass MemOp through get_phys_addr_twostage
  target/arm: Pass MemOp to get_phys_addr_nogpc
  target/arm: Pass MemOp to get_phys_addr_gpc
  target/arm: Pass MemOp to get_phys_addr_with_space_nogpc
  target/arm: Pass MemOp to get_phys_addr
  target/hppa: Implement TCGCPUOps.tlb_fill_align
  target/hppa: Handle alignment faults in hppa_get_physical_address
  target/hppa: Fix priority of T, D, and B page faults
  target/hppa: Perform access rights before protection id check
  target/hppa: Add MemOp argument to hppa_get_physical_address
  accel/tcg: Use the alignment test in tlb_fill_align
  accel/tcg: Add TCGCPUOps.tlb_fill_align
  include/exec/memop: Introduce memop_atomicity_bits
  include/exec/memop: Rename get_alignment_bits
  include/exec/memop: Move get_alignment_bits from tcg.h
  accel/tcg: Assert noreturn from write-only page for atomics
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# 1ba3cb88 | 07-Oct-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Implement TCGCPUOps.tlb_fill_align
Fill in the tlb_fill_align hook. Handle alignment not due to memory type, since that's no longer handled by generic code. Pass memop to get_phys_addr.
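As a rough illustration of the kind of test such a hook centralizes (the type and function below are invented and do not reflect the real TCGCPUOps signature): the alignment requirement implied by the memory operation is checked before any translation is attempted.

```c
/* Sketch only: the general shape of "check alignment before walking
 * the page tables". Names and behaviour are invented; the real hook
 * has a different signature and raises a guest alignment fault. */
#include <stdbool.h>
#include <stdint.h>

/* align_bits: log2 of the alignment the access requires (0 = none) */
bool access_is_aligned(uint64_t addr, unsigned align_bits)
{
    uint64_t mask = (UINT64_C(1) << align_bits) - 1;
    return (addr & mask) == 0;
}
```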
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
# 29b4d7db | 05-Oct-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Pass MemOp to get_phys_addr_with_space_nogpc
Zero is the safe do-nothing value for callers to use.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
# ec2c9337 | 05-Oct-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Pass MemOp to get_phys_addr
Zero is the safe do-nothing value for callers to use.
Reviewed-by: Helge Deller <deller@gmx.de>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
# 767e7d8a | 27-Sep-2024 | Ard Biesheuvel <ardb@kernel.org>
target/arm: Avoid target_ulong for physical address lookups
target_ulong is typedef'ed as a 32-bit integer when building the qemu-system-arm target, and this is smaller than the size of an intermediate physical address when LPAE is being used.
Given that Linux may place leaf level user page tables in high memory when built for LPAE, the kernel will crash with an external abort as soon as it enters user space when running with more than ~3 GiB of system RAM.
So replace target_ulong with vaddr in places where it may carry an address value that is not representable in 32 bits.
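The width problem is easy to reproduce in isolation. In the sketch below the typedefs merely stand in for the real ones (they are assumptions for illustration, not QEMU's definitions): a 37-bit LPAE intermediate physical address survives a 64-bit variable but is silently truncated by a 32-bit one.

```c
/* Sketch: a >32-bit LPAE intermediate physical address does not fit in
 * a 32-bit variable. The typedefs stand in for (and are not) QEMU's
 * target_ulong and vaddr. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t target_ulong_sketch; /* 32 bits for qemu-system-arm */
typedef uint64_t vaddr_sketch;        /* wide enough for LPAE addresses */

int main(void)
{
    uint64_t ipa = UINT64_C(0x1234567890);          /* a 37-bit address */
    target_ulong_sketch narrow = (target_ulong_sketch)ipa;
    vaddr_sketch wide = ipa;

    printf("narrow: 0x%" PRIx32 "\n", narrow);      /* 0x34567890 -- truncated */
    printf("wide:   0x%" PRIx64 "\n", wide);        /* 0x1234567890 -- intact */
    return 0;
}
```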
Fixes: f3639a64f602ea ("target/arm: Use softmmu tlbs for page table walking")
Cc: qemu-stable@nongnu.org
Reported-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Message-id: 20240927071051.1444768-1-ardb+git@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
(cherry picked from commit 67d762e716a7127ecc114e9708254316dd521911)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
# 062cfce8 | 01-Oct-2024 | Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-target-arm-20241001' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
 * MAINTAINERS: Update STM32L4x5 and B-L475E-IOT01A maintainers
 * hw/arm/xlnx: Connect secondary CGEM IRQs
 * m25p80: Add SFDP table for mt35xu01g flash
 * target/arm: Avoid target_ulong for physical address lookups
 * hw/ssi/xilinx_spips: Fix flash erase assert in dual parallel configuration
 * hw: fix memory leak in IRQState allocation
 * hw/sd/sdcard: Fix handling of disabled boot partitions
 * arm: Remove deprecated board models
* tag 'pull-target-arm-20241001' of https://git.linaro.org/people/pmaydell/qemu-arm: (54 commits)
  hw: Remove omap2 specific defines and enums
  hw/dma: Remove omap_dma4 device
  hw/misc/omap_clk: Remove OMAP2-specifics
  hw/misc: Remove omap_l4 device
  hw/display: Remove omap_dss
  hw/misc: Remove omap_tap device
  hw/ssi: Remove omap_mcspi
  hw/timer: Remove omap_synctimer
  hw/timer: Remove omap_gptimer
  hw/misc: Remove omap_gpmc
  hw/misc: Remove omap_sdrc device
  hw/sd: Remove omap2_mmc device
  hw/intc: Remove omap2-intc device
  hw/char: Remove omap2_uart
  hw/gpio: Remove TYPE_OMAP2_GPIO
  hw/arm: Remove omap2.c
  docs: Document removal of old Arm boards
  hw/usb: Remove MUSB USB host controller
  hw/usb: Remove tusb6010 USB controller
  hw/block: Remove OneNAND device
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# 67d762e7 | 27-Sep-2024 | Ard Biesheuvel <ardb@kernel.org>
target/arm: Avoid target_ulong for physical address lookups
target_ulong is typedef'ed as a 32-bit integer when building the qemu-system-arm target, and this is smaller than the size of an intermediate physical address when LPAE is being used.
Given that Linux may place leaf level user page tables in high memory when built for LPAE, the kernel will crash with an external abort as soon as it enters user space when running with more than ~3 GiB of system RAM.
So replace target_ulong with vaddr in places where it may carry an address value that is not representable in 32 bits.
Fixes: f3639a64f602ea ("target/arm: Use softmmu tlbs for page table walking")
Cc: qemu-stable@nongnu.org
Reported-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Message-id: 20240927071051.1444768-1-ardb+git@google.com
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# 28ae3179 | 13-Sep-2024 | Peter Maydell <peter.maydell@linaro.org>
Merge tag 'pull-target-arm-20240913' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
 * s390: convert s390 virtio-ccw and CPU to three-phase reset
 * reset: remove now-unused device_class_set_parent_reset()
 * reset: introduce device_class_set_legacy_reset()
 * reset: remove unneeded transitional machinery
 * kvm: Use 'unsigned long' for request argument in functions wrapping ioctl()
 * hvf: arm: Implement and use hvf_get_physical_address_range so VMs can have larger-than-36-bit IPA spaces when the host supports this
 * target/arm/tcg: refine cache descriptions with a wrapper
 * hw/net/can/xlnx-versal-canfd: fix various bugs
 * MAINTAINERS: update versal, CAN maintainer entries
 * hw/intc/arm_gic: fix spurious level triggered interrupts
* tag 'pull-target-arm-20240913' of https://git.linaro.org/people/pmaydell/qemu-arm: (27 commits)
  hw/intc/arm_gic: fix spurious level triggered interrupts
  MAINTAINERS: Add my-self as CAN maintainer
  MAINTAINERS: Update Xilinx Versal OSPI maintainer's email address
  MAINTAINERS: Remove Vikram Garhwal as maintainer
  hw/net/can/xlnx-versal-canfd: Fix FIFO issues
  hw/net/can/xlnx-versal-canfd: Simplify DLC conversions
  hw/net/can/xlnx-versal-canfd: Fix byte ordering
  hw/net/can/xlnx-versal-canfd: Handle flags correctly
  hw/net/can/xlnx-versal-canfd: Translate CAN ID registers
  hw/net/can/xlnx-versal-canfd: Fix CAN FD flag check
  hw/net/can/xlnx-versal-canfd: Fix interrupt level
  target/arm/tcg: refine cache descriptions with a wrapper
  hvf: arm: Implement and use hvf_get_physical_address_range
  hvf: Split up hv_vm_create logic per arch
  hw/boards: Add hvf_get_physical_address_range to MachineClass
  kvm: Use 'unsigned long' for request argument in functions wrapping ioctl()
  hw/core/resettable: Remove transitional_function machinery
  hw/core/qdev: Simplify legacy_reset handling
  hw: Remove device_phases_reset()
  hw: Rename DeviceClass::reset field to legacy_reset
  ...
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
# d54ffa54 | 13-Sep-2024 | Danny Canter <danny_canter@apple.com>
hvf: arm: Implement and use hvf_get_physical_address_range
This patch's main focus is to use the previously added hvf_get_physical_address_range to inform VM creation about the IPA size we need for the VM, so we can extend the default 36b IPA size and support VMs with 64+GB of RAM. This is done by freezing the memory map, computing the highest GPA and then (depending on if the platform supports an IPA size that large) telling the kernel to use a size >= for the VM. In pursuit of this a couple of things related to how we handle the physical address range we expose to guests were altered, but for an explanation of what we were doing:
Today, to get the IPA size we were reading id_aa64mmfr0_el1's PARange field from a newly made vcpu. Unfortunately, HVF just returns the hosts PARange directly for the initial value and not the IPA size that will actually back the VM, so we believe we have much more address space than we actually do today it seems.
Starting in macOS 13.0 some APIs were introduced to be able to query the maximum IPA size the kernel supports, and to set the IPA size for a given VM. However, this still has a couple of issues on < macOS 15. Up until macOS 15 (and if the hardware supported it) the max IPA size was 39 bits which is not a valid PARange value, so we can't clamp down what we advertise in the vcpu's id_aa64mmfr0_el1 to our IPA size. Starting in macOS 15 however, the maximum IPA size is 40 bits (if it's supported in the hardware as well) which is also a valid PARange value so we can set our IPA size to the maximum as well as clamp down the PARange we advertise to the guest. This allows VMs with 64+ GB of RAM and should fix the oddness of the PARange situation as well.
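A back-of-the-envelope version of the "freeze the memory map, find the highest GPA, derive the IPA size" step might look like the helper below; it is an assumption for illustration only and is not the hvf code.

```c
/* Sketch only: how many IPA bits are needed to cover the highest guest
 * physical address. Not the hvf implementation; the 36-bit floor just
 * mirrors the old default mentioned in the commit. */
#include <stdint.h>

unsigned ipa_bits_for_max_gpa(uint64_t max_gpa)
{
    unsigned bits = 0;

    while (bits < 64 && (max_gpa >> bits) != 0) {
        bits++;
    }
    /* e.g. a memory map topping out above 64 GiB needs more than 36 bits */
    return bits < 36 ? 36 : bits;
}
```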
Signed-off-by: Danny Canter <danny_canter@apple.com>
Message-id: 20240828111552.93482-4-danny_canter@apple.com
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Revision tags: v9.1.0

# 3cc050c5 | 13-Aug-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-target-arm-20240813' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
 * hw/misc/stm32l4x5_rcc: Add validation for MCOPRE and MCOSEL values
 * target/arm: Clear high SVE elements in handle_vec_simd_wshli
 * target/arm: Fix usage of MMU indexes when EL3 is AArch32
* tag 'pull-target-arm-20240813' of https://git.linaro.org/people/pmaydell/qemu-arm:
  target/arm: Fix usage of MMU indexes when EL3 is AArch32
  target/arm: Update translation regime comment for new features
  target/arm: Clear high SVE elements in handle_vec_simd_wshli
  hw/misc/stm32l4x5_rcc: Add validation for MCOPRE and MCOSEL values
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
# 4c2c0474 | 09-Aug-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Fix usage of MMU indexes when EL3 is AArch32
Our current usage of MMU indexes when EL3 is AArch32 is confused. Architecturally, when EL3 is AArch32, all Secure code runs under the Secure PL1&0 translation regime:
 * code at EL3, which might be Mon, or SVC, or any of the other privileged modes (PL1)
 * code at EL0 (Secure PL0)
This is different from when EL3 is AArch64, in which case EL3 is its own translation regime, and EL1 and EL0 (whether AArch32 or AArch64) have their own regime.
We claimed to be mapping Secure PL1 to our ARMMMUIdx_EL3, but didn't do anything special about Secure PL0, which meant it used the same ARMMMUIdx_EL10_0 that NonSecure PL0 does. This resulted in a bug where arm_sctlr() incorrectly picked the NonSecure SCTLR as the controlling register when in Secure PL0, which meant we were spuriously generating alignment faults because we were looking at the wrong SCTLR control bits.
The use of ARMMMUIdx_EL3 for Secure PL1 also resulted in the bug that we wouldn't honour the PAN bit for Secure PL1, because there's no equivalent _PAN mmu index for it.
We could fix this in one of two ways:
 * The most straightforward is to add new MMU indexes EL30_0, EL30_3, EL30_3_PAN to correspond to "Secure PL1&0 at PL0", "Secure PL1&0 at PL1", and "Secure PL1&0 at PL1 with PAN". This matches how we use indexes for the AArch64 regimes, and preserves properties like being able to determine the privilege level from an MMU index without any other information. However it would add two MMU indexes (we can share one with ARMMMUIdx_EL3), and we are already using 14 of the 16 that the core TLB code permits.
 * The more complicated approach is the one we take here. We use the same MMU indexes (E10_0, E10_1, E10_1_PAN) for Secure PL1&0 as we do for NonSecure PL1&0. This saves on MMU indexes, but means we need to check in some places whether we're in the Secure PL1&0 regime or not before we interpret an MMU index.
The changes in this commit were created by auditing all the places where we use specific ARMMMUIdx_ values, and checking whether they needed to be changed to handle the new index value usage.
Note for potential stable backports: taking also the previous (comment-change-only) commit might make the backport easier.
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2326
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Bernhard Beschow <shentey@gmail.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240809160430.1144805-3-peter.maydell@linaro.org
# 26b09663 | 22-Jul-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-maintainer-9.1-rc0-230724-1' of https://gitlab.com/stsquad/qemu into staging
Maintainer updates for testing, gdbstub, semihosting, plugins
- bump python in *BSD images via libvirt-ci
- remove old unused Leon3 Avocado test
- re-factor gdb command extension
- add stoptrigger plugin to contrib
- ensure plugin mem callbacks properly sized
- reduce check-tcg noise of inline plugin test
- fix register dumping in execlog plugin
- restrict semihosting to TCG builds
- fix regex in MTE test
* tag 'pull-maintainer-9.1-rc0-230724-1' of https://gitlab.com/stsquad/qemu:
  tests/tcg/aarch64: Fix test-mte.py
  semihosting: Restrict to TCG
  target/xtensa: Restrict semihosting to TCG
  target/riscv: Restrict semihosting to TCG
  target/mips: Restrict semihosting to TCG
  target/m68k: Restrict semihosting to TCG
  target/mips: Add semihosting stub
  target/m68k: Add semihosting stub
  semihosting: Include missing 'gdbstub/syscalls.h' header
  plugins/execlog.c: correct dump of registers values
  tests/plugins: use qemu_plugin_outs for inline stats
  plugins: fix mem callback array size
  plugins/stoptrigger: TCG plugin to stop execution under conditions
  gdbstub: Re-factor gdb command extensions
  tests/avocado: Remove non-working sparc leon3 test
  testing: bump to latest libvirt-ci
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
# e8122a71 | 18-Jul-2024 | Alex Bennée <alex.bennee@linaro.org>
gdbstub: Re-factor gdb command extensions
Coverity reported a memory leak (CID 1549757) in this code and its admittedly rather clumsy handling of extending the command table. Instead of handing over a full array of the commands, let's use the lighter-weight GPtrArray and simply test for the presence of each entry as we go. This avoids complications of transferring ownership of arrays and keeps the final command entries as static entries in the target code.
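The pattern described here leans on GLib's GPtrArray. A minimal standalone sketch (the entry struct, command name and handler are invented; only the g_ptr_array_* calls are real GLib API) shows why no array ownership needs to change hands: the table stores pointers to static entries that remain owned by the target code.

```c
/* Sketch of the GPtrArray pattern (entry struct and names invented;
 * g_ptr_array_new/add/free are real GLib calls).
 * Build: cc sketch.c $(pkg-config --cflags --libs glib-2.0) */
#include <glib.h>

typedef struct {
    const char *name;
    void (*handler)(const char *args);
} CmdEntrySketch;

static void handle_memtag(const char *args) { (void)args; }

/* Static entry stays owned by the "target" code; the table only keeps
 * a pointer to it, so no array ownership ever changes hands. */
static const CmdEntrySketch memtag_cmd = { "QMemTag", handle_memtag };

static void extend_cmd_table(GPtrArray *table, const CmdEntrySketch *entry)
{
    if (entry) {                          /* test for presence, then append */
        g_ptr_array_add(table, (gpointer)entry);
    }
}

int main(void)
{
    GPtrArray *table = g_ptr_array_new();
    extend_cmd_table(table, &memtag_cmd);
    g_ptr_array_free(table, TRUE);        /* frees the array, not the entries */
    return 0;
}
```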
Cc: Akihiko Odaki <akihiko.odaki@daynix.com>
Cc: Gustavo Bueno Romero <gustavo.romero@linaro.org>
Cc: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Gustavo Romero <gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240718094523.1198645-4-alex.bennee@linaro.org>
# 23901b2b | 11-Jul-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-target-arm-20240711' of https://git.linaro.org/people/pmaydell/qemu-arm into staging
target-arm queue:
 * Refactor FPCR/FPSR handling in preparation for FEAT_AFP
 * More decodetree conversions
 * target/arm: Use cpu_env in cpu_untagged_addr
 * target/arm: Set arm_v7m_tcg_ops cpu_exec_halt to arm_cpu_exec_halt()
 * hw/char/pl011: Avoid division-by-zero in pl011_get_baudrate()
 * hw/misc/bcm2835_thermal: Fix access size handling in bcm2835_thermal_ops
 * accel/tcg: Make TCGCPUOps::cpu_exec_halt mandatory
 * STM32L4x5: Handle USART interrupts correctly
* tag 'pull-target-arm-20240711' of https://git.linaro.org/people/pmaydell/qemu-arm: (24 commits)
  target/arm: Convert PMULL to decodetree
  target/arm: Convert ADDHN, SUBHN, RADDHN, RSUBHN to decodetree
  target/arm: Convert SADDW, SSUBW, UADDW, USUBW to decodetree
  target/arm: Convert SQDMULL, SQDMLAL, SQDMLSL to decodetree
  target/arm: Convert SADDL, SSUBL, SABDL, SABAL, and unsigned to decodetree
  target/arm: Convert SMULL, UMULL, SMLAL, UMLAL, SMLSL, UMLSL to decodetree
  hw/arm: In STM32L4x5 SOC, connect USART devices to EXTI
  hw/misc: In STM32L4x5 EXTI, handle direct interrupts
  hw/misc: In STM32L4x5 EXTI, consolidate 2 constants
  accel/tcg: Make TCGCPUOps::cpu_exec_halt mandatory
  target: Set TCGCPUOps::cpu_exec_halt to target's has_work implementation
  target/arm: Set arm_v7m_tcg_ops cpu_exec_halt to arm_cpu_exec_halt()
  target/arm: Use cpu_env in cpu_untagged_addr
  hw/misc/bcm2835_thermal: Fix access size handling in bcm2835_thermal_ops
  hw/char/pl011: Avoid division-by-zero in pl011_get_baudrate()
  target/arm: Allow FPCR bits that aren't in FPSCR
  target/arm: Rename FPSR_MASK and FPCR_MASK and define them symbolically
  target/arm: Rename FPCR_ QC, NZCV macros to FPSR_
  target/arm: Store FPSR and FPCR in separate CPU state fields
  target/arm: Implement store_cpu_field_low32() macro
  ...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
# fcee3707 | 04-Jul-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Set arm_v7m_tcg_ops cpu_exec_halt to arm_cpu_exec_halt()
In commit a96edb687e76 we set the cpu_exec_halt field of the TCGCPUOps arm_tcg_ops to arm_cpu_exec_halt(), but we left the arm_v7m_tcg_ops struct unchanged. That isn't wrong, because for M-profile FEAT_WFxT doesn't exist and the default handling for "no cpu_exec_halt method" is correct, but it's perhaps a little confusing. We would also like to make setting the cpu_exec_halt method mandatory.
Initialize arm_v7m_tcg_ops cpu_exec_halt to the same function we use for A-profile. (On M-profile we never set up the wfxt timer so there is no change in behaviour here.)
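For illustration, the change boils down to filling in one more function pointer when the ops table is defined; the struct below is a cut-down stand-in, not the real TCGCPUOps, and the names carry a _sketch suffix to make that clear.

```c
/* Sketch only: a cut-down stand-in for the ops structure, showing the
 * designated-initializer style of wiring up a cpu_exec_halt hook. */
#include <stdbool.h>

typedef struct CPUStateSketch CPUStateSketch;   /* opaque for the sketch */

typedef struct {
    bool (*cpu_exec_halt)(CPUStateSketch *cpu);
    /* ...other hooks elided... */
} TCGCPUOpsSketch;

static bool arm_cpu_exec_halt_sketch(CPUStateSketch *cpu)
{
    (void)cpu;
    return true;  /* placeholder body */
}

/* M-profile and A-profile tables can share the same halt hook. */
const TCGCPUOpsSketch arm_v7m_tcg_ops_sketch = {
    .cpu_exec_halt = arm_cpu_exec_halt_sketch,
};
```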
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
# f2cb4026 | 05-Jul-2024 | Richard Henderson <richard.henderson@linaro.org>
Merge tag 'pull-maintainer-july24-050724-1' of https://gitlab.com/stsquad/qemu into staging
Updates for testing, plugins, gdbstub
- restore some 32 bit host builds and testing
- move some physmem tracepoint definitions
- use --userns keep-id for podman builds
- cleanup check-tcg compiler flag checking for Arm
- fix some casting in fcvt test
- tweak check-tcg inline asm for clang
- suppress some invalid clang warnings
- disable KVM for the TCI builds
- improve the insn tracking plugin
- cleanups to the lockstep plugin
- free plugin data on cpu finalise
- assert cpu->index assigned
- move qemu_plugin_vcpu_init__async into plugin code
- add support for dynamic gdb command tables
- allow targets to extend gdb capabilities
- enable user-mode MTE support
* tag 'pull-maintainer-july24-050724-1' of https://gitlab.com/stsquad/qemu: (40 commits)
  tests/tcg/aarch64: Add MTE gdbstub tests
  gdbstub: Add support for MTE in user mode
  gdbstub: Use true to set cmd_startswith
  gdbstub: Pass CPU context to command handler
  gdbstub: Make hex conversion function non-internal
  target/arm: Factor out code for setting MTE TCF0 field
  target/arm: Make some MTE helpers widely available
  target/arm: Fix exception case in allocation_tag_mem_probe
  gdbstub: Add support for target-specific stubs
  gdbstub: Move GdbCmdParseEntry into a new header file
  gdbstub: Clean up process_string_cmd
  accel/tcg: Move qemu_plugin_vcpu_init__async() to plugins/
  plugins: Free CPUPluginState before destroying vCPU state
  plugins: Ensure vCPU index is assigned in init/exit hooks
  plugins/lockstep: clean-up output
  plugins/lockstep: mention the one-insn-per-tb option
  plugins/lockstep: make mixed-mode safe
  plugins/lockstep: preserve sock_path
  test/plugins: preserve the instruction record over translations
  test/plugin: make insn plugin less noisy by default
  ...
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
# f81198ce | 05-Jul-2024 | Gustavo Romero <gustavo.romero@linaro.org>
gdbstub: Add support for MTE in user mode
This commit implements the stubs to handle the qIsAddressTagged, qMemTag, and QMemTag GDB packets, allowing all GDB 'memory-tag' subcommands to work with QEMU gdbstub on aarch64 user mode. It also implements the get/set functions for the special GDB MTE register 'tag_ctl', used to control the MTE fault type at runtime.
Signed-off-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-Id: <20240628050850.536447-11-gustavo.romero@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20240705084047.857176-40-alex.bennee@linaro.org>