59754f85 | 01-Mar-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Do memory type alignment check when translation disabled
If translation is disabled, the default memory type is Device, which requires alignment checking. This is more optimally done early via the MemOp given to the TCG memory operation.
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reported-by: Idan Horowitz <idan.horowitz@gmail.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240301204110.656742-6-richard.henderson@linaro.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1204
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
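A minimal sketch of the idea, not the actual patch: MemOp and MO_ALIGN are real TCG concepts, but the helper below and the way the translator learns that the MMU is off are illustrative assumptions.

    /* Hypothetical sketch, not the QEMU patch itself: fold an alignment
     * requirement into the TCG MemOp when stage 1 translation is disabled,
     * so the memory fast path performs the Device-memory alignment check.
     */
    static MemOp finalize_memop_sketch(bool mmu_disabled, MemOp opc)
    {
        if (mmu_disabled && !(opc & MO_AMASK)) {
            /* No explicit alignment requested: Device memory demands
             * natural alignment, so let TCG enforce it early.
             */
            opc |= MO_ALIGN;
        }
        return opc;
    }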
|
f2b4a989 | 06-Feb-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Allow access to SPSR_hyp from hyp mode
Architecturally, the AArch32 MSR/MRS to/from banked register instructions are UNPREDICTABLE for attempts to access a banked register that the guest could access in a more direct way (e.g. using this insn to access r8_fiq when already in FIQ mode). QEMU has chosen to UNDEF on all of these.
However, for the case of accessing SPSR_hyp from hyp mode, it turns out that real hardware permits this, with the same effect as if the guest had directly written to SPSR. Further, there is some guest code out there that assumes it can do this, because it happens to work on hardware: an example Cortex-R52 startup code fragment uses this, and it got copied into various other places, including Zephyr. Zephyr was fixed to not use this: https://github.com/zephyrproject-rtos/zephyr/issues/47330 but other examples are still out there, like the selftest binary for the MPS3-AN536.
For convenience of being able to run guest code, permit this UNPREDICTABLE access instead of UNDEFing it.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240206132931.38376-5-peter.maydell@linaro.org
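A rough sketch of the relaxed check; the helper name and structure are invented, and only ARM_CPU_MODE_HYP and the SPSR_hyp special case come from the description above.

    /* Illustrative only: the real change lives in the MSR/MRS (banked)
     * decode and exception-check paths.
     */
    static bool banked_spsr_access_ok(int cur_mode, int tgt_mode, bool is_spsr)
    {
        if (cur_mode == tgt_mode) {
            /*
             * Accessing a register you could reach directly is
             * UNPREDICTABLE; QEMU UNDEFs -- except SPSR_hyp from Hyp
             * mode, which hardware treats as a plain SPSR access.
             */
            return is_spsr && cur_mode == ARM_CPU_MODE_HYP;
        }
        return true; /* other banked-access checks elided from this sketch */
    }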
|
282a48ec | 06-Feb-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Add Cortex-R52 IMPDEF sysregs
Add the Cortex-R52 IMPDEF sysregs, by defining them here and also by enabling the AUXCR feature which defines the ACTLR and HACTLR registers. As is our usual practice, we make these simple reads-as-zero stubs for now.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Message-id: 20240206132931.38376-4-peter.maydell@linaro.org
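The reads-as-zero stubs follow QEMU's usual ARMCPRegInfo constant pattern; the two register names and encodings below are placeholders, not the actual Cortex-R52 set added by the patch.

    /* Sketch of the RAZ/WI stub pattern; names and encodings are examples. */
    static const ARMCPRegInfo cortex_r52_example_reginfo[] = {
        { .name = "IMP_EXAMPLE0", .cp = 15, .opc1 = 0, .crn = 15, .crm = 0,
          .opc2 = 0, .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
        { .name = "IMP_EXAMPLE1", .cp = 15, .opc1 = 1, .crn = 15, .crm = 0,
          .opc2 = 0, .access = PL1_RW, .type = ARM_CP_CONST, .resetvalue = 0 },
    };

    static void cortex_r52_register_example_regs(ARMCPU *cpu)
    {
        define_arm_cp_regs(cpu, cortex_r52_example_reginfo);
    }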
|
855f94ec | 15-Feb-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Fix SVE/SME gross MTE suppression checks
The TBI and TCMA bits are located within mtedesc, not desc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-7-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
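The nature of the fix, paraphrased rather than quoted from the diff; the MTEDESC field names are assumed to match the definitions in target/arm, and the surrounding variables are simplified.

    int bit55 = extract64(addr, 55, 1);

    /* Before the fix the field was pulled from the wrong word:
     *     int tbi = FIELD_EX32(desc, MTEDESC, TBI);
     * It lives in mtedesc:
     */
    int tbi = FIELD_EX32(mtedesc, MTEDESC, TBI);

    if (!((tbi >> bit55) & 1)) {
        /* TBI disabled for this half of the address space: suppress MTE. */
        mtedesc = 0;
    }
    /* The TCMA test is analogous, additionally considering the pointer tag. */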
|
623507cc | 15-Feb-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Handle mte in do_ldrq, do_ldro
These functions "use the standard load helpers", but fail to clean_data_tbi or populate mtedesc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-6-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
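A conceptual sketch only: apart from clean_data_tbi and mtedesc, which the message names, the helpers here are invented stand-ins for the real translate-time code.

    /* The LD1RQ/LD1RO paths need the same two steps as the other SVE loads. */
    if (s->mte_active[0]) {
        mtedesc = make_mte_descriptor_sketch(s);  /* invented: pack MTE fields */
    } else {
        mtedesc = 0;                              /* no MTE: helper skips checks */
    }
    addr = clean_data_tbi(s, addr);               /* strip TBI/tag bits first */
    call_standard_load_helper_sketch(s, addr, mtedesc);   /* invented */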
|
96fcc998 | 15-Feb-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Split out make_svemte_desc
Share code that creates mtedesc and embeds within simd_desc.
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-5-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
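A sketch of the shared helper's shape (field packing abbreviated, signature paraphrased): the MTE descriptor is assembled once and shifted into the upper bits of the value handed to simd_desc(), so one 32-bit descriptor carries both the SIMD data and the MTE information.

    static uint32_t make_svemte_desc_sketch(DisasContext *s, unsigned vsz,
                                            uint32_t sizem1, uint32_t data)
    {
        uint32_t mtedesc = 0;

        mtedesc = FIELD_DP32(mtedesc, MTEDESC, MIDX, get_mem_index(s));
        mtedesc = FIELD_DP32(mtedesc, MTEDESC, SIZEM1, sizem1);
        /* ...TBI, TCMA and WRITE are packed the same way... */

        return simd_desc(vsz, vsz, (mtedesc << SVE_MTEDESC_SHIFT) | data);
    }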
|
b12a7671 | 15-Feb-2024 | Richard Henderson <richard.henderson@linaro.org>
target/arm: Adjust and validate mtedesc sizem1
When we added SVE_MTEDESC_SHIFT, we effectively limited the maximum size of MTEDESC. Adjust SIZEM1 to consume the remaining bits (32 - 10 - 5 - 12 == 5). Assert that the data to be stored fits within the field (expecting 8 * 4 - 1 == 31, exact fit).
Cc: qemu-stable@nongnu.org
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Gustavo Romero <gustavo.romero@linaro.org>
Message-id: 20240207025210.8837-4-richard.henderson@linaro.org
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
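The arithmetic, restated: of the 32-bit descriptor, 12 + 5 + 10 bits are already spoken for, leaving 32 - 10 - 5 - 12 = 5 bits for SIZEM1, and the largest value stored is 8 * 4 - 1 = 31, an exact 5-bit fit. A sketch of the validation follows, assuming the R_MTEDESC_SIZEM1_* macros that QEMU's FIELD() definition generates.

    static uint32_t pack_sizem1_sketch(uint32_t mtedesc, uint32_t sizem1)
    {
        /* Largest expected value: 8 bytes * 4 registers - 1 == 31 == 0b11111. */
        assert(sizem1 <= R_MTEDESC_SIZEM1_MASK >> R_MTEDESC_SIZEM1_SHIFT);
        return FIELD_DP32(mtedesc, MTEDESC, SIZEM1, sizem1);
    }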
|
e2862554 | 09-Jan-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Add FEAT_NV2 to max, neoverse-n2, neoverse-v1 CPUs
Enable FEAT_NV2 on the 'max' CPU, and stop filtering it out for the Neoverse N2 and Neoverse V1 CPUs.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
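For reference, advertising FEAT_NV2 on 'max' follows the usual ID-register pattern; this is a sketch with the surrounding code paraphrased. Architecturally, ID_AA64MMFR2_EL1.NV == 2 indicates FEAT_NV2.

    uint64_t t = cpu->isar.id_aa64mmfr2;
    t = FIELD_DP64(t, ID_AA64MMFR2, NV, 2);   /* FEAT_NV2 */
    cpu->isar.id_aa64mmfr2 = t;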
|
674e5345 | 09-Jan-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Report VNCR_EL2 based faults correctly
If FEAT_NV2 redirects a system register access to a memory offset from VNCR_EL2, that access might fault. In this case we need to report the correct syndrome information:
 * Data Abort, from same-EL
 * no ISS information
 * the VNCR bit (bit 13) is set
and the exception must be taken to EL2.
Save an appropriate syndrome template when generating code; we can then use that to:
 * select the right target EL
 * reconstitute a correct final syndrome for the data abort
 * report the right syndrome if we take a FEAT_RME granule protection fault on the VNCR-based write
Note that because VNCR is bit 13, we must start keeping bit 13 in template syndromes, by adjusting ARM_INSN_START_WORD2_SHIFT.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Tested-by: Miguel Luis <miguel.luis@oracle.com>
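A sketch of the syndrome described above, built from architectural ESR_ELx bit positions rather than from QEMU's own syndrome helpers; the resulting exception is then taken to EL2 as the message says.

    static uint32_t vncr_data_abort_syndrome_sketch(uint32_t fsc)
    {
        uint32_t syn = 0;
        syn |= 0x25u << 26;   /* EC: Data Abort taken without a change in EL */
        syn |= 1u << 25;      /* IL: 32-bit instruction length */
        syn |= 1u << 13;      /* VNCR: fault on a VNCR_EL2-redirected access */
        syn |= fsc & 0x3f;    /* DFSC from the failed walk */
        return syn;           /* ISV (bit 24) left clear: no ISS information */
    }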
|
daf9b4a0 | 09-Jan-2024 | Peter Maydell <peter.maydell@linaro.org>
target/arm: Implement FEAT_NV2 redirection of sysregs to RAM
FEAT_NV2 requires that when HCR_EL2.{NV,NV2} == 0b11 then accesses by EL1 to certain system registers are redirected to RAM. The full list of affected registers is in the table in rule R_CSRPQ in the Arm ARM. The registers may be normally accessible at EL1 (like ACTLR_EL1), or normally UNDEF at EL1 (like HCR_EL2). Some registers redirect to RAM only when HCR_EL2.NV1 is 0, and some only when HCR_EL2.NV1 is 1; others trap in both cases.
Add the infrastructure for identifying which registers should be redirected and turning them into memory accesses.
This code does not set the correct syndrome or arrange for the exception to be taken to the correct target EL if the access via VNCR_EL2 faults; we will do that in the next commit.
Subsequent commits will mark up the relevant regdefs to set their nv2_redirect_offset, and if relevant one of the two flags which indicates that the redirect happens only for a particular value of HCR_EL2.NV1.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Tested-by: Miguel Luis <miguel.luis@oracle.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
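A conceptual sketch of the redirection: nv2_redirect_offset comes from the description above, while the helper name, the flag masking, and the env->cp15.vncr_el2 field name are assumptions about the shape of the code, not the actual implementation (which generates the memory access at translate time).

    static bool nv2_redirect_sketch(CPUARMState *env, const ARMCPRegInfo *ri,
                                    uint64_t *value, bool is_write)
    {
        uint64_t hcr = arm_hcr_el2_eff(env);

        if (arm_current_el(env) != 1 || !ri->nv2_redirect_offset ||
            (hcr & (HCR_NV | HCR_NV2)) != (HCR_NV | HCR_NV2)) {
            return false;   /* no redirection: take the normal sysreg path */
        }

        /* Low bits of nv2_redirect_offset assumed to hold the RAM offset;
         * any NV1-selection flags would live elsewhere in the field.
         */
        uint64_t addr = env->cp15.vncr_el2 + (ri->nv2_redirect_offset & 0xfff);
        if (is_write) {
            cpu_stq_data(env, addr, *value);    /* store the register to RAM */
        } else {
            *value = cpu_ldq_data(env, addr);   /* load the register from RAM */
        }
        return true;
    }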
|