a97d3c18 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Convert exception generation instructions to decodetree
Convert the exception generation instructions SVC, HVC, SMC, BRK and HLT to decodetree.
The old decoder decoded the halting-debug insns DCPS1, DCPS2 and DCPS3 only to then make them UNDEF; as with DRPS, we don't bother to decode them, but document the patterns in a64.decode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-8-peter.maydell@linaro.org
6e3c8049 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Convert MSR (reg), MRS, SYS, SYSL to decodetree
Convert MSR (reg), MRS, SYS, SYSL to decodetree. For QEMU these are all essentially the same instruction (system register access).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-7-peter.maydell@linaro.org Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
45d063d1 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Convert MSR (immediate) to decodetree
Convert the MSR (immediate) insn to decodetree. Our implementation has basically no commonality between the different destinations, so we decode the destination register in a64.decode.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-6-peter.maydell@linaro.org
d78b662f | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Convert CFINV, XAFLAG and AXFLAG to decodetree
Convert the CFINV, XAFLAG and AXFLAG insns to decodetree. The old decoder handles these in handle_msr_i(), but the architecture defines them as separate instructions from MSR (immediate).
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-5-peter.maydell@linaro.org
afcd5df5 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Convert barrier insns to decodetree
Convert the insns in the "Barriers" instruction class to decodetree: CLREX, DSB, DMB, ISB and SB.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-4-peter.maydell@linaro.org Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7fefc706 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Convert hint instruction space to decodetree
Convert the various instructions in the hint instruction space to decodetree.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-3-peter.maydell@linaro.org
68496d41 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Consistently use finalize_memop_asimd() for ASIMD loads/stores
In the recent refactoring we missed a few places which should be calling finalize_memop_asimd() for ASIMD loads and stores but instead are just calling finalize_memop(); fix these.
For the disas_ldst_single_struct() and disas_ldst_multiple_struct() cases, this is not a behaviour change because there the size is never MO_128 and the two finalize functions do the same thing.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
99bb43c0 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Pass memop to gen_mte_check1_mmuidx() in reg_imm9 decode
In disas_ldst_reg_imm9() we missed one place where a call to a gen_mte_check* function should now be passed the memop we have created rather than just being passed the size. Fix this.
Fixes: 0a9091424d ("target/arm: Pass memop to gen_mte_check1*") Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
7e278847 | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Return correct result for LDG when ATA=0
The LDG instruction loads the tag from a memory address (identified by [Xn + offset]), and then merges that tag into the destination register Xt. We implemented this correctly for the case when allocation tags are enabled, but didn't get it right when ATA=0: instead of merging the tag bits into Xt, we merged them into the memory address [Xn + offset] and then set Xt to that.
Merge the tag bits into the old Xt value, as they should be.
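As an illustration only, a standalone C sketch of the intended merge, assuming the allocation tag occupies bits [59:56] of the register (the helper name here is invented for the example):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Merge a 4-bit allocation tag into bits [59:56] of the old Xt value. */
    static uint64_t ldg_merge(uint64_t old_xt, uint64_t tag)
    {
        return (old_xt & ~(0xfULL << 56)) | ((tag & 0xf) << 56);
    }

    int main(void)
    {
        uint64_t xt = 0x0800dead0000beefULL;
        /* With ATA=0 the tag reads as zero, but it must still be merged
         * into the old Xt value, not into the address [Xn + offset]. */
        printf("%016" PRIx64 "\n", ldg_merge(xt, 0));
        return 0;
    }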
Cc: qemu-stable@nongnu.org Fixes: c15294c1e36a7dd9b25 ("target/arm: Implement LDG, STG, ST2G instructions") Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
243705aa | 19-Jun-2023 |
Peter Maydell <peter.maydell@linaro.org> |
target/arm: Fix return value from LDSMIN/LDSMAX 8/16 bit atomics
The atomic memory operations are supposed to return the old memory data value in the destination register. This value is not sign-extended, even if the operation is the signed minimum or maximum. (In the pseudocode for the instructions the returned data value is passed to ZeroExtend() to create the value in the register.)
We got this wrong because we were doing a 32-to-64 zero extend on the result for 8 and 16 bit data values, rather than the correct amount of zero extension.
Fix the bug by using ext8u and ext16u for the MO_8 and MO_16 data sizes rather than ext32u.
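For illustration, a standalone C sketch of the required widening (a hypothetical helper, not the QEMU code): the old value is zero-extended from the access size, so a 32-bit zero-extend is only correct for 32-bit data.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Zero-extend the old memory value from the access size, matching the
     * ZeroExtend() in the instruction pseudocode. */
    static uint64_t widen_result(uint64_t old, unsigned bytes)
    {
        switch (bytes) {
        case 1: return (uint8_t)old;   /* ext8u, not ext32u */
        case 2: return (uint16_t)old;  /* ext16u, not ext32u */
        case 4: return (uint32_t)old;  /* ext32u */
        default: return old;
        }
    }

    int main(void)
    {
        /* An 8-bit old value 0x80 that an intermediate step sign-extended
         * to 0xffffff80 must come back as 0x80, not 0xffffff80. */
        printf("%" PRIx64 "\n", widen_result(0xffffff80, 1));
        return 0;
    }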
Cc: qemu-stable@nongnu.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230602155223.2040685-2-peter.maydell@linaro.org
59b6b42c | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Enable FEAT_LSE2 for -cpu max
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-21-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5096ec5b | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Move mte check for store-exclusive
Push the mte check behind the exclusive_addr check. Document the several ways that we are still out of spec with this implementation.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-18-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
c1a1f805 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Relax ordered/atomic alignment checks for LSE2
FEAT_LSE2 only requires that atomic operations not cross a 16-byte boundary. Ordered operations may be completely unaligned if SCTLR.nAA is set.
Because this alignment check is so special, do it by hand. Make sure not to keep TCG temps live across the branch.
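The boundary condition itself is simple; a minimal standalone C sketch (the in-tree version emits the equivalent test by hand as TCG ops):

    #include <stdbool.h>
    #include <stdint.h>

    /* FEAT_LSE2: an atomic access need not be size-aligned, but it must
     * not cross a 16-byte boundary. */
    static bool lse2_atomic_ok(uint64_t addr, unsigned bytes)
    {
        return (addr & 15) + bytes <= 16;
    }

For example, an 8-byte access with (addr & 15) == 9 crosses a granule boundary and fails the check.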
Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-17-richard.henderson@linaro.org Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
83f624d9 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Add SCTLR.nAA to TBFLAG_A64
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-16-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
523da6b9 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Check alignment in helper_mte_check
Fixes a bug: with SCTLR.A set, we must raise any alignment fault before raising any MTE check fault.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-15-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
3b97520c | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Pass single_memop to gen_mte_checkN
Pass the individual memop to gen_mte_checkN. For the moment, do nothing with it.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-14-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
0a909142 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Pass memop to gen_mte_check1*
Pass the completed memop to gen_mte_check1_mmuidx. For the moment, do nothing more than extract the size.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-13-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
03176bcd | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Hoist finalize_memop out of do_fp_{ld,st}
We are going to need the complete memop beforehand, so let's not compute it twice.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-12-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
a75b66f6 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Hoist finalize_memop out of do_gpr_{ld,st}
We are going to need the complete memop beforehand, so let's not compute it twice.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-11-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
6f47e7c1 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Load/store integer pair with one tcg operation
This is required for LSE2, where the pair must be treated atomically if it does not cross a 16-byte boundary. But it simplifies the code to do this always.
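Conceptually (a hedged sketch in standalone C, little-endian register split assumed; tcg_gen_qemu_ld_i128 plays this role in the translator):

    #include <stdint.h>
    #include <string.h>

    /* Load a register pair with one 16-byte access, then split it. */
    static void ldp_sketch(const void *mem, uint64_t *lo, uint64_t *hi)
    {
        unsigned char buf[16];
        memcpy(buf, mem, 16);      /* a single access covers both registers */
        memcpy(lo, buf, 8);        /* first register: low 8 bytes */
        memcpy(hi, buf + 8, 8);    /* second register: high 8 bytes */
    }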
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-10-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
5c13983e | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Sink gen_mte_check1 into load/store_exclusive
No need to duplicate this check across multiple call sites.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-9-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
e6dd5e78 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Use tcg_gen_qemu_{ld,st}_i128 in gen_sve_{ld,st}r
Round len_align to 16 instead of 8, handling an odd 8-byte piece as part of the tail. Use MO_ATOM_NONE to indicate that all of these memory ops have only byte atomicity.
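The chunking scheme, sketched in standalone C (sizes only, no real memory ops):

    #include <stdio.h>

    static void chunk(unsigned len)
    {
        unsigned len_align = len & ~15u;   /* round down to 16, not 8 */
        unsigned i;

        for (i = 0; i < len_align; i += 16) {
            printf("16-byte op at %u\n", i);   /* i128 op, MO_ATOM_NONE */
        }
        /* Tail, including any odd 8-byte piece: try 8, 4, 2, 1. */
        for (unsigned sz = 8; sz >= 1; sz /= 2) {
            if (len - i >= sz) {
                printf("%u-byte op at %u\n", sz, i);
                i += sz;
            }
        }
    }

    int main(void)
    {
        chunk(24);   /* one 16-byte chunk plus an 8-byte tail */
        return 0;
    }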
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-8-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
e6073d88 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Use tcg_gen_qemu_st_i128 for STZG, STZ2G
This fixes a bug: these two insns should have been using atomic 16-byte stores, since MTE is ARMv8.5 and LSE2 is mandatory from ARMv8.4.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-7-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
d450bd01 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Use tcg_gen_qemu_{st,ld}_i128 for do_fp_{st,ld}
While we don't require 16-byte atomicity here, using a single larger operation simplifies the code. Introduce finalize_memop_asimd for this.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-6-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
c74cc082 | 06-Jun-2023 |
Richard Henderson <richard.henderson@linaro.org> |
target/arm: Use tcg_gen_qemu_ld_i128 for LDXP
While we don't require 16-byte atomicity here, using a single larger load simplifies the code, and makes it a closer match to STXP.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Richard Henderson <richard.henderson@linaro.org> Message-id: 20230530191438.411344-5-richard.henderson@linaro.org Signed-off-by: Peter Maydell <peter.maydell@linaro.org>