7d478306 | 30-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Split out exec/user/guest-base.h

TCG will need this declaration, without all of the other bits that come with cpu-all.h.

Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

c31e5fa4 | 28-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Remove TARGET_LONG_BITS, TCG_TYPE_TL

All uses replaced with TCGContext.addr_type.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
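
A minimal standalone sketch of the substitution pattern described here: a compile-time choice of address type (what TCG_TYPE_TL encoded via TARGET_LONG_BITS) becomes a read of a per-context field. The struct, enum, and function names below are illustrative stand-ins, not QEMU's actual definitions.

    /* demo_addr_type.c -- simplified stand-ins for TCGContext and TCGType */
    #include <stdio.h>

    typedef enum { DEMO_TYPE_I32, DEMO_TYPE_I64 } DemoType;

    typedef struct {
        DemoType addr_type;   /* guest address width, chosen per context */
    } DemoContext;

    /* Before: callers used a build-time constant; after: they ask the context. */
    static DemoType guest_addr_type(const DemoContext *s)
    {
        return s->addr_type;
    }

    int main(void)
    {
        DemoContext ctx = { .addr_type = DEMO_TYPE_I64 };
        printf("guest addresses are %s\n",
               guest_addr_type(&ctx) == DEMO_TYPE_I64 ? "64-bit" : "32-bit");
        return 0;
    }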

fecccfcc | 16-May-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Split INDEX_op_qemu_{ld,st}* for guest address size

For 32-bit hosts, we cannot simply rely on TCGContext.addr_bits, as we need one or two host registers to represent the guest address.

Create the new opcodes and update all users. Since we have not yet eliminated TARGET_LONG_BITS, only one of the two opcodes will ever be used, so we can get away with treating them the same in the backends.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
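
A toy sketch of the selection this split enables: on a 32-bit host a 32-bit guest address fits one host register while a 64-bit guest address needs a register pair, so a distinct opcode is chosen per address size. The opcode names below are invented for illustration and are not TCG's.

    /* demo_opcode_split.c */
    #include <stdio.h>

    typedef enum {
        DEMO_OP_QEMU_LD_A32,   /* guest address in one 32-bit host register    */
        DEMO_OP_QEMU_LD_A64,   /* guest address in a 32-bit host register pair */
    } DemoOpcode;

    static DemoOpcode pick_ld_opcode(int guest_addr_bits)
    {
        return guest_addr_bits == 32 ? DEMO_OP_QEMU_LD_A32 : DEMO_OP_QEMU_LD_A64;
    }

    int main(void)
    {
        printf("32-bit guest -> opcode %d\n64-bit guest -> opcode %d\n",
               pick_ld_opcode(32), pick_ld_opcode(64));
        return 0;
    }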

4baf3978 | 09-Mar-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Add addr_type to TCGContext

This will enable replacement of TARGET_LONG_BITS within tcg/.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
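
A small sketch of the kind of initialization this implies: the context field is seeded once from a build-time constant, after which code under tcg/ can consult the field instead of the macro. Everything below is a simplified stand-in, not the actual QEMU code.

    /* demo_addr_type_init.c */
    #include <stdio.h>

    #define DEMO_TARGET_LONG_BITS 64   /* stand-in for the build-time constant */

    typedef enum { DEMO_TYPE_I32, DEMO_TYPE_I64 } DemoType;
    typedef struct { DemoType addr_type; } DemoContext;

    static void demo_context_init(DemoContext *s)
    {
        /* The one remaining use of the constant: seeding the context field. */
        s->addr_type = DEMO_TARGET_LONG_BITS == 32 ? DEMO_TYPE_I32 : DEMO_TYPE_I64;
    }

    int main(void)
    {
        DemoContext ctx;
        demo_context_init(&ctx);
        printf("addr_type = %s\n", ctx.addr_type == DEMO_TYPE_I64 ? "I64" : "I32");
        return 0;
    }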

76cef4b2 | 08-Mar-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Widen tcg_gen_code pc_start argument to uint64_t

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

24e46e6c | 26-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

accel/tcg: Widen tcg-ldst.h addresses to uint64_t

Always pass the target address as uint64_t. Adjust tcg_out_{ld,st}_helper_args to match.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

c9ad8d27 | 08-Mar-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Widen gen_insn_data to uint64_t

We already pass uint64_t to restore_state_to_opc; this changes all of the other uses from insn_start through the encoding to decoding.

Reviewed-by: Anton Johansson <anjo@rev.ng>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

Revision tags: v7.2.0

e63b8a29 | 07-Nov-2022 | Richard Henderson <richard.henderson@linaro.org>

tcg: Introduce atom_and_align_for_opc

Examine MemOp for atomicity and alignment, adjusting alignment as required to implement atomicity on the host.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
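
A standalone sketch of the underlying idea: when the host can only guarantee single-copy atomicity for naturally aligned accesses, a request for an atomic access of a given size forces the enforced alignment up to at least that size. The types and the host rule below are simplified assumptions, not QEMU's MemOp machinery.

    /* demo_atom_align.c */
    #include <stdio.h>

    typedef struct {
        unsigned atom_bytes;    /* unit that must be single-copy atomic      */
        unsigned align_bytes;   /* alignment the generated code will enforce */
    } DemoAtomAlign;

    static DemoAtomAlign atom_and_align(unsigned atom_bytes, unsigned req_align_bytes)
    {
        DemoAtomAlign r = { .atom_bytes = atom_bytes, .align_bytes = req_align_bytes };
        /* Assumed host property: atomic only when naturally aligned, so raise
         * the alignment to cover the atomicity requirement. */
        if (r.align_bytes < r.atom_bytes) {
            r.align_bytes = r.atom_bytes;
        }
        return r;
    }

    int main(void)
    {
        DemoAtomAlign a = atom_and_align(8, 1);   /* atomic 8-byte access, alignment 1 requested */
        printf("atom=%u bytes, align=%u bytes\n", a.atom_bytes, a.align_bytes);
        return 0;
    }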

ebebea53 | 17-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Support TCG_TYPE_I128 in tcg_out_{ld,st}_helper_{args,ret}

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

8d314041 | 14-May-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Merge tcg_out_helper_load_regs into caller

Now that tcg_out_helper_load_regs is not recursive, we can merge it into its only caller, tcg_out_helper_load_slots.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

2462e30e | 14-May-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Introduce tcg_out_movext3

With x86_64 as host, we do not have any temporaries with which to resolve cycles, but we do have xchg. As a side bonus, the set of graphs that can be made with 3 nodes and all nodes conflicting is small: two. We can solve the cycle with a single temp.

This is required for x86_64 to handle stores of i128: 1 address register and 2 data registers.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
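
A toy illustration of the cycle case: rotating three values where every source conflicts with a destination can be done with two exchanges (xchg on x86_64), or with moves through one scratch temporary. This is a generic sketch of the idea, not the tcg_out_movext3 implementation.

    /* demo_movext3_cycle.c */
    #include <stdio.h>

    static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

    int main(void)
    {
        /* Desired rotation: r0 <- r1, r1 <- r2, r2 <- r0. */
        int r0 = 0, r1 = 1, r2 = 2;

        swap(&r0, &r1);   /* r0 = old r1, r1 = old r0 */
        swap(&r1, &r2);   /* r1 = old r2, r2 = old r0 */

        printf("r0=%d r1=%d r2=%d\n", r0, r1, r2);   /* prints: r0=1 r1=2 r2=0 */
        return 0;
    }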

12fde9bc | 06-Nov-2022 | Richard Henderson <richard.henderson@linaro.org>

tcg: Add INDEX_op_qemu_{ld,st}_i128

Add opcodes for backend support for 128-bit memory operations.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

de95016d | 07-Nov-2022 | Richard Henderson <richard.henderson@linaro.org>

accel/tcg: Implement helper_{ld,st}*_mmu for user-only

TCG backends may need to defer to a helper to implement the atomicity required by a given operation. Mirror the interface used in system mode.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

0cadc1ed | 31-Oct-2022 | Richard Henderson <richard.henderson@linaro.org>

tcg: Unify helper_{be,le}_{ld,st}*

With the current structure of cputlb.c, there is no difference between the little-endian and big-endian entry points, aside from the assert. Unify the pairs of functions.

Hoist the qemu_{ld,st}_helpers arrays to tcg.c.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

37031fef | 21-Oct-2022 | Richard Henderson <richard.henderson@linaro.org>

include/exec/memop: Add MO_ATOM_*

This field may be used to describe the precise atomicity requirements of the guest, which may then be used to constrain the methods by which it may be emulated by the host.

For instance, the AArch64 LDP (32-bit) instruction changes semantics with ARMv8.4 LSE2, from

    MO_64 | MO_ATOM_IFALIGN_PAIR
    (64-bits, single-copy atomic only on 4 byte units, nonatomic if not aligned by 4)

to

    MO_64 | MO_ATOM_WITHIN16
    (64-bits, single-copy atomic within a 16 byte block)

The former may be implemented with two 4 byte loads, or a single 8 byte load if that happens to be efficient on the host. The latter may not be implemented with two 4 byte loads and may also require a helper when misaligned.

Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
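
A standalone sketch of how a front end might pick between the two behaviours described above for the LDP (32-bit) case. The enum and helper are illustrative stand-ins; in QEMU this would be expressed with MemOp flags such as MO_64, MO_ATOM_IFALIGN_PAIR and MO_ATOM_WITHIN16.

    /* demo_ldp_atomicity.c */
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum {
        DEMO_ATOM_IFALIGN_PAIR,   /* atomic only per 4-byte half, when aligned */
        DEMO_ATOM_WITHIN16,       /* atomic as a whole within a 16-byte block  */
    } DemoAtom;

    static DemoAtom ldp32_atomicity(bool have_lse2)
    {
        /* ARMv8.4 LSE2 strengthens the guarantee for the paired 32-bit loads. */
        return have_lse2 ? DEMO_ATOM_WITHIN16 : DEMO_ATOM_IFALIGN_PAIR;
    }

    int main(void)
    {
        printf("without LSE2: %d, with LSE2: %d\n",
               ldp32_atomicity(false), ldp32_atomicity(true));
        return 0;
    }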

27823850 | 11-May-2023 | Richard Henderson <richard.henderson@linaro.org>

Merge tag 'pull-tcg-20230511-2' of https://gitlab.com/rth7680/qemu into staging

target/m68k: Fix gen_load_fp regression
accel/tcg: Ensure fairness with icount
disas: Move disas.c into the target-independent source sets
tcg: Use common routines for calling slow path helpers
tcg/*: Cleanups to qemu_ld/st constraints
tcg: Remove TARGET_ALIGNED_ONLY
accel/tcg: Reorg system mode load/store helpers

* tag 'pull-tcg-20230511-2' of https://gitlab.com/rth7680/qemu: (53 commits)
  target/loongarch: Do not include tcg-ldst.h
  accel/tcg: Reorg system mode store helpers
  accel/tcg: Reorg system mode load helpers
  accel/tcg: Introduce tlb_read_idx
  accel/tcg: Add cpu_in_serial_context
  tcg: Remove TARGET_ALIGNED_ONLY
  target/sh4: Remove TARGET_ALIGNED_ONLY
  target/sh4: Use MO_ALIGN where required
  target/nios2: Remove TARGET_ALIGNED_ONLY
  target/mips: Remove TARGET_ALIGNED_ONLY
  target/mips: Use MO_ALIGN instead of 0
  target/mips: Add missing default_tcg_memop_mask
  target/mips: Add MO_ALIGN to gen_llwp, gen_scwp
  tcg/s390x: Simplify constraints on qemu_ld/st
  tcg/s390x: Use ALGFR in constructing softmmu host address
  tcg/riscv: Simplify constraints on qemu_ld/st
  tcg/ppc: Remove unused constraint J
  tcg/ppc: Remove unused constraints A, B, C, D
  tcg/ppc: Adjust constraints on qemu_ld/st
  tcg/ppc: Reorg tcg_out_tlb_read
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

1fceff9c | 02-May-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Remove TARGET_ALIGNED_ONLY

All uses have now been expunged.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

8429a1ca | 10-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Add routines for calling slow-path helpers

Add tcg_out_ld_helper_args, tcg_out_ld_helper_ret, and tcg_out_st_helper_args. These and their subroutines use the existing knowledge of the host function call abi to load the function call arguments and return results.

These will be used to simplify the backends in turn.

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

47d38784 | 05-May-2023 | Richard Henderson <richard.henderson@linaro.org>

Merge tag 'pull-tcg-20230505' of https://gitlab.com/rth7680/qemu into staging

softfloat: Fix the incorrect computation in float32_exp2
tcg: Remove compatability helpers for qemu ld/st
target/alpha: Remove TARGET_ALIGNED_ONLY
target/hppa: Remove TARGET_ALIGNED_ONLY
target/sparc: Remove TARGET_ALIGNED_ONLY
tcg: Cleanups preparing to unify calls to qemu_ld/st helpers

* tag 'pull-tcg-20230505' of https://gitlab.com/rth7680/qemu: (42 commits)
  tcg: Widen helper_*_st[bw]_mmu val arguments
  tcg: Introduce arg_slot_stk_ofs
  tcg: Replace REG_P with arg_loc_reg_p
  tcg: Move TCGLabelQemuLdst to tcg.c
  tcg/sparc64: Pass TCGType to tcg_out_qemu_{ld,st}
  tcg/sparc64: Drop is_64 test from tcg_out_qemu_ld data return
  tcg/s390x: Introduce HostAddress
  tcg/s390x: Pass TCGType to tcg_out_qemu_{ld,st}
  tcg/riscv: Rationalize args to tcg_out_qemu_{ld,st}
  tcg/riscv: Require TCG_TARGET_REG_BITS == 64
  tcg/ppc: Introduce HostAddress
  tcg/ppc: Rationalize args to tcg_out_qemu_{ld,st}
  tcg/mips: Rationalize args to tcg_out_qemu_{ld,st}
  tcg/loongarch64: Introduce HostAddress
  tcg/loongarch64: Rationalize args to tcg_out_qemu_{ld,st}
  tcg/arm: Introduce HostAddress
  tcg/arm: Rationalize args to tcg_out_qemu_{ld,st}
  tcg/aarch64: Introduce HostAddress
  tcg/aarch64: Rationalize args to tcg_out_qemu_{ld,st}
  tcg/i386: Introduce tcg_out_testi
  ...

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

d78e4a4f | 08-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Introduce arg_slot_stk_ofs

Unify all computation of argument stack offset in one function. This requires that we adjust ref_slot to be in the same units, by adding max_reg_slots during init_call_layout.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
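
A toy model of the unification: argument slots below some count live in registers, higher slots live on the stack, and a single helper converts a slot index into a stack byte offset. The constants and names below are invented for illustration and do not match QEMU's call layout.

    /* demo_arg_slot.c */
    #include <assert.h>
    #include <stdio.h>

    enum {
        DEMO_N_REG_SLOTS = 6,    /* slots passed in registers      */
        DEMO_SLOT_SIZE   = 8,    /* bytes per stack slot           */
        DEMO_STACK_BASE  = 16,   /* offset of the first stack slot */
    };

    static int demo_slot_is_reg(unsigned slot)
    {
        return slot < DEMO_N_REG_SLOTS;
    }

    static unsigned demo_slot_stk_ofs(unsigned slot)
    {
        assert(!demo_slot_is_reg(slot));
        return DEMO_STACK_BASE + (slot - DEMO_N_REG_SLOTS) * DEMO_SLOT_SIZE;
    }

    int main(void)
    {
        printf("slot 6 -> offset %u\n", demo_slot_stk_ofs(6));   /* 16 */
        printf("slot 8 -> offset %u\n", demo_slot_stk_ofs(8));   /* 32 */
        return 0;
    }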

338b61e9 | 08-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Replace REG_P with arg_loc_reg_p

An inline function is safer than a macro, and REG_P was rather too generic.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
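
A generic illustration of why the inline function is preferred: it gets a typed parameter, a descriptive name, single evaluation of its argument, and visibility in a debugger. The names below are invented and do not reproduce the actual REG_P or arg_loc_reg_p definitions.

    /* demo_inline_vs_macro.c */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { unsigned flags; } DemoArgLoc;

    #define DEMO_LOC_REG_BIT 1u

    /* Macro version: no type checking, and every mention of the argument
     * would be re-evaluated if the macro expanded it more than once. */
    #define DEMO_REG_P(loc) (((loc)->flags & DEMO_LOC_REG_BIT) != 0)

    /* Inline-function version: typed, evaluated once, properly scoped. */
    static inline bool demo_arg_loc_reg_p(const DemoArgLoc *loc)
    {
        return (loc->flags & DEMO_LOC_REG_BIT) != 0;
    }

    int main(void)
    {
        DemoArgLoc loc = { .flags = DEMO_LOC_REG_BIT };
        printf("macro: %d, inline: %d\n", DEMO_REG_P(&loc), demo_arg_loc_reg_p(&loc));
        return 0;
    }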

2528f771 | 07-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Move TCGLabelQemuLdst to tcg.c

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

4ebc33f3 | 02-May-2023 | Richard Henderson <richard.henderson@linaro.org>

Merge tag 'pull-tcg-20230502-2' of https://gitlab.com/rth7680/qemu into staging

Misc tcg-related patch queue.

* tag 'pull-tcg-20230502-2' of https://gitlab.com/rth7680/qemu:
  tcg: Introduce tcg_out_movext2
  tcg/mips: Conditionalize tcg_out_exts_i32_i64
  tcg/loongarch64: Conditionalize tcg_out_exts_i32_i64
  accel/tcg: Add cpu_ld*_code_mmu
  migration/xbzrle: Use __attribute__((target)) for avx512
  qemu/int128: Re-shuffle Int128Alias members
  tcg: Add tcg_gen_gvec_rotrs
  tcg: Add tcg_gen_gvec_andcs
  qemu/host-utils.h: Add clz and ctz functions for lower-bit integers
  qemu/bitops.h: Limit rotate amounts
  accel/tcg: Uncache the host address for instruction fetch when tlb size < 1
  softmmu: Tidy dirtylimit_dirty_ring_full_time

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

129f1f9e | 06-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

tcg: Introduce tcg_out_movext2

This is common code in most qemu_{ld,st} slow paths, moving two registers when there may be overlap between sources and destinations. At present, this is only used by 32-bit hosts for 64-bit data, but will shortly be used for more than that.

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
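
A toy sketch of the overlap problem being solved: with two register moves, doing them in the wrong order can clobber an input, so the non-conflicting move is emitted first (a genuine swap would additionally need a scratch register or an exchange). This is a generic illustration, not the tcg_out_movext2 code.

    /* demo_movext2_overlap.c */
    #include <stdio.h>

    /* Perform d1 <- s1 and d2 <- s2, ordering the moves so that the first
     * destination does not clobber the second source. */
    static void move2(int *d1, const int *s1, int *d2, const int *s2)
    {
        if (d1 == s2) {
            *d2 = *s2;
            *d1 = *s1;
        } else {
            *d1 = *s1;
            *d2 = *s2;
        }
    }

    int main(void)
    {
        int a = 1, b = 2, c = 3;
        /* Want b <- c and a <- b: the destination of the first move is the
         * source of the second, so move2 reorders them. */
        move2(&b, &c, &a, &b);
        printf("a=%d b=%d c=%d\n", a, b, c);   /* prints: a=2 b=3 c=3 */
        return 0;
    }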

327ec8d6 | 23-Apr-2023 | Richard Henderson <richard.henderson@linaro.org>

Merge tag 'pull-tcg-20230423' of https://gitlab.com/rth7680/qemu into staging

tcg cleanups:
- Remove tcg_abort()
- Split out extensions as known backend interfaces
- Put the separate extensions together as tcg_out_movext
- Introduce tcg_out_xchg as a backend interface
- Clear TCGLabelQemuLdst on allocation
- Avoid redundant extensions for riscv

* tag 'pull-tcg-20230423' of https://gitlab.com/rth7680/qemu:
  tcg/riscv: Conditionalize tcg_out_exts_i32_i64
  tcg: Clear TCGLabelQemuLdst on allocation
  tcg: Introduce tcg_out_xchg
  tcg: Introduce tcg_out_movext
  tcg: Split out tcg_out_extrl_i64_i32
  tcg: Split out tcg_out_extu_i32_i64
  tcg: Split out tcg_out_exts_i32_i64
  tcg: Split out tcg_out_ext32u
  tcg: Split out tcg_out_ext32s
  tcg: Split out tcg_out_ext16u
  tcg: Split out tcg_out_ext16s
  tcg: Split out tcg_out_ext8u
  tcg: Split out tcg_out_ext8s
  tcg: Replace tcg_abort with g_assert_not_reached
  tcg: Replace if + tcg_abort with tcg_debug_assert

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>