Lines Matching refs:a

6  * Permission is hereby granted, free of charge, to any person obtaining a copy
192 /* test if a constant matches the constraint */
714 /* Output an opcode with a full "rm + (index<<shift) + offset" address mode.
715 We handle either RM or INDEX missing with a negative value. In 64-bit
726 /* Try for a rip-relative addressing mode. This has replaced
768 /* Use a single-byte MODRM format if possible. Note that the encoding
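
The three comments above describe emitting the x86 "base + (index << scale) + disp" address mode. As a hedged illustration in plain C (not the emitter itself; both helper names are invented), the ModRM and SIB bytes that carry this mode pack as 2/3/3-bit fields:

    #include <stdint.h>

    /* Sketch only: pack an x86 ModRM byte (mod|reg|rm) and a SIB byte
       (scale|index|base).  Fields are 2, 3, and 3 bits wide. */
    static uint8_t pack_modrm(unsigned mod, unsigned reg, unsigned rm)
    {
        return (uint8_t)((mod << 6) | ((reg & 7) << 3) | (rm & 7));
    }

    static uint8_t pack_sib(unsigned scale, unsigned index, unsigned base)
    {
        return (uint8_t)((scale << 6) | ((index & 7) << 3) | (base & 7));
    }
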
904 TCGReg r, TCGReg a)
908 tcg_out_vex_modrm(s, avx2_dup_insn[vece] + vex_l, r, 0, a);
912 /* ??? With zero in a register, use PSHUFB. */
913 tcg_out_vex_modrm(s, OPC_PUNPCKLBW, r, a, a);
914 a = r;
917 tcg_out_vex_modrm(s, OPC_PUNPCKLWD, r, a, a);
918 a = r;
921 tcg_out_vex_modrm(s, OPC_PSHUFD, r, 0, a);
926 tcg_out_vex_modrm(s, OPC_PUNPCKLQDQ, r, a, a);
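
The opcode sequence above widens one byte into a full-register broadcast by repeated self-interleave. A hedged SSE2-intrinsics analogue of the same idea (illustrative only, not QEMU code):

    #include <emmintrin.h>

    /* Sketch: broadcast the low byte of `a` to all 16 lanes, mirroring
       the PUNPCKLBW -> PUNPCKLWD -> PSHUFD sequence emitted above. */
    static __m128i dup_low_byte(__m128i a)
    {
        a = _mm_unpacklo_epi8(a, a);     /* PUNPCKLBW: byte -> word  */
        a = _mm_unpacklo_epi16(a, a);    /* PUNPCKLWD: word -> dword */
        return _mm_shuffle_epi32(a, 0);  /* PSHUFD: dword 0 everywhere */
    }
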
1045 /* Try a 7-byte pc-relative lea before the 10-byte movq. */
1152 * and stores use a 16-byte aligned offset. Validate that the
1197 * and stores use a 16-byte aligned offset. Validate that the
1378 /* AND with no high bits set can use a 32-bit operation. */
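
This fragment leans on x86-64's rule that 32-bit operations zero-extend their result into the full 64-bit register. A minimal sketch of the implied test (the helper name is hypothetical):

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: an AND mask with no bits set above bit 31 permits the
       shorter 32-bit encoding; the high half is zeroed either way. */
    static bool and_mask_fits_32bit_op(uint64_t mask)
    {
        return (mask >> 32) == 0;
    }
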
1424 /* Set SMALL to force a short forward branch. */
1688 /* If arg2 is a register, swap for LTU/GEU. */
1894 * The SysV i386 ABI for struct return places a reference as the
1901 * Pushing a garbage value back onto the stack is quickest.
1919 * decode and discard the duplicates in a single cycle.
1949 * but do allow a pair of 64-bit operations, i.e. MOVBEQ.
1958 * before we might need a scratch reg.
1960 * Even then, a scratch is only needed for l->raddr. Rather than expose
1961 * a general-purpose scratch when we don't actually know it's available,
2005 * Generate code for the slow path for a load at the end of the block
2027 * Generate code for the slow path for a store at the end of the block
2088 * In both cases, return a TCGLabelQemuLdst structure if the slow path
2327 * With 16-byte atomicity, a vector load is required.
2330 * Else we require a runtime test for alignment for VMOVDQA;
2455 * With 16-byte atomicity, a vector store is required.
2458 * Else we require a runtime test for alignment for VMOVDQA;
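
Both fragments describe picking VMOVDQA (which faults on unaligned addresses) only after a runtime alignment check. A hedged sketch of such a test in plain C:

    #include <stdbool.h>
    #include <stdint.h>

    /* Sketch: true when `p` is 16-byte aligned, i.e. safe for VMOVDQA. */
    static bool is_aligned_16(const void *p)
    {
        return ((uintptr_t)p & 15) == 0;
    }
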
2958 /* This is a 32-bit zero-extending right shift. */
3219 /* First merge the two 32-bit inputs to a single 64-bit element. */
3521 return C_O2_I3(a, d, 0, 1, r);
3527 return C_O2_I2(a, d, a, r);
3814 * We can emulate a small sign extend by performing an arithmetic
3815 * 32-bit shift and overwriting the high half of a 64-bit logical
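
The truncated comment describes composing a 64-bit arithmetic right shift from a 64-bit logical shift whose high half is then overwritten by a sign-filling 32-bit arithmetic shift. A scalar C sketch of that composition, assuming a shift count below 32:

    #include <stdint.h>

    /* Sketch: emulate sar64(x, c) for c < 32 with a 64-bit logical
       shift (low half) and a 32-bit arithmetic shift (high half). */
    static uint64_t sar64_emulated(uint64_t x, unsigned c)
    {
        uint64_t lo = (x >> c) & 0xffffffffu;    /* logical part */
        int32_t  hi = (int32_t)(x >> 32) >> c;   /* sign-filling */
        return lo | ((uint64_t)(uint32_t)hi << 32);
    }
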
3826 /* Otherwise we will need to use a compare vs 0 to produce
4233 /* Choose R12 because, as a base, it requires a SIB byte. */
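
The SIB observation generalizes: any base register whose low three encoding bits are 100b (RSP, R12) escapes to a SIB byte in ModRM. A hedged sketch of that predicate (name invented):

    #include <stdbool.h>

    /* Sketch: ModRM rm/base value 100b means "SIB byte follows". */
    static bool base_needs_sib(unsigned regno)
    {
        return (regno & 7) == 4;
    }
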
4256 * Return path for goto_ptr. Set return value to 0, à la exit_tb,
4339 /* We're expecting a 2-byte uleb128 encoded value. */
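
For the final fragment, a hedged sketch of decoding a uleb128 value known to occupy exactly two bytes (seven payload bits per byte, least-significant group first):

    #include <stdint.h>

    /* Sketch: decode a two-byte uleb128 value. */
    static uint32_t decode_uleb128_2(const uint8_t *p)
    {
        return (uint32_t)(p[0] & 0x7f) | ((uint32_t)(p[1] & 0x7f) << 7);
    }
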