/*--------------------------------------------------------------------*/
/*--- begin                                       guest_arm_toIR.c ---*/
/*--------------------------------------------------------------------*/

/* This file is part of Valgrind, a dynamic binary instrumentation
   framework.

   Copyright (C) 2004-2017 OpenWorks LLP
      info@open-works.net

   NEON support is
   Copyright (C) 2010-2017 Samsung Electronics
   contributed by Dmitry Zhurikhin and Kirill Batuzov
   This program is free software; you can redistribute it and/or
   modify it under the terms of the GNU General Public License as
   published by the Free Software Foundation; either version 2 of the
   License, or (at your option) any later version.

   This program is distributed in the hope that it will be useful, but
   WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   General Public License for more details.

   You should have received a copy of the GNU General Public License
   along with this program; if not, write to the Free Software
   Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA
   02110-1301, USA.

   The GNU General Public License is contained in the file COPYING.
*/

/* XXXX thumb to check:
   that all cases where putIRegT writes r15, we generate a jump.

   All uses of newTemp assign to an IRTemp and not a UInt

   For all thumb loads and stores, including VFP ones, new-ITSTATE is
   backed out before the memory op, and restored afterwards.  This
   needs to happen even after we go uncond.  (and for sure it doesn't
   happen for VFP loads/stores right now).

   VFP on thumb: check that we exclude all r13/r15 cases that we
   should.

   XXXX thumb to do: improve the ITSTATE-zeroing optimisation by
   taking into account the number of insns guarded by an IT.

   remove the nasty hack, in the spechelper, of looking for Or32(...,
   0xE0) in as the first arg to armg_calculate_condition, and instead
   use Slice44 as specified in comments in the spechelper.

   add specialisations for armg_calculate_flag_c and _v, as they are
   moderately often needed in Thumb code.

   Correctness: ITSTATE handling in Thumb SVCs is wrong.

   Correctness (obscure): in m_transtab, when invalidating code
   address ranges, invalidate up to 18 bytes after the end of the
   range.  This is because the ITSTATE optimisation at the top of
   _THUMB_WRK below analyses up to 18 bytes before the start of any
   given instruction, and so might depend on the invalidated area.
*/

/* Limitations, etc

   - pretty dodgy exception semantics for {LD,ST}Mxx and {LD,ST}RD.
     These instructions are non-restartable in the case where the
     transfer(s) fault.

   - SWP: the restart jump back is Ijk_Boring; it should be
     Ijk_NoRedir but that's expensive.  See comments on casLE() in
     guest_x86_toIR.c.
*/

/* "Special" instructions.

   This instruction decoder can decode four special instructions
   which mean nothing natively (are no-ops as far as regs/mem are
   concerned) but have meaning for supporting Valgrind.  A special
   instruction is flagged by a 16-byte preamble:

      E1A0C1EC E1A0C6EC E1A0CEEC E1A0C9EC
      (mov r12, r12, ROR #3;   mov r12, r12, ROR #13;
       mov r12, r12, ROR #29;  mov r12, r12, ROR #19)

   Following that, one of the following 3 are allowed
   (standard interpretation in parentheses):

      E18AA00A (orr r10,r10,r10)   R3 = client_request ( R4 )
      E18BB00B (orr r11,r11,r11)   R3 = guest_NRADDR
      E18CC00C (orr r12,r12,r12)   branch-and-link-to-noredir R4
      E1899009 (orr r9,r9,r9)      IR injection

   Any other bytes following the 16-byte preamble are illegal and
   constitute a failure in instruction decoding.  This all assumes
   that the preamble will never occur except in specific code
   fragments designed for Valgrind to catch.
*/

/* Translates ARM(v5) code to IR. */
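/* Illustrative sketch (not part of the original file): one way a
   decoder could recognise the 16-byte special-instruction preamble
   described above.  The four constants are exactly the words listed
   in the comment; the helper name and the word-by-word comparison
   loop are assumptions for illustration only, not the decoder's
   actual matching logic. */
#if 0
static Bool example_is_special_preamble ( const UChar* code )
{
   /* The four preamble words, as little-endian 32-bit values. */
   static const UInt preamble[4]
      = { 0xE1A0C1EC, 0xE1A0C6EC, 0xE1A0CEEC, 0xE1A0C9EC };
   UInt i;
   for (i = 0; i < 4; i++) {
      if (getUIntLittleEndianly(code + 4*i) != preamble[i])
         return False;
   }
   return True;
}
#endif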
#include "libvex_basictypes.h"
#include "libvex_ir.h"
#include "libvex.h"
#include "libvex_guest_arm.h"

#include "main_util.h"
#include "main_globals.h"
#include "guest_generic_bb_to_IR.h"
#include "guest_arm_defs.h"


/*------------------------------------------------------------*/
/*--- Globals                                              ---*/
/*------------------------------------------------------------*/

/* These are set at the start of the translation of an instruction,
   so that we don't have to pass them around endlessly.  CONST means
   does not change during translation of the instruction. */

/* CONST: what is the host's endianness?  This has to do with float
   vs double register accesses on VFP, but it's complex and not
   properly thought out. */
static VexEndness host_endness;

/* CONST: The guest address for the instruction currently being
   translated.  This is the real, "decoded" address (not subject to
   the CPSR.T kludge). */
static Addr32 guest_R15_curr_instr_notENC;

/* CONST, FOR ASSERTIONS ONLY.  Indicates whether currently processed
   insn is Thumb (True) or ARM (False). */
static Bool __curr_is_Thumb;

/* MOD: The IRSB* into which we're generating code. */
static IRSB* irsb;

/* These are to do with handling writes to r15.  They are initially
   set at the start of disInstr_ARM_WRK to indicate no update,
   possibly updated during the routine, and examined again at the end.
   If they have been set to indicate a r15 update then a jump is
   generated.  Note, "explicit" jumps (b, bx, etc) are generated
   directly, not using this mechanism -- this is intended to handle
   the implicit-style jumps resulting from (eg) assigning to r15 as
   the result of insns we wouldn't normally consider branchy. */

/* MOD.  Initially False; set to True iff abovementioned handling is
   required. */
static Bool r15written;

/* MOD.  Initially IRTemp_INVALID.  If the r15 branch to be generated
   is conditional, this holds the gating IRTemp :: Ity_I32.  If the
   branch to be generated is unconditional, this remains
   IRTemp_INVALID. */
static IRTemp r15guard; /* :: Ity_I32, 0 or 1 */

/* MOD.  Initially Ijk_Boring.  If an r15 branch is to be generated,
   this holds the jump kind. */
static IRJumpKind r15kind;


/*------------------------------------------------------------*/
/*--- Debugging output                                     ---*/
/*------------------------------------------------------------*/

#define DIP(format, args...)           \
   if (vex_traceflags & VEX_TRACE_FE)  \
      vex_printf(format, ## args)

#define DIS(buf, format, args...)      \
   if (vex_traceflags & VEX_TRACE_FE)  \
      vex_sprintf(buf, format, ## args)

#define ASSERT_IS_THUMB \
   do { vassert(__curr_is_Thumb); } while (0)

#define ASSERT_IS_ARM \
   do { vassert(! __curr_is_Thumb); } while (0)


/*------------------------------------------------------------*/
/*--- Helper bits and pieces for deconstructing the        ---*/
/*--- arm insn stream.                                     ---*/
/*------------------------------------------------------------*/

/* Do a little-endian load of a 32-bit word, regardless of the
   endianness of the underlying host. */
static inline UInt getUIntLittleEndianly ( const UChar* p )
{
   UInt w = 0;
   w = (w << 8) | p[3];
   w = (w << 8) | p[2];
   w = (w << 8) | p[1];
   w = (w << 8) | p[0];
   return w;
}
/* Do a little-endian load of a 16-bit word, regardless of the
   endianness of the underlying host. */
static inline UShort getUShortLittleEndianly ( const UChar* p )
{
   UShort w = 0;
   w = (w << 8) | p[1];
   w = (w << 8) | p[0];
   return w;
}

static UInt ROR32 ( UInt x, UInt sh ) {
   vassert(sh >= 0 && sh < 32);
   if (sh == 0)
      return x;
   else
      return (x << (32-sh)) | (x >> sh);
}

static Int popcount32 ( UInt x )
{
   Int res = 0, i;
   for (i = 0; i < 32; i++) {
      res += (x & 1);
      x >>= 1;
   }
   return res;
}

static UInt setbit32 ( UInt x, Int ix, UInt b )
{
   UInt mask = 1 << ix;
   x &= ~mask;
   x |= ((b << ix) & mask);
   return x;
}

#define BITS2(_b1,_b0) \
   (((_b1) << 1) | (_b0))

#define BITS3(_b2,_b1,_b0)  \
   (((_b2) << 2) | ((_b1) << 1) | (_b0))

#define BITS4(_b3,_b2,_b1,_b0)  \
   (((_b3) << 3) | ((_b2) << 2) | ((_b1) << 1) | (_b0))

#define BITS8(_b7,_b6,_b5,_b4,_b3,_b2,_b1,_b0)  \
   ((BITS4((_b7),(_b6),(_b5),(_b4)) << 4)  \
    | BITS4((_b3),(_b2),(_b1),(_b0)))

#define BITS5(_b4,_b3,_b2,_b1,_b0)  \
   (BITS8(0,0,0,(_b4),(_b3),(_b2),(_b1),(_b0)))
#define BITS6(_b5,_b4,_b3,_b2,_b1,_b0)  \
   (BITS8(0,0,(_b5),(_b4),(_b3),(_b2),(_b1),(_b0)))
#define BITS7(_b6,_b5,_b4,_b3,_b2,_b1,_b0)  \
   (BITS8(0,(_b6),(_b5),(_b4),(_b3),(_b2),(_b1),(_b0)))

#define BITS9(_b8,_b7,_b6,_b5,_b4,_b3,_b2,_b1,_b0)  \
   (((_b8) << 8)  \
    | BITS8((_b7),(_b6),(_b5),(_b4),(_b3),(_b2),(_b1),(_b0)))

#define BITS10(_b9,_b8,_b7,_b6,_b5,_b4,_b3,_b2,_b1,_b0)  \
   (((_b9) << 9) | ((_b8) << 8)  \
    | BITS8((_b7),(_b6),(_b5),(_b4),(_b3),(_b2),(_b1),(_b0)))

#define BITS11(_b10,_b9,_b8,_b7,_b6,_b5,_b4,_b3,_b2,_b1,_b0)  \
   ( ((_b10) << 10) | ((_b9) << 9) | ((_b8) << 8)  \
     | BITS8((_b7),(_b6),(_b5),(_b4),(_b3),(_b2),(_b1),(_b0)))

#define BITS12(_b11,_b10,_b9,_b8,_b7,_b6,_b5,_b4,_b3,_b2,_b1,_b0)  \
   ( ((_b11) << 11) | ((_b10) << 10) | ((_b9) << 9) | ((_b8) << 8)  \
     | BITS8((_b7),(_b6),(_b5),(_b4),(_b3),(_b2),(_b1),(_b0)))

/* produces _uint[_bMax:_bMin] */
#define SLICE_UInt(_uint,_bMax,_bMin)  \
   (( ((UInt)(_uint)) >> (_bMin))  \
    & (UInt)((1ULL << ((_bMax) - (_bMin) + 1)) - 1ULL))


/*------------------------------------------------------------*/
/*--- Helper bits and pieces for creating IR fragments.    ---*/
/*------------------------------------------------------------*/

static IRExpr* mkU64 ( ULong i )
{
   return IRExpr_Const(IRConst_U64(i));
}

static IRExpr* mkU32 ( UInt i )
{
   return IRExpr_Const(IRConst_U32(i));
}

static IRExpr* mkU8 ( UInt i )
{
   vassert(i < 256);
   return IRExpr_Const(IRConst_U8( (UChar)i ));
}

static IRExpr* mkexpr ( IRTemp tmp )
{
   return IRExpr_RdTmp(tmp);
}

static IRExpr* unop ( IROp op, IRExpr* a )
{
   return IRExpr_Unop(op, a);
}

static IRExpr* binop ( IROp op, IRExpr* a1, IRExpr* a2 )
{
   return IRExpr_Binop(op, a1, a2);
}

static IRExpr* triop ( IROp op, IRExpr* a1, IRExpr* a2, IRExpr* a3 )
{
   return IRExpr_Triop(op, a1, a2, a3);
}

static IRExpr* loadLE ( IRType ty, IRExpr* addr )
{
   return IRExpr_Load(Iend_LE, ty, addr);
}
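/* Illustrative sketch (not part of the original file): how the
   bit-slicing helpers above are typically combined to pull fields
   out of a fetched instruction word.  The field positions used here
   (cond in 31:28, Rn in 19:16, Rd in 15:12 of an ARM data-processing
   insn) follow the ARM ARM; the struct and function names are made
   up for this example. */
#if 0
typedef struct { UInt cond, rN, rD; } ExampleFields;

static ExampleFields example_slice_fields ( const UChar* guest_code )
{
   ExampleFields f;
   UInt insn = getUIntLittleEndianly(guest_code);
   f.cond = SLICE_UInt(insn, 31, 28);   /* condition code */
   f.rN   = SLICE_UInt(insn, 19, 16);   /* first operand reg */
   f.rD   = SLICE_UInt(insn, 15, 12);   /* destination reg */
   return f;
}
#endif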
/* Add a statement to the list held by "irbb". */
static void stmt ( IRStmt* st )
{
   addStmtToIRSB( irsb, st );
}

static void assign ( IRTemp dst, IRExpr* e )
{
   stmt( IRStmt_WrTmp(dst, e) );
}

static void storeLE ( IRExpr* addr, IRExpr* data )
{
   stmt( IRStmt_Store(Iend_LE, addr, data) );
}

static void storeGuardedLE ( IRExpr* addr, IRExpr* data, IRTemp guardT )
{
   if (guardT == IRTemp_INVALID) {
      /* unconditional */
      storeLE(addr, data);
   } else {
      stmt( IRStmt_StoreG(Iend_LE, addr, data,
                          binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0))) );
   }
}

static void loadGuardedLE ( IRTemp dst, IRLoadGOp cvt,
                            IRExpr* addr, IRExpr* alt,
                            IRTemp guardT /* :: Ity_I32, 0 or 1 */ )
{
   if (guardT == IRTemp_INVALID) {
      /* unconditional */
      IRExpr* loaded = NULL;
      switch (cvt) {
         case ILGop_Ident32:
            loaded = loadLE(Ity_I32, addr); break;
         case ILGop_8Uto32:
            loaded = unop(Iop_8Uto32, loadLE(Ity_I8, addr)); break;
         case ILGop_8Sto32:
            loaded = unop(Iop_8Sto32, loadLE(Ity_I8, addr)); break;
         case ILGop_16Uto32:
            loaded = unop(Iop_16Uto32, loadLE(Ity_I16, addr)); break;
         case ILGop_16Sto32:
            loaded = unop(Iop_16Sto32, loadLE(Ity_I16, addr)); break;
         default:
            vassert(0);
      }
      vassert(loaded != NULL);
      assign(dst, loaded);
   } else {
      /* Generate a guarded load into 'dst', but apply 'cvt' to the
         loaded data before putting the data in 'dst'.  If the load
         does not take place, 'alt' is placed directly in 'dst'. */
      stmt( IRStmt_LoadG(Iend_LE, cvt, dst, addr, alt,
                         binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0))) );
   }
}

/* Generate a new temporary of the given type. */
static IRTemp newTemp ( IRType ty )
{
   vassert(isPlausibleIRType(ty));
   return newIRTemp( irsb->tyenv, ty );
}

/* Produces a value in 0 .. 3, which is encoded as per the type
   IRRoundingMode. */
static IRExpr* /* :: Ity_I32 */ get_FAKE_roundingmode ( void )
{
   return mkU32(Irrm_NEAREST);
}

/* Generate an expression for SRC rotated right by ROT. */
static IRExpr* genROR32( IRTemp src, Int rot )
{
   vassert(rot >= 0 && rot < 32);
   if (rot == 0)
      return mkexpr(src);
   return
      binop(Iop_Or32,
            binop(Iop_Shl32, mkexpr(src), mkU8(32 - rot)),
            binop(Iop_Shr32, mkexpr(src), mkU8(rot)));
}

static IRExpr* mkU128 ( ULong i )
{
   return binop(Iop_64HLtoV128, mkU64(i), mkU64(i));
}

/* Generate a 4-aligned version of the given expression if the given
   condition is true.  Else return it unchanged. */
static IRExpr* align4if ( IRExpr* e, Bool b )
{
   if (b)
      return binop(Iop_And32, e, mkU32(~3));
   else
      return e;
}
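/* Illustrative sketch (not part of the original file): how a
   conditional byte load might be lowered with loadGuardedLE above.
   A zero-extending 8-bit load is requested via ILGop_8Uto32; the
   current register value is supplied as the alternative, so if the
   guard is zero at run time the destination is left unchanged.  The
   function name and the exact register plumbing are made up for this
   example. */
#if 0
static void example_conditional_LDRB ( UInt rD, IRExpr* ea, IRTemp condT )
{
   IRTemp newv = newTemp(Ity_I32);
   /* If condT is 0, fall back to the current value of rD. */
   loadGuardedLE( newv, ILGop_8Uto32, ea, llGetIReg(rD), condT );
   llPutIReg( rD, mkexpr(newv) );
}
#endif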
/*------------------------------------------------------------*/
/*--- Helpers for accessing guest registers.               ---*/
/*------------------------------------------------------------*/

#define OFFB_R0       offsetof(VexGuestARMState,guest_R0)
#define OFFB_R1       offsetof(VexGuestARMState,guest_R1)
#define OFFB_R2       offsetof(VexGuestARMState,guest_R2)
#define OFFB_R3       offsetof(VexGuestARMState,guest_R3)
#define OFFB_R4       offsetof(VexGuestARMState,guest_R4)
#define OFFB_R5       offsetof(VexGuestARMState,guest_R5)
#define OFFB_R6       offsetof(VexGuestARMState,guest_R6)
#define OFFB_R7       offsetof(VexGuestARMState,guest_R7)
#define OFFB_R8       offsetof(VexGuestARMState,guest_R8)
#define OFFB_R9       offsetof(VexGuestARMState,guest_R9)
#define OFFB_R10      offsetof(VexGuestARMState,guest_R10)
#define OFFB_R11      offsetof(VexGuestARMState,guest_R11)
#define OFFB_R12      offsetof(VexGuestARMState,guest_R12)
#define OFFB_R13      offsetof(VexGuestARMState,guest_R13)
#define OFFB_R14      offsetof(VexGuestARMState,guest_R14)
#define OFFB_R15T     offsetof(VexGuestARMState,guest_R15T)

#define OFFB_CC_OP    offsetof(VexGuestARMState,guest_CC_OP)
#define OFFB_CC_DEP1  offsetof(VexGuestARMState,guest_CC_DEP1)
#define OFFB_CC_DEP2  offsetof(VexGuestARMState,guest_CC_DEP2)
#define OFFB_CC_NDEP  offsetof(VexGuestARMState,guest_CC_NDEP)
#define OFFB_NRADDR   offsetof(VexGuestARMState,guest_NRADDR)

#define OFFB_D0       offsetof(VexGuestARMState,guest_D0)
#define OFFB_D1       offsetof(VexGuestARMState,guest_D1)
#define OFFB_D2       offsetof(VexGuestARMState,guest_D2)
#define OFFB_D3       offsetof(VexGuestARMState,guest_D3)
#define OFFB_D4       offsetof(VexGuestARMState,guest_D4)
#define OFFB_D5       offsetof(VexGuestARMState,guest_D5)
#define OFFB_D6       offsetof(VexGuestARMState,guest_D6)
#define OFFB_D7       offsetof(VexGuestARMState,guest_D7)
#define OFFB_D8       offsetof(VexGuestARMState,guest_D8)
#define OFFB_D9       offsetof(VexGuestARMState,guest_D9)
#define OFFB_D10      offsetof(VexGuestARMState,guest_D10)
#define OFFB_D11      offsetof(VexGuestARMState,guest_D11)
#define OFFB_D12      offsetof(VexGuestARMState,guest_D12)
#define OFFB_D13      offsetof(VexGuestARMState,guest_D13)
#define OFFB_D14      offsetof(VexGuestARMState,guest_D14)
#define OFFB_D15      offsetof(VexGuestARMState,guest_D15)
#define OFFB_D16      offsetof(VexGuestARMState,guest_D16)
#define OFFB_D17      offsetof(VexGuestARMState,guest_D17)
#define OFFB_D18      offsetof(VexGuestARMState,guest_D18)
#define OFFB_D19      offsetof(VexGuestARMState,guest_D19)
#define OFFB_D20      offsetof(VexGuestARMState,guest_D20)
#define OFFB_D21      offsetof(VexGuestARMState,guest_D21)
#define OFFB_D22      offsetof(VexGuestARMState,guest_D22)
#define OFFB_D23      offsetof(VexGuestARMState,guest_D23)
#define OFFB_D24      offsetof(VexGuestARMState,guest_D24)
#define OFFB_D25      offsetof(VexGuestARMState,guest_D25)
#define OFFB_D26      offsetof(VexGuestARMState,guest_D26)
#define OFFB_D27      offsetof(VexGuestARMState,guest_D27)
#define OFFB_D28      offsetof(VexGuestARMState,guest_D28)
#define OFFB_D29      offsetof(VexGuestARMState,guest_D29)
#define OFFB_D30      offsetof(VexGuestARMState,guest_D30)
#define OFFB_D31      offsetof(VexGuestARMState,guest_D31)

#define OFFB_FPSCR    offsetof(VexGuestARMState,guest_FPSCR)
#define OFFB_TPIDRURO offsetof(VexGuestARMState,guest_TPIDRURO)
#define OFFB_ITSTATE  offsetof(VexGuestARMState,guest_ITSTATE)
#define OFFB_QFLAG32  offsetof(VexGuestARMState,guest_QFLAG32)
#define OFFB_GEFLAG0  offsetof(VexGuestARMState,guest_GEFLAG0)
#define OFFB_GEFLAG1  offsetof(VexGuestARMState,guest_GEFLAG1)
#define OFFB_GEFLAG2  offsetof(VexGuestARMState,guest_GEFLAG2)
#define OFFB_GEFLAG3  offsetof(VexGuestARMState,guest_GEFLAG3)

#define OFFB_CMSTART  offsetof(VexGuestARMState,guest_CMSTART)
#define OFFB_CMLEN    offsetof(VexGuestARMState,guest_CMLEN)
/* ---------------- Integer registers ---------------- */

static Int integerGuestRegOffset ( UInt iregNo )
{
   /* Do we care about endianness here?  We do if sub-parts of integer
      registers are accessed, but I don't think that ever happens on
      ARM. */
   switch (iregNo) {
      case 0:  return OFFB_R0;
      case 1:  return OFFB_R1;
      case 2:  return OFFB_R2;
      case 3:  return OFFB_R3;
      case 4:  return OFFB_R4;
      case 5:  return OFFB_R5;
      case 6:  return OFFB_R6;
      case 7:  return OFFB_R7;
      case 8:  return OFFB_R8;
      case 9:  return OFFB_R9;
      case 10: return OFFB_R10;
      case 11: return OFFB_R11;
      case 12: return OFFB_R12;
      case 13: return OFFB_R13;
      case 14: return OFFB_R14;
      case 15: return OFFB_R15T;
      default: vassert(0);
   }
}

/* Plain ("low level") read from a reg; no +8 offset magic for r15. */
static IRExpr* llGetIReg ( UInt iregNo )
{
   vassert(iregNo < 16);
   return IRExpr_Get( integerGuestRegOffset(iregNo), Ity_I32 );
}

/* Architected read from a reg in ARM mode.  This automagically adds 8
   to all reads of r15. */
static IRExpr* getIRegA ( UInt iregNo )
{
   IRExpr* e;
   ASSERT_IS_ARM;
   vassert(iregNo < 16);
   if (iregNo == 15) {
      /* If asked for r15, don't read the guest state value, as that
         may not be up to date in the case where loop unrolling has
         happened, because the first insn's write to the block is
         omitted; hence in the 2nd and subsequent unrollings we don't
         have a correct value in guest r15.  Instead produce the
         constant that we know would be produced at this point. */
      vassert(0 == (guest_R15_curr_instr_notENC & 3));
      e = mkU32(guest_R15_curr_instr_notENC + 8);
   } else {
      e = IRExpr_Get( integerGuestRegOffset(iregNo), Ity_I32 );
   }
   return e;
}

/* Architected read from a reg in Thumb mode.  This automagically adds
   4 to all reads of r15. */
static IRExpr* getIRegT ( UInt iregNo )
{
   IRExpr* e;
   ASSERT_IS_THUMB;
   vassert(iregNo < 16);
   if (iregNo == 15) {
      /* Ditto comment in getIReg. */
      vassert(0 == (guest_R15_curr_instr_notENC & 1));
      e = mkU32(guest_R15_curr_instr_notENC + 4);
   } else {
      e = IRExpr_Get( integerGuestRegOffset(iregNo), Ity_I32 );
   }
   return e;
}
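/* Illustrative sketch (not part of the original file): what the +8/+4
   adjustments above mean concretely.  On ARM, a read of r15 observes
   the address of the current instruction plus 8; in Thumb mode, plus
   4.  So for an ARM "mov r0, pc" at address 0x8000, r0 receives
   0x8008.  The helper below just restates that arithmetic. */
#if 0
static UInt example_architected_pc_value ( UInt instr_addr, Bool isThumb )
{
   /* ARM mode: PC reads as instr+8; Thumb mode: instr+4. */
   return isThumb ? instr_addr + 4 : instr_addr + 8;
}
#endif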
/* Plain ("low level") write to a reg; no jump or alignment magic for
   r15. */
static void llPutIReg ( UInt iregNo, IRExpr* e )
{
   vassert(iregNo < 16);
   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I32);
   stmt( IRStmt_Put(integerGuestRegOffset(iregNo), e) );
}

/* Architected write to an integer register in ARM mode.  If it is to
   r15, record info so at the end of this insn's translation, a branch
   to it can be made.  Also handles conditional writes to the
   register: if guardT == IRTemp_INVALID then the write is
   unconditional.  If writing r15, also 4-align it. */
static void putIRegA ( UInt       iregNo,
                       IRExpr*    e,
                       IRTemp     guardT /* :: Ity_I32, 0 or 1 */,
                       IRJumpKind jk /* if a jump is generated */ )
{
   /* if writing r15, force e to be 4-aligned. */
   // INTERWORKING FIXME.  this needs to be relaxed so that
   // puts caused by LDMxx which load r15 interwork right.
   // but is no aligned too relaxed?
   //if (iregNo == 15)
   //   e = binop(Iop_And32, e, mkU32(~3));
   ASSERT_IS_ARM;
   /* So, generate either an unconditional or a conditional write to
      the reg. */
   if (guardT == IRTemp_INVALID) {
      /* unconditional write */
      llPutIReg( iregNo, e );
   } else {
      llPutIReg( iregNo,
                 IRExpr_ITE( binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)),
                             e, llGetIReg(iregNo) ));
   }
   if (iregNo == 15) {
      // assert against competing r15 updates.  Shouldn't
      // happen; should be ruled out by the instr matching
      // logic.
      vassert(r15written == False);
      vassert(r15guard   == IRTemp_INVALID);
      vassert(r15kind    == Ijk_Boring);
      r15written = True;
      r15guard   = guardT;
      r15kind    = jk;
   }
}

/* Architected write to an integer register in Thumb mode.  Writes to
   r15 are not allowed.  Handles conditional writes to the register:
   if guardT == IRTemp_INVALID then the write is unconditional. */
static void putIRegT ( UInt   iregNo,
                       IRExpr* e,
                       IRTemp guardT /* :: Ity_I32, 0 or 1 */ )
{
   /* So, generate either an unconditional or a conditional write to
      the reg. */
   ASSERT_IS_THUMB;
   vassert(iregNo >= 0 && iregNo <= 14);
   if (guardT == IRTemp_INVALID) {
      /* unconditional write */
      llPutIReg( iregNo, e );
   } else {
      llPutIReg( iregNo,
                 IRExpr_ITE( binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)),
                             e, llGetIReg(iregNo) ));
   }
}

/* Thumb16 and Thumb32 only.  Returns true if reg is 13 or 15.
   Implements the BadReg predicate in the ARM ARM. */
static Bool isBadRegT ( UInt r )
{
   vassert(r <= 15);
   ASSERT_IS_THUMB;
   return r == 13 || r == 15;
}

/* ---------------- Double registers ---------------- */

static Int doubleGuestRegOffset ( UInt dregNo )
{
   /* Do we care about endianness here?  Probably do if we ever get
      into the situation of dealing with the single-precision VFP
      registers. */
   switch (dregNo) {
      case 0:  return OFFB_D0;
      case 1:  return OFFB_D1;
      case 2:  return OFFB_D2;
      case 3:  return OFFB_D3;
      case 4:  return OFFB_D4;
      case 5:  return OFFB_D5;
      case 6:  return OFFB_D6;
      case 7:  return OFFB_D7;
      case 8:  return OFFB_D8;
      case 9:  return OFFB_D9;
      case 10: return OFFB_D10;
      case 11: return OFFB_D11;
      case 12: return OFFB_D12;
      case 13: return OFFB_D13;
      case 14: return OFFB_D14;
      case 15: return OFFB_D15;
      case 16: return OFFB_D16;
      case 17: return OFFB_D17;
      case 18: return OFFB_D18;
      case 19: return OFFB_D19;
      case 20: return OFFB_D20;
      case 21: return OFFB_D21;
      case 22: return OFFB_D22;
      case 23: return OFFB_D23;
      case 24: return OFFB_D24;
      case 25: return OFFB_D25;
      case 26: return OFFB_D26;
      case 27: return OFFB_D27;
      case 28: return OFFB_D28;
      case 29: return OFFB_D29;
      case 30: return OFFB_D30;
      case 31: return OFFB_D31;
      default: vassert(0);
   }
}

/* Plain ("low level") read from a VFP Dreg. */
static IRExpr* llGetDReg ( UInt dregNo )
{
   vassert(dregNo < 32);
   return IRExpr_Get( doubleGuestRegOffset(dregNo), Ity_F64 );
}

/* Architected read from a VFP Dreg. */
static IRExpr* getDReg ( UInt dregNo )
{
   return llGetDReg( dregNo );
}

/* Plain ("low level") write to a VFP Dreg. */
static void llPutDReg ( UInt dregNo, IRExpr* e )
{
   vassert(dregNo < 32);
   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_F64);
   stmt( IRStmt_Put(doubleGuestRegOffset(dregNo), e) );
}

/* Architected write to a VFP Dreg.  Handles conditional writes to the
   register: if guardT == IRTemp_INVALID then the write is
   unconditional. */
static void putDReg ( UInt    dregNo,
                      IRExpr* e,
                      IRTemp  guardT /* :: Ity_I32, 0 or 1 */)
{
   /* So, generate either an unconditional or a conditional write to
      the reg. */
   if (guardT == IRTemp_INVALID) {
      /* unconditional write */
      llPutDReg( dregNo, e );
   } else {
      llPutDReg( dregNo,
                 IRExpr_ITE( binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)),
                             e, llGetDReg(dregNo) ));
   }
}

/* And now exactly the same stuff all over again, but this time
   taking/returning I64 rather than F64, to support 64-bit Neon
   ops. */

/* Plain ("low level") read from a Neon Integer Dreg. */
static IRExpr* llGetDRegI64 ( UInt dregNo )
{
   vassert(dregNo < 32);
   return IRExpr_Get( doubleGuestRegOffset(dregNo), Ity_I64 );
}
/* Architected read from a Neon Integer Dreg. */
static IRExpr* getDRegI64 ( UInt dregNo )
{
   return llGetDRegI64( dregNo );
}

/* Plain ("low level") write to a Neon Integer Dreg. */
static void llPutDRegI64 ( UInt dregNo, IRExpr* e )
{
   vassert(dregNo < 32);
   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I64);
   stmt( IRStmt_Put(doubleGuestRegOffset(dregNo), e) );
}

/* Architected write to a Neon Integer Dreg.  Handles conditional
   writes to the register: if guardT == IRTemp_INVALID then the write
   is unconditional. */
static void putDRegI64 ( UInt    dregNo,
                         IRExpr* e,
                         IRTemp  guardT /* :: Ity_I32, 0 or 1 */)
{
   /* So, generate either an unconditional or a conditional write to
      the reg. */
   if (guardT == IRTemp_INVALID) {
      /* unconditional write */
      llPutDRegI64( dregNo, e );
   } else {
      llPutDRegI64( dregNo,
                    IRExpr_ITE( binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)),
                                e, llGetDRegI64(dregNo) ));
   }
}

/* ---------------- Quad registers ---------------- */

static Int quadGuestRegOffset ( UInt qregNo )
{
   /* Do we care about endianness here?  Probably do if we ever get
      into the situation of dealing with the 64 bit Neon registers. */
   switch (qregNo) {
      case 0:  return OFFB_D0;
      case 1:  return OFFB_D2;
      case 2:  return OFFB_D4;
      case 3:  return OFFB_D6;
      case 4:  return OFFB_D8;
      case 5:  return OFFB_D10;
      case 6:  return OFFB_D12;
      case 7:  return OFFB_D14;
      case 8:  return OFFB_D16;
      case 9:  return OFFB_D18;
      case 10: return OFFB_D20;
      case 11: return OFFB_D22;
      case 12: return OFFB_D24;
      case 13: return OFFB_D26;
      case 14: return OFFB_D28;
      case 15: return OFFB_D30;
      default: vassert(0);
   }
}

/* Plain ("low level") read from a Neon Qreg. */
static IRExpr* llGetQReg ( UInt qregNo )
{
   vassert(qregNo < 16);
   return IRExpr_Get( quadGuestRegOffset(qregNo), Ity_V128 );
}

/* Architected read from a Neon Qreg. */
static IRExpr* getQReg ( UInt qregNo )
{
   return llGetQReg( qregNo );
}

/* Plain ("low level") write to a Neon Qreg. */
static void llPutQReg ( UInt qregNo, IRExpr* e )
{
   vassert(qregNo < 16);
   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_V128);
   stmt( IRStmt_Put(quadGuestRegOffset(qregNo), e) );
}

/* Architected write to a Neon Qreg.  Handles conditional writes to the
   register: if guardT == IRTemp_INVALID then the write is
   unconditional. */
static void putQReg ( UInt    qregNo,
                      IRExpr* e,
                      IRTemp  guardT /* :: Ity_I32, 0 or 1 */)
{
   /* So, generate either an unconditional or a conditional write to
      the reg. */
   if (guardT == IRTemp_INVALID) {
      /* unconditional write */
      llPutQReg( qregNo, e );
   } else {
      llPutQReg( qregNo,
                 IRExpr_ITE( binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)),
                             e, llGetQReg(qregNo) ));
   }
}

/* ---------------- Float registers ---------------- */

static Int floatGuestRegOffset ( UInt fregNo )
{
   /* Start with the offset of the containing double, and then correct
      for endianness.  Actually this is completely bogus and needs
      careful thought. */
   Int off;
   /* NB! Limit is 64, not 32, because we might be pulling F32 bits
      out of SIMD registers, and there are 16 SIMD registers each of
      128 bits (4 x F32). */
   vassert(fregNo < 64);
   off = doubleGuestRegOffset(fregNo >> 1);
   if (host_endness == VexEndnessLE) {
      if (fregNo & 1)
         off += 4;
   } else {
      vassert(0);
   }
   return off;
}

/* Plain ("low level") read from a VFP Freg. */
static IRExpr* llGetFReg ( UInt fregNo )
{
   vassert(fregNo < 32);
   return IRExpr_Get( floatGuestRegOffset(fregNo), Ity_F32 );
}

static IRExpr* llGetFReg_up_to_64 ( UInt fregNo )
{
   vassert(fregNo < 64);
   return IRExpr_Get( floatGuestRegOffset(fregNo), Ity_F32 );
}
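/* Illustrative sketch (not part of the original file): the register
   aliasing that the offset functions above encode.  Qn overlays
   D(2n) and D(2n+1), and (on a little-endian host, the only case the
   code above supports) S(2n) and S(2n+1) are the low and high halves
   of Dn -- hence quadGuestRegOffset(q) returning the offset of D(2q),
   and floatGuestRegOffset adding 4 for odd-numbered Fregs. */
#if 0
static void example_register_aliasing ( void )
{
   UInt q;
   for (q = 0; q < 16; q++) {
      /* Q register q occupies the same guest-state bytes as the
         pair of D registers 2q and 2q+1. */
      vassert( quadGuestRegOffset(q) == doubleGuestRegOffset(2*q) );
      /* And S register 2d starts where D register d starts. */
      vassert( floatGuestRegOffset(2*q) == doubleGuestRegOffset(q) );
   }
}
#endif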
/* Architected read from a VFP Freg. */
static IRExpr* getFReg ( UInt fregNo )
{
   return llGetFReg( fregNo );
}

/* Plain ("low level") write to a VFP Freg. */
static void llPutFReg ( UInt fregNo, IRExpr* e )
{
   vassert(fregNo < 32);
   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_F32);
   stmt( IRStmt_Put(floatGuestRegOffset(fregNo), e) );
}

static void llPutFReg_up_to_64 ( UInt fregNo, IRExpr* e )
{
   vassert(fregNo < 64);
   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_F32);
   stmt( IRStmt_Put(floatGuestRegOffset(fregNo), e) );
}

/* Architected write to a VFP Freg.  Handles conditional writes to the
   register: if guardT == IRTemp_INVALID then the write is
   unconditional. */
static void putFReg ( UInt    fregNo,
                      IRExpr* e,
                      IRTemp  guardT /* :: Ity_I32, 0 or 1 */)
{
   /* So, generate either an unconditional or a conditional write to
      the reg. */
   if (guardT == IRTemp_INVALID) {
      /* unconditional write */
      llPutFReg( fregNo, e );
   } else {
      llPutFReg( fregNo,
                 IRExpr_ITE( binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)),
                             e, llGetFReg(fregNo) ));
   }
}

/* ---------------- Misc registers ---------------- */

static void putMiscReg32 ( UInt    gsoffset,
                           IRExpr* e, /* :: Ity_I32 */
                           IRTemp  guardT /* :: Ity_I32, 0 or 1 */)
{
   switch (gsoffset) {
      case OFFB_FPSCR:   break;
      case OFFB_QFLAG32: break;
      case OFFB_GEFLAG0: break;
      case OFFB_GEFLAG1: break;
      case OFFB_GEFLAG2: break;
      case OFFB_GEFLAG3: break;
      default: vassert(0); /* awaiting more cases */
   }
   vassert(typeOfIRExpr(irsb->tyenv, e) == Ity_I32);
   if (guardT == IRTemp_INVALID) {
      /* unconditional write */
      stmt(IRStmt_Put(gsoffset, e));
   } else {
      stmt(IRStmt_Put(
         gsoffset,
         IRExpr_ITE( binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)),
                     e, IRExpr_Get(gsoffset, Ity_I32) )
      ));
   }
}

static IRTemp get_ITSTATE ( void )
{
   ASSERT_IS_THUMB;
   IRTemp t = newTemp(Ity_I32);
   assign(t, IRExpr_Get( OFFB_ITSTATE, Ity_I32));
   return t;
}

static void put_ITSTATE ( IRTemp t )
{
   ASSERT_IS_THUMB;
   stmt( IRStmt_Put( OFFB_ITSTATE, mkexpr(t)) );
}

static IRTemp get_QFLAG32 ( void )
{
   IRTemp t = newTemp(Ity_I32);
   assign(t, IRExpr_Get( OFFB_QFLAG32, Ity_I32));
   return t;
}

static void put_QFLAG32 ( IRTemp t, IRTemp condT )
{
   putMiscReg32( OFFB_QFLAG32, mkexpr(t), condT );
}

/* Stickily set the 'Q' flag (APSR bit 27) of the APSR (Application
   Program Status Register) to indicate that overflow or saturation
   occurred.  Nb: t must be zero to denote no saturation, and any
   nonzero value to indicate saturation. */
static void or_into_QFLAG32 ( IRExpr* e, IRTemp condT )
{
   IRTemp old = get_QFLAG32();
   IRTemp nyu = newTemp(Ity_I32);
   assign(nyu, binop(Iop_Or32, mkexpr(old), e) );
   put_QFLAG32(nyu, condT);
}
/* Generate code to set APSR.GE[flagNo].  Each fn call sets 1 bit.
   flagNo: which flag bit to set [3...0]
   lowbits_to_ignore:  0 = look at all 32 bits
                       8 = look at top 24 bits only
                      16 = look at top 16 bits only
                      31 = look at the top bit only
   e: input value to be evaluated.
   The new value is taken from 'e' with the lowest 'lowbits_to_ignore'
   masked out.  If the resulting value is zero then the GE flag is
   set to 0; any other value sets the flag to 1. */
static void put_GEFLAG32 ( Int flagNo,            /* 0, 1, 2 or 3 */
                           Int lowbits_to_ignore, /* 0, 8, 16 or 31 */
                           IRExpr* e,             /* Ity_I32 */
                           IRTemp condT )
{
   vassert( flagNo >= 0 && flagNo <= 3 );
   vassert( lowbits_to_ignore == 0  || lowbits_to_ignore == 8
            || lowbits_to_ignore == 16 || lowbits_to_ignore == 31 );
   IRTemp masked = newTemp(Ity_I32);
   assign(masked, binop(Iop_Shr32, e, mkU8(lowbits_to_ignore)));

   switch (flagNo) {
      case 0: putMiscReg32(OFFB_GEFLAG0, mkexpr(masked), condT); break;
      case 1: putMiscReg32(OFFB_GEFLAG1, mkexpr(masked), condT); break;
      case 2: putMiscReg32(OFFB_GEFLAG2, mkexpr(masked), condT); break;
      case 3: putMiscReg32(OFFB_GEFLAG3, mkexpr(masked), condT); break;
      default: vassert(0);
   }
}

/* Return the (32-bit, zero-or-nonzero representation scheme) of the
   specified GE flag. */
static IRExpr* get_GEFLAG32( Int flagNo /* 0, 1, 2, 3 */ )
{
   switch (flagNo) {
      case 0: return IRExpr_Get( OFFB_GEFLAG0, Ity_I32 );
      case 1: return IRExpr_Get( OFFB_GEFLAG1, Ity_I32 );
      case 2: return IRExpr_Get( OFFB_GEFLAG2, Ity_I32 );
      case 3: return IRExpr_Get( OFFB_GEFLAG3, Ity_I32 );
      default: vassert(0);
   }
}

/* Set all 4 GE flags from the given 32-bit value as follows: GE 3 and
   2 are set from bit 31 of the value, and GE 1 and 0 are set from bit
   15 of the value.  All other bits are ignored. */
static void set_GE_32_10_from_bits_31_15 ( IRTemp t32, IRTemp condT )
{
   IRTemp ge10 = newTemp(Ity_I32);
   IRTemp ge32 = newTemp(Ity_I32);
   assign(ge10, binop(Iop_And32, mkexpr(t32), mkU32(0x00008000)));
   assign(ge32, binop(Iop_And32, mkexpr(t32), mkU32(0x80000000)));
   put_GEFLAG32( 0, 0, mkexpr(ge10), condT );
   put_GEFLAG32( 1, 0, mkexpr(ge10), condT );
   put_GEFLAG32( 2, 0, mkexpr(ge32), condT );
   put_GEFLAG32( 3, 0, mkexpr(ge32), condT );
}

/* Set all 4 GE flags from the given 32-bit value as follows: GE 3
   from bit 31, GE 2 from bit 23, GE 1 from bit 15, and GE0 from bit
   7.  All other bits are ignored. */
static void set_GE_3_2_1_0_from_bits_31_23_15_7 ( IRTemp t32, IRTemp condT )
{
   IRTemp ge0 = newTemp(Ity_I32);
   IRTemp ge1 = newTemp(Ity_I32);
   IRTemp ge2 = newTemp(Ity_I32);
   IRTemp ge3 = newTemp(Ity_I32);
   assign(ge0, binop(Iop_And32, mkexpr(t32), mkU32(0x00000080)));
   assign(ge1, binop(Iop_And32, mkexpr(t32), mkU32(0x00008000)));
   assign(ge2, binop(Iop_And32, mkexpr(t32), mkU32(0x00800000)));
   assign(ge3, binop(Iop_And32, mkexpr(t32), mkU32(0x80000000)));
   put_GEFLAG32( 0, 0, mkexpr(ge0), condT );
   put_GEFLAG32( 1, 0, mkexpr(ge1), condT );
   put_GEFLAG32( 2, 0, mkexpr(ge2), condT );
   put_GEFLAG32( 3, 0, mkexpr(ge3), condT );
}

/* ---------------- FPSCR stuff ---------------- */

/* Generate IR to get hold of the rounding mode bits in FPSCR, and
   convert them to IR format.  Bind the final result to the returned
   temp. */
static IRTemp /* :: Ity_I32 */ mk_get_IR_rounding_mode ( void )
{
   /* The ARMvfp encoding for rounding mode bits is:
         00  to nearest
         01  to +infinity
         10  to -infinity
         11  to zero
      We need to convert that to the IR encoding:
         00  to nearest (the default)
         10  to +infinity
         01  to -infinity
         11  to zero
      Which can be done by swapping bits 0 and 1.
      The rmode bits are at 23:22 in FPSCR.
   */
   IRTemp armEncd = newTemp(Ity_I32);
   IRTemp swapped = newTemp(Ity_I32);
   /* Fish FPSCR[23:22] out, and slide to bottom.  Doesn't matter that
      we don't zero out bits 24 and above, since the assignment to
      'swapped' will mask them out anyway. */
   assign(armEncd,
          binop(Iop_Shr32, IRExpr_Get(OFFB_FPSCR, Ity_I32), mkU8(22)));
   /* Now swap them. */
   assign(swapped,
          binop(Iop_Or32,
                binop(Iop_And32,
                      binop(Iop_Shl32, mkexpr(armEncd), mkU8(1)),
                      mkU32(2)),
                binop(Iop_And32,
                      binop(Iop_Shr32, mkexpr(armEncd), mkU8(1)),
                      mkU32(1))
         ));
   return swapped;
}
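/* Illustrative sketch (not part of the original file): the
   ARM-to-IR rounding-mode conversion above, restated on plain
   integers.  Swapping bits 0 and 1 maps the FPSCR encoding
   {0,1,2,3} = {nearest,+inf,-inf,zero} onto the IR encoding
   {0,2,1,3} for the same modes.  The function name is made up. */
#if 0
static UInt example_fpscr_rm_to_ir_rm ( UInt fpscr )
{
   UInt armEncd = (fpscr >> 22) & 3;                   /* FPSCR[23:22] */
   return ((armEncd << 1) & 2) | ((armEncd >> 1) & 1); /* swap bits 0,1 */
}
#endif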
/*------------------------------------------------------------*/
/*--- Helpers for flag handling and conditional insns      ---*/
/*------------------------------------------------------------*/

static const HChar* name_ARMCondcode ( ARMCondcode cond )
{
   switch (cond) {
      case ARMCondEQ:  return "{eq}";
      case ARMCondNE:  return "{ne}";
      case ARMCondHS:  return "{hs}"; // or 'cs'
      case ARMCondLO:  return "{lo}"; // or 'cc'
      case ARMCondMI:  return "{mi}";
      case ARMCondPL:  return "{pl}";
      case ARMCondVS:  return "{vs}";
      case ARMCondVC:  return "{vc}";
      case ARMCondHI:  return "{hi}";
      case ARMCondLS:  return "{ls}";
      case ARMCondGE:  return "{ge}";
      case ARMCondLT:  return "{lt}";
      case ARMCondGT:  return "{gt}";
      case ARMCondLE:  return "{le}";
      case ARMCondAL:  return "";     // {al}: is the default
      case ARMCondNV:  return "{nv}";
      default: vpanic("name_ARMCondcode");
   }
}

/* and a handy shorthand for it */
static const HChar* nCC ( ARMCondcode cond ) {
   return name_ARMCondcode(cond);
}

/* Build IR to calculate some particular condition from stored
   CC_OP/CC_DEP1/CC_DEP2/CC_NDEP.  Returns an expression of type
   Ity_I32, suitable for narrowing.  Although the return type is
   Ity_I32, the returned value is either 0 or 1.  'cond' must be
   :: Ity_I32 and must denote the condition to compute in bits 7:4,
   and be zero everywhere else. */
static IRExpr* mk_armg_calculate_condition_dyn ( IRExpr* cond )
{
   vassert(typeOfIRExpr(irsb->tyenv, cond) == Ity_I32);
   /* And 'cond' had better produce a value in which only bits 7:4 are
      nonzero.  However, obviously we can't assert for that. */

   /* So what we're constructing for the first argument is
      "(cond << 4) | stored-operation".
      However, as per comments above, 'cond' must be supplied
      pre-shifted to this function.

      This pairing scheme requires that the ARM_CC_OP_ values all fit
      in 4 bits.  Hence we are passing a (COND, OP) pair in the lowest
      8 bits of the first argument. */
   IRExpr** args
      = mkIRExprVec_4(
           binop(Iop_Or32, IRExpr_Get(OFFB_CC_OP, Ity_I32), cond),
           IRExpr_Get(OFFB_CC_DEP1, Ity_I32),
           IRExpr_Get(OFFB_CC_DEP2, Ity_I32),
           IRExpr_Get(OFFB_CC_NDEP, Ity_I32)
        );
   IRExpr* call
      = mkIRExprCCall(
           Ity_I32,
           0/*regparm*/,
           "armg_calculate_condition", &armg_calculate_condition,
           args
        );

   /* Exclude the requested condition, OP and NDEP from definedness
      checking.  We're only interested in DEP1 and DEP2. */
   call->Iex.CCall.cee->mcx_mask = (1<<0) | (1<<3);
   return call;
}

/* Build IR to calculate some particular condition from stored
   CC_OP/CC_DEP1/CC_DEP2/CC_NDEP.  Returns an expression of type
   Ity_I32, suitable for narrowing.  Although the return type is
   Ity_I32, the returned value is either 0 or 1. */
static IRExpr* mk_armg_calculate_condition ( ARMCondcode cond )
{
  /* First arg is "(cond << 4) | condition".  This requires that the
     ARM_CC_OP_ values all fit in 4 bits.  Hence we are passing a
     (COND, OP) pair in the lowest 8 bits of the first argument. */
   vassert(cond >= 0 && cond <= 15);
   return mk_armg_calculate_condition_dyn( mkU32(cond << 4) );
}
/* Build IR to calculate just the carry flag from stored
   CC_OP/CC_DEP1/CC_DEP2/CC_NDEP.  Returns an expression ::
   Ity_I32. */
static IRExpr* mk_armg_calculate_flag_c ( void )
{
   IRExpr** args
      = mkIRExprVec_4( IRExpr_Get(OFFB_CC_OP,   Ity_I32),
                       IRExpr_Get(OFFB_CC_DEP1, Ity_I32),
                       IRExpr_Get(OFFB_CC_DEP2, Ity_I32),
                       IRExpr_Get(OFFB_CC_NDEP, Ity_I32) );
   IRExpr* call
      = mkIRExprCCall(
           Ity_I32,
           0/*regparm*/,
           "armg_calculate_flag_c", &armg_calculate_flag_c,
           args
        );
   /* Exclude OP and NDEP from definedness checking.  We're only
      interested in DEP1 and DEP2. */
   call->Iex.CCall.cee->mcx_mask = (1<<0) | (1<<3);
   return call;
}

/* Build IR to calculate just the overflow flag from stored
   CC_OP/CC_DEP1/CC_DEP2/CC_NDEP.  Returns an expression ::
   Ity_I32. */
static IRExpr* mk_armg_calculate_flag_v ( void )
{
   IRExpr** args
      = mkIRExprVec_4( IRExpr_Get(OFFB_CC_OP,   Ity_I32),
                       IRExpr_Get(OFFB_CC_DEP1, Ity_I32),
                       IRExpr_Get(OFFB_CC_DEP2, Ity_I32),
                       IRExpr_Get(OFFB_CC_NDEP, Ity_I32) );
   IRExpr* call
      = mkIRExprCCall(
           Ity_I32,
           0/*regparm*/,
           "armg_calculate_flag_v", &armg_calculate_flag_v,
           args
        );
   /* Exclude OP and NDEP from definedness checking.  We're only
      interested in DEP1 and DEP2. */
   call->Iex.CCall.cee->mcx_mask = (1<<0) | (1<<3);
   return call;
}

/* Build IR to calculate N Z C V in bits 31:28 of the returned
   word. */
static IRExpr* mk_armg_calculate_flags_nzcv ( void )
{
   IRExpr** args
      = mkIRExprVec_4( IRExpr_Get(OFFB_CC_OP,   Ity_I32),
                       IRExpr_Get(OFFB_CC_DEP1, Ity_I32),
                       IRExpr_Get(OFFB_CC_DEP2, Ity_I32),
                       IRExpr_Get(OFFB_CC_NDEP, Ity_I32) );
   IRExpr* call
      = mkIRExprCCall(
           Ity_I32,
           0/*regparm*/,
           "armg_calculate_flags_nzcv", &armg_calculate_flags_nzcv,
           args
        );
   /* Exclude OP and NDEP from definedness checking.  We're only
      interested in DEP1 and DEP2. */
   call->Iex.CCall.cee->mcx_mask = (1<<0) | (1<<3);
   return call;
}

static IRExpr* mk_armg_calculate_flag_qc ( IRExpr* resL, IRExpr* resR,
                                           Bool Q )
{
   IRExpr** args1;
   IRExpr** args2;
   IRExpr *call1, *call2, *res;

   if (Q) {
      args1 = mkIRExprVec_4 ( binop(Iop_GetElem32x4, resL, mkU8(0)),
                              binop(Iop_GetElem32x4, resL, mkU8(1)),
                              binop(Iop_GetElem32x4, resR, mkU8(0)),
                              binop(Iop_GetElem32x4, resR, mkU8(1)) );
      args2 = mkIRExprVec_4 ( binop(Iop_GetElem32x4, resL, mkU8(2)),
                              binop(Iop_GetElem32x4, resL, mkU8(3)),
                              binop(Iop_GetElem32x4, resR, mkU8(2)),
                              binop(Iop_GetElem32x4, resR, mkU8(3)) );
   } else {
      args1 = mkIRExprVec_4 ( binop(Iop_GetElem32x2, resL, mkU8(0)),
                              binop(Iop_GetElem32x2, resL, mkU8(1)),
                              binop(Iop_GetElem32x2, resR, mkU8(0)),
                              binop(Iop_GetElem32x2, resR, mkU8(1)) );
   }

   call1 = mkIRExprCCall(
              Ity_I32,
              0/*regparm*/,
              "armg_calculate_flag_qc", &armg_calculate_flag_qc,
              args1
           );
   if (Q) {
      call2 = mkIRExprCCall(
                 Ity_I32,
                 0/*regparm*/,
                 "armg_calculate_flag_qc", &armg_calculate_flag_qc,
                 args2
              );
   }
   if (Q) {
      res = binop(Iop_Or32, call1, call2);
   } else {
      res = call1;
   }
   return res;
}

// FIXME: this is named wrongly .. looks like a sticky set of QC,
// not a write to it.
static void setFlag_QC ( IRExpr* resL, IRExpr* resR, Bool Q,
                         IRTemp condT )
{
   putMiscReg32 (OFFB_FPSCR,
                 binop(Iop_Or32,
                       IRExpr_Get(OFFB_FPSCR, Ity_I32),
                       binop(Iop_Shl32,
                             mk_armg_calculate_flag_qc(resL, resR, Q),
                             mkU8(27))),
                 condT);
}
/* Build IR to conditionally set the flags thunk.  As with putIReg, if
   guard is IRTemp_INVALID then it's unconditional, else it holds a
   condition :: Ity_I32. */
static
void setFlags_D1_D2_ND ( UInt cc_op, IRTemp t_dep1,
                         IRTemp t_dep2, IRTemp t_ndep,
                         IRTemp guardT /* :: Ity_I32, 0 or 1 */ )
{
   vassert(typeOfIRTemp(irsb->tyenv, t_dep1) == Ity_I32);
   vassert(typeOfIRTemp(irsb->tyenv, t_dep2) == Ity_I32);
   vassert(typeOfIRTemp(irsb->tyenv, t_ndep) == Ity_I32);
   vassert(cc_op >= ARMG_CC_OP_COPY && cc_op < ARMG_CC_OP_NUMBER);
   if (guardT == IRTemp_INVALID) {
      /* unconditional */
      stmt( IRStmt_Put( OFFB_CC_OP,   mkU32(cc_op) ));
      stmt( IRStmt_Put( OFFB_CC_DEP1, mkexpr(t_dep1) ));
      stmt( IRStmt_Put( OFFB_CC_DEP2, mkexpr(t_dep2) ));
      stmt( IRStmt_Put( OFFB_CC_NDEP, mkexpr(t_ndep) ));
   } else {
      /* conditional */
      IRTemp c1 = newTemp(Ity_I1);
      assign( c1, binop(Iop_CmpNE32, mkexpr(guardT), mkU32(0)) );
      stmt( IRStmt_Put(
               OFFB_CC_OP,
               IRExpr_ITE( mkexpr(c1),
                           mkU32(cc_op),
                           IRExpr_Get(OFFB_CC_OP, Ity_I32) ) ));
      stmt( IRStmt_Put(
               OFFB_CC_DEP1,
               IRExpr_ITE( mkexpr(c1),
                           mkexpr(t_dep1),
                           IRExpr_Get(OFFB_CC_DEP1, Ity_I32) ) ));
      stmt( IRStmt_Put(
               OFFB_CC_DEP2,
               IRExpr_ITE( mkexpr(c1),
                           mkexpr(t_dep2),
                           IRExpr_Get(OFFB_CC_DEP2, Ity_I32) ) ));
      stmt( IRStmt_Put(
               OFFB_CC_NDEP,
               IRExpr_ITE( mkexpr(c1),
                           mkexpr(t_ndep),
                           IRExpr_Get(OFFB_CC_NDEP, Ity_I32) ) ));
   }
}

/* Minor variant of the above that sets NDEP to zero (if it sets it at
   all) */
static void setFlags_D1_D2 ( UInt cc_op, IRTemp t_dep1,
                             IRTemp t_dep2,
                             IRTemp guardT /* :: Ity_I32, 0 or 1 */ )
{
   IRTemp z32 = newTemp(Ity_I32);
   assign( z32, mkU32(0) );
   setFlags_D1_D2_ND( cc_op, t_dep1, t_dep2, z32, guardT );
}

/* Minor variant of the above that sets DEP2 to zero (if it sets it at
   all) */
static void setFlags_D1_ND ( UInt cc_op, IRTemp t_dep1,
                             IRTemp t_ndep,
                             IRTemp guardT /* :: Ity_I32, 0 or 1 */ )
{
   IRTemp z32 = newTemp(Ity_I32);
   assign( z32, mkU32(0) );
   setFlags_D1_D2_ND( cc_op, t_dep1, z32, t_ndep, guardT );
}

/* Minor variant of the above that sets DEP2 and NDEP to zero (if it
   sets them at all) */
static void setFlags_D1 ( UInt cc_op, IRTemp t_dep1,
                          IRTemp guardT /* :: Ity_I32, 0 or 1 */ )
{
   IRTemp z32 = newTemp(Ity_I32);
   assign( z32, mkU32(0) );
   setFlags_D1_D2_ND( cc_op, t_dep1, z32, z32, guardT );
}
/* ARM only */
/* Generate a side-exit to the next instruction, if the given guard
   expression :: Ity_I32 is 0 (note!  the side exit is taken if the
   condition is false!)  This is used to skip over conditional
   instructions which we can't generate straight-line code for, either
   because they are too complex or (more likely) they potentially
   generate exceptions. */
static void mk_skip_over_A32_if_cond_is_false (
               IRTemp guardT /* :: Ity_I32, 0 or 1 */
            )
{
   ASSERT_IS_ARM;
   vassert(guardT != IRTemp_INVALID);
   vassert(0 == (guest_R15_curr_instr_notENC & 3));
   stmt( IRStmt_Exit(
            unop(Iop_Not1, unop(Iop_32to1, mkexpr(guardT))),
            Ijk_Boring,
            IRConst_U32(toUInt(guest_R15_curr_instr_notENC + 4)),
            OFFB_R15T
       ));
}

/* Thumb16 only */
/* ditto, but jump over a 16-bit thumb insn */
static void mk_skip_over_T16_if_cond_is_false (
               IRTemp guardT /* :: Ity_I32, 0 or 1 */
            )
{
   ASSERT_IS_THUMB;
   vassert(guardT != IRTemp_INVALID);
   vassert(0 == (guest_R15_curr_instr_notENC & 1));
   stmt( IRStmt_Exit(
            unop(Iop_Not1, unop(Iop_32to1, mkexpr(guardT))),
            Ijk_Boring,
            IRConst_U32(toUInt((guest_R15_curr_instr_notENC + 2) | 1)),
            OFFB_R15T
       ));
}

/* Thumb32 only */
/* ditto, but jump over a 32-bit thumb insn */
static void mk_skip_over_T32_if_cond_is_false (
               IRTemp guardT /* :: Ity_I32, 0 or 1 */
            )
{
   ASSERT_IS_THUMB;
   vassert(guardT != IRTemp_INVALID);
   vassert(0 == (guest_R15_curr_instr_notENC & 1));
   stmt( IRStmt_Exit(
            unop(Iop_Not1, unop(Iop_32to1, mkexpr(guardT))),
            Ijk_Boring,
            IRConst_U32(toUInt((guest_R15_curr_instr_notENC + 4) | 1)),
            OFFB_R15T
       ));
}

/* Thumb16 and Thumb32 only
   Generate a SIGILL followed by a restart of the current instruction
   if the given temp is nonzero. */
static void gen_SIGILL_T_if_nonzero ( IRTemp t /* :: Ity_I32 */ )
{
   ASSERT_IS_THUMB;
   vassert(t != IRTemp_INVALID);
   vassert(0 == (guest_R15_curr_instr_notENC & 1));
   stmt(
      IRStmt_Exit(
         binop(Iop_CmpNE32, mkexpr(t), mkU32(0)),
         Ijk_NoDecode,
         IRConst_U32(toUInt(guest_R15_curr_instr_notENC | 1)),
         OFFB_R15T
      )
   );
}

/* Inspect the old_itstate, and generate a SIGILL if it indicates that
   we are currently in an IT block and are not the last in the block.
   This also rolls back guest_ITSTATE to its old value before the exit
   and restores it to its new value afterwards.  This is so that if
   the exit is taken, we have an up to date version of ITSTATE
   available.  Without doing that, we have no hope of making precise
   exceptions work. */
static void gen_SIGILL_T_if_in_but_NLI_ITBlock (
               IRTemp old_itstate /* :: Ity_I32 */,
               IRTemp new_itstate /* :: Ity_I32 */
            )
{
   ASSERT_IS_THUMB;
   put_ITSTATE(old_itstate); // backout
   IRTemp guards_for_next3 = newTemp(Ity_I32);
   assign(guards_for_next3,
          binop(Iop_Shr32, mkexpr(old_itstate), mkU8(8)));
   gen_SIGILL_T_if_nonzero(guards_for_next3);
   put_ITSTATE(new_itstate); // restore
}

/* Simpler version of the above, which generates a SIGILL if we're
   anywhere within an IT block. */
static void gen_SIGILL_T_if_in_ITBlock (
               IRTemp old_itstate /* :: Ity_I32 */,
               IRTemp new_itstate /* :: Ity_I32 */
            )
{
   put_ITSTATE(old_itstate); // backout
   gen_SIGILL_T_if_nonzero(old_itstate);
   put_ITSTATE(new_itstate); // restore
}

/* Generate an APSR value, from the NZCV thunk, and from QFLAG32 and
   GEFLAG0 .. GEFLAG3. */
static IRTemp synthesise_APSR ( void )
{
   IRTemp res1 = newTemp(Ity_I32);
   // Get NZCV
   assign( res1, mk_armg_calculate_flags_nzcv() );
   // OR in the Q value
   IRTemp res2 = newTemp(Ity_I32);
   assign(
      res2,
      binop(Iop_Or32,
            mkexpr(res1),
            binop(Iop_Shl32,
                  unop(Iop_1Uto32,
                       binop(Iop_CmpNE32,
                             mkexpr(get_QFLAG32()),
                             mkU32(0))),
                  mkU8(ARMG_CC_SHIFT_Q)))
   );
   // OR in GE0 .. GE3
   IRExpr* ge0
      = unop(Iop_1Uto32, binop(Iop_CmpNE32, get_GEFLAG32(0), mkU32(0)));
   IRExpr* ge1
      = unop(Iop_1Uto32, binop(Iop_CmpNE32, get_GEFLAG32(1), mkU32(0)));
   IRExpr* ge2
      = unop(Iop_1Uto32, binop(Iop_CmpNE32, get_GEFLAG32(2), mkU32(0)));
   IRExpr* ge3
      = unop(Iop_1Uto32, binop(Iop_CmpNE32, get_GEFLAG32(3), mkU32(0)));
   IRTemp res3 = newTemp(Ity_I32);
   assign(res3,
          binop(Iop_Or32,
                mkexpr(res2),
                binop(Iop_Or32,
                      binop(Iop_Or32,
                            binop(Iop_Shl32, ge0, mkU8(16)),
                            binop(Iop_Shl32, ge1, mkU8(17))),
                      binop(Iop_Or32,
                            binop(Iop_Shl32, ge2, mkU8(18)),
                            binop(Iop_Shl32, ge3, mkU8(19))) )));
   return res3;
}

/* and the inverse transformation: given an APSR value, set the NZCV
   thunk, the Q flag, and the GE flags. */
static void desynthesise_APSR ( Bool write_nzcvq, Bool write_ge,
                                IRTemp apsrT, IRTemp condT )
{
   vassert(write_nzcvq || write_ge);
   if (write_nzcvq) {
      // Do NZCV
      IRTemp immT = newTemp(Ity_I32);
      assign(immT, binop(Iop_And32, mkexpr(apsrT), mkU32(0xF0000000)) );
      setFlags_D1(ARMG_CC_OP_COPY, immT, condT);
      // Do Q
      IRTemp qnewT = newTemp(Ity_I32);
      assign(qnewT, binop(Iop_And32, mkexpr(apsrT), mkU32(ARMG_CC_MASK_Q)));
      put_QFLAG32(qnewT, condT);
   }
   if (write_ge) {
      // Do GE3..0
      put_GEFLAG32(0, 0, binop(Iop_And32, mkexpr(apsrT), mkU32(1<<16)),
                   condT);
      put_GEFLAG32(1, 0, binop(Iop_And32, mkexpr(apsrT), mkU32(1<<17)),
                   condT);
      put_GEFLAG32(2, 0, binop(Iop_And32, mkexpr(apsrT), mkU32(1<<18)),
                   condT);
      put_GEFLAG32(3, 0, binop(Iop_And32, mkexpr(apsrT), mkU32(1<<19)),
                   condT);
   }
}
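/* Illustrative sketch (not part of the original file): the APSR bit
   layout that synthesise_APSR/desynthesise_APSR work with, restated
   as plain-integer packing.  N/Z/C/V occupy bits 31:28, Q bit 27, and
   GE3..GE0 bits 19:16, matching the shifts used above.  All inputs
   are expected to be 0 or 1; the function name is made up. */
#if 0
static UInt example_pack_APSR ( UInt n, UInt z, UInt c, UInt v,
                                UInt q, UInt ge3, UInt ge2,
                                UInt ge1, UInt ge0 )
{
   return (n << 31) | (z << 30) | (c << 29) | (v << 28)
          | (q << 27)
          | (ge3 << 19) | (ge2 << 18) | (ge1 << 17) | (ge0 << 16);
}
#endif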
/*------------------------------------------------------------*/
/*--- Helpers for saturation                               ---*/
/*------------------------------------------------------------*/

/* FIXME: absolutely the only diff. between (a) armUnsignedSatQ and
   (b) armSignedSatQ is that in (a) the floor is set to 0, whereas in
   (b) the floor is computed from the value of imm5.  these two fns
   should be commoned up. */

/* UnsignedSatQ(): 'clamp' each value so it lies between
   0 <= x <= (2^N)-1
   Optionally return flag resQ saying whether saturation occurred.
   See definition in manual, section A2.2.1, page 41
   (bits(N), boolean) UnsignedSatQ( integer i, integer N )
   {
     if ( i > (2^N)-1 ) { result = (2^N)-1; saturated = TRUE; }
     elsif ( i < 0 )    { result = 0; saturated = TRUE; }
     else               { result = i; saturated = FALSE; }
     return ( result, saturated );
   }
*/
static void armUnsignedSatQ( IRTemp* res,  /* OUT - Ity_I32 */
                             IRTemp* resQ, /* OUT - Ity_I32 */
                             IRTemp regT,  /* value to clamp - Ity_I32 */
                             UInt imm5 )   /* saturation ceiling */
{
   ULong ceil64 = (1ULL << imm5) - 1;    // (2^imm5)-1
   UInt  ceil   = (UInt)ceil64;
   UInt  floor  = 0;

   IRTemp nd0 = newTemp(Ity_I32);
   IRTemp nd1 = newTemp(Ity_I32);
   IRTemp nd2 = newTemp(Ity_I1);
   IRTemp nd3 = newTemp(Ity_I32);
   IRTemp nd4 = newTemp(Ity_I32);
   IRTemp nd5 = newTemp(Ity_I1);
   IRTemp nd6 = newTemp(Ity_I32);

   assign( nd0, mkexpr(regT) );
   assign( nd1, mkU32(ceil) );
   assign( nd2, binop( Iop_CmpLT32S, mkexpr(nd1), mkexpr(nd0) ) );
   assign( nd3, IRExpr_ITE(mkexpr(nd2), mkexpr(nd1), mkexpr(nd0)) );
   assign( nd4, mkU32(floor) );
   assign( nd5, binop( Iop_CmpLT32S, mkexpr(nd3), mkexpr(nd4) ) );
   assign( nd6, IRExpr_ITE(mkexpr(nd5), mkexpr(nd4), mkexpr(nd3)) );
   assign( *res, mkexpr(nd6) );

   /* if saturation occurred, then resQ is set to some nonzero value
      if sat did not occur, resQ is guaranteed to be zero. */
   if (resQ) {
      assign( *resQ, binop(Iop_Xor32, mkexpr(*res), mkexpr(regT)) );
   }
}

/* SignedSatQ(): 'clamp' each value so it lies between
   -(2^(N-1)) <= x <= 2^(N-1) - 1
   Optionally return flag resQ saying whether saturation occurred.
   - see definition in manual, section A2.2.1, page 41
   (bits(N), boolean ) SignedSatQ( integer i, integer N )
   {
     if ( i > 2^(N-1) - 1 )   { result = 2^(N-1) - 1; saturated = TRUE; }
     elsif ( i < -(2^(N-1)) ) { result = -(2^(N-1));  saturated = TRUE; }
     else                     { result = i;           saturated = FALSE; }
     return ( result[N-1:0], saturated );
   }
*/
static void armSignedSatQ( IRTemp regT,    /* value to clamp - Ity_I32 */
                           UInt imm5,      /* saturation ceiling */
                           IRTemp* res,    /* OUT - Ity_I32 */
                           IRTemp* resQ )  /* OUT - Ity_I32 */
{
   Long ceil64  = (1LL << (imm5-1)) - 1;  //  (2^(imm5-1))-1
   Long floor64 = -(1LL << (imm5-1));     // -(2^(imm5-1))
   Int  ceil    = (Int)ceil64;
   Int  floor   = (Int)floor64;

   IRTemp nd0 = newTemp(Ity_I32);
   IRTemp nd1 = newTemp(Ity_I32);
   IRTemp nd2 = newTemp(Ity_I1);
   IRTemp nd3 = newTemp(Ity_I32);
   IRTemp nd4 = newTemp(Ity_I32);
   IRTemp nd5 = newTemp(Ity_I1);
   IRTemp nd6 = newTemp(Ity_I32);

   assign( nd0, mkexpr(regT) );
   assign( nd1, mkU32(ceil) );
   assign( nd2, binop( Iop_CmpLT32S, mkexpr(nd1), mkexpr(nd0) ) );
   assign( nd3, IRExpr_ITE( mkexpr(nd2), mkexpr(nd1), mkexpr(nd0) ) );
   assign( nd4, mkU32(floor) );
   assign( nd5, binop( Iop_CmpLT32S, mkexpr(nd3), mkexpr(nd4) ) );
   assign( nd6, IRExpr_ITE( mkexpr(nd5), mkexpr(nd4), mkexpr(nd3) ) );
   assign( *res, mkexpr(nd6) );

   /* if saturation occurred, then resQ is set to some nonzero value
      if sat did not occur, resQ is guaranteed to be zero. */
   if (resQ) {
      assign( *resQ, binop(Iop_Xor32, mkexpr(*res), mkexpr(regT)) );
   }
}
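/* Illustrative sketch (not part of the original file): the clamping
   that armUnsignedSatQ expresses in IR, restated as a plain-C
   reference version, following the UnsignedSatQ pseudocode quoted
   above.  The function name is made up. */
#if 0
static UInt example_UnsignedSatQ ( Long i, UInt N, /*OUT*/Bool* didSat )
{
   Long ceil = (1LL << N) - 1;   /* (2^N)-1 */
   if (i > ceil) { *didSat = True;  return (UInt)ceil; }
   if (i < 0)    { *didSat = True;  return 0; }
   *didSat = False;
   return (UInt)i;
}
#endif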
/* Compute a value 0 :: I32 or 1 :: I32, indicating whether signed
   overflow occurred for 32-bit addition.  Needs both args and the
   result.  HD p27. */
static IRExpr* signed_overflow_after_Add32 ( IRExpr* resE,
                                             IRTemp argL, IRTemp argR )
{
   IRTemp res = newTemp(Ity_I32);
   assign(res, resE);
   return
      binop( Iop_Shr32,
             binop( Iop_And32,
                    binop( Iop_Xor32, mkexpr(res), mkexpr(argL) ),
                    binop( Iop_Xor32, mkexpr(res), mkexpr(argR) )),
             mkU8(31) );
}

/* Similarly .. also from HD p27 .. */
static IRExpr* signed_overflow_after_Sub32 ( IRExpr* resE,
                                             IRTemp argL, IRTemp argR )
{
   IRTemp res = newTemp(Ity_I32);
   assign(res, resE);
   return
      binop( Iop_Shr32,
             binop( Iop_And32,
                    binop( Iop_Xor32, mkexpr(argL), mkexpr(argR) ),
                    binop( Iop_Xor32, mkexpr(res),  mkexpr(argL) )),
             mkU8(31) );
}
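/* Illustrative sketch (not part of the original file): the bit trick
   used by signed_overflow_after_Add32, on plain integers.  For
   addition, signed overflow occurred iff both operands have the same
   sign and the result's sign differs from it, which is exactly when
   bit 31 of (res ^ argL) & (res ^ argR) is set.  The function name is
   made up. */
#if 0
static UInt example_add32_overflows ( UInt argL, UInt argR )
{
   UInt res = argL + argR;
   return ((res ^ argL) & (res ^ argR)) >> 31;   /* 0 or 1 */
}
#endif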
/*------------------------------------------------------------*/
/*--- Larger helpers                                       ---*/
/*------------------------------------------------------------*/

/* Compute both the result and new C flag value for a LSL by an imm5
   or by a register operand.  May generate reads of the old C value
   (hence only safe to use before any writes to guest state happen).
   Are factored out so can be used by both ARM and Thumb.

   Note that in compute_result_and_C_after_{LSL,LSR,ASR}_by{imm5,reg},
   "res" (the result)  is a.k.a. "shop", shifter operand
   "newC" (the new C) is a.k.a. "shco", shifter carry out

   The calling convention for res and newC is a bit funny.  They could
   be passed by value, but instead are passed by ref.

   The C (shco) value computed must be zero in bits 31:1, as the IR
   optimisations for flag handling (guest_arm_spechelper) rely on
   that, and the slow-path handlers (armg_calculate_flags_nzcv) assert
   for it.  Same applies to all these functions that compute shco
   after a shift or rotate, not just this one.
*/
static void compute_result_and_C_after_LSL_by_imm5 (
               /*OUT*/HChar* buf,
               IRTemp* res,
               IRTemp* newC,
               IRTemp rMt, UInt shift_amt, /* operands */
               UInt rM      /* only for debug printing */
            )
{
   if (shift_amt == 0) {
      if (newC) {
         assign( *newC, mk_armg_calculate_flag_c() );
      }
      assign( *res, mkexpr(rMt) );
      DIS(buf, "r%u", rM);
   } else {
      vassert(shift_amt >= 1 && shift_amt <= 31);
      if (newC) {
         assign( *newC,
                 binop(Iop_And32,
                       binop(Iop_Shr32, mkexpr(rMt),
                                        mkU8(32 - shift_amt)),
                       mkU32(1)));
      }
      assign( *res,
              binop(Iop_Shl32, mkexpr(rMt), mkU8(shift_amt)) );
      DIS(buf, "r%u, LSL #%u", rM, shift_amt);
   }
}

static void compute_result_and_C_after_LSL_by_reg (
               /*OUT*/HChar* buf,
               IRTemp* res,
               IRTemp* newC,
               IRTemp rMt, IRTemp rSt,  /* operands */
               UInt rM,    UInt rS      /* only for debug printing */
            )
{
   // shift left in range 0 .. 255
   // amt  = rS & 255
   // res  = amt < 32 ?  Rm << amt : 0
   // newC = amt == 0     ? oldC  :
   //        amt in 1..32 ?  Rm[32-amt] : 0
   IRTemp amtT = newTemp(Ity_I32);
   assign( amtT, binop(Iop_And32, mkexpr(rSt), mkU32(255)) );
   if (newC) {
      /* mux0X(amt == 0,
               mux0X(amt < 32,
                     0,
                     Rm[(32-amt) & 31]),
               oldC)
      */
      /* About the best you can do is pray that iropt is able
         to nuke most or all of the following junk. */
      IRTemp oldC = newTemp(Ity_I32);
      assign(oldC, mk_armg_calculate_flag_c() );
      assign(
         *newC,
         IRExpr_ITE(
            binop(Iop_CmpEQ32, mkexpr(amtT), mkU32(0)),
            mkexpr(oldC),
            IRExpr_ITE(
               binop(Iop_CmpLE32U, mkexpr(amtT), mkU32(32)),
               binop(Iop_And32,
                     binop(Iop_Shr32,
                           mkexpr(rMt),
                           unop(Iop_32to8,
                                binop(Iop_And32,
                                      binop(Iop_Sub32,
                                            mkU32(32),
                                            mkexpr(amtT)),
                                      mkU32(31)))),
                     mkU32(1)),
               mkU32(0))));
   }
   // (Rm << (Rs & 31))  &  (((Rs & 255) - 32) >>s 31)
   // Lhs of the & limits the shift to 31 bits, so as to
   // give known IR semantics.  Rhs of the & is all 1s for
   // Rs <= 31 and all 0s for Rs >= 32.
   assign(
      *res,
      binop(
         Iop_And32,
         binop(Iop_Shl32,
               mkexpr(rMt),
               unop(Iop_32to8,
                    binop(Iop_And32, mkexpr(rSt), mkU32(31)))),
         binop(Iop_Sar32,
               binop(Iop_Sub32, mkexpr(amtT), mkU32(32)),
               mkU8(31))));
   DIS(buf, "r%u, LSL r%u", rM, rS);
}

static void compute_result_and_C_after_LSR_by_imm5 (
               /*OUT*/HChar* buf,
               IRTemp* res,
               IRTemp* newC,
               IRTemp rMt, UInt shift_amt, /* operands */
               UInt rM      /* only for debug printing */
            )
{
   if (shift_amt == 0) {
      // conceptually a 32-bit shift, however:
      // res  = 0
      // newC = Rm[31]
      if (newC) {
         assign( *newC,
                 binop(Iop_And32,
                       binop(Iop_Shr32, mkexpr(rMt), mkU8(31)),
                       mkU32(1)));
      }
      assign( *res, mkU32(0) );
      DIS(buf, "r%u, LSR #0(a.k.a. 32)", rM);
   } else {
      // shift in range 1..31
      // res  = Rm >>u shift_amt
      // newC = Rm[shift_amt - 1]
      vassert(shift_amt >= 1 && shift_amt <= 31);
      if (newC) {
         assign( *newC,
                 binop(Iop_And32,
                       binop(Iop_Shr32, mkexpr(rMt),
                                        mkU8(shift_amt - 1)),
                       mkU32(1)));
      }
      assign( *res,
              binop(Iop_Shr32, mkexpr(rMt), mkU8(shift_amt)) );
      DIS(buf, "r%u, LSR #%u", rM, shift_amt);
   }
}

static void compute_result_and_C_after_LSR_by_reg (
               /*OUT*/HChar* buf,
               IRTemp* res,
               IRTemp* newC,
               IRTemp rMt, IRTemp rSt,  /* operands */
               UInt rM,    UInt rS      /* only for debug printing */
            )
{
   // shift right in range 0 .. 255
   // amt = rS & 255
   // res  = amt < 32 ?  Rm >>u amt : 0
   // newC = amt == 0     ? oldC  :
   //        amt in 1..32 ?  Rm[amt-1] : 0
   IRTemp amtT = newTemp(Ity_I32);
   assign( amtT, binop(Iop_And32, mkexpr(rSt), mkU32(255)) );
   if (newC) {
      /* mux0X(amt == 0,
               mux0X(amt < 32,
                     0,
                     Rm[(amt-1) & 31]),
               oldC)
      */
      IRTemp oldC = newTemp(Ity_I32);
      assign(oldC, mk_armg_calculate_flag_c() );
      assign(
         *newC,
         IRExpr_ITE(
            binop(Iop_CmpEQ32, mkexpr(amtT), mkU32(0)),
            mkexpr(oldC),
            IRExpr_ITE(
               binop(Iop_CmpLE32U, mkexpr(amtT), mkU32(32)),
               binop(Iop_And32,
                     binop(Iop_Shr32,
                           mkexpr(rMt),
                           unop(Iop_32to8,
                                binop(Iop_And32,
                                      binop(Iop_Sub32,
                                            mkexpr(amtT),
                                            mkU32(1)),
                                      mkU32(31)))),
                     mkU32(1)),
               mkU32(0))));
   }
   // (Rm >>u (Rs & 31))  &  (((Rs & 255) - 32) >>s 31)
   // Lhs of the & limits the shift to 31 bits, so as to
   // give known IR semantics.  Rhs of the & is all 1s for
   // Rs <= 31 and all 0s for Rs >= 32.
   assign(
      *res,
      binop(
         Iop_And32,
         binop(Iop_Shr32,
               mkexpr(rMt),
               unop(Iop_32to8,
                    binop(Iop_And32, mkexpr(rSt), mkU32(31)))),
         binop(Iop_Sar32,
               binop(Iop_Sub32, mkexpr(amtT), mkU32(32)),
               mkU8(31))));
   DIS(buf, "r%u, LSR r%u", rM, rS);
}

static void compute_result_and_C_after_ASR_by_imm5 (
               /*OUT*/HChar* buf,
               IRTemp* res,
               IRTemp* newC,
               IRTemp rMt, UInt shift_amt, /* operands */
               UInt rM      /* only for debug printing */
            )
{
   if (shift_amt == 0) {
      // conceptually a 32-bit shift, however:
      // res  = Rm >>s 31
      // newC = Rm[31]
      if (newC) {
         assign( *newC,
                 binop(Iop_And32,
                       binop(Iop_Shr32, mkexpr(rMt), mkU8(31)),
                       mkU32(1)));
      }
      assign( *res, binop(Iop_Sar32, mkexpr(rMt), mkU8(31)) );
      DIS(buf, "r%u, ASR #0(a.k.a. 32)", rM);
   } else {
      // shift in range 1..31
      // res = Rm >>s shift_amt
      // newC = Rm[shift_amt - 1]
      vassert(shift_amt >= 1 && shift_amt <= 31);
      if (newC) {
         assign( *newC,
                 binop(Iop_And32,
                       binop(Iop_Shr32, mkexpr(rMt),
                                        mkU8(shift_amt - 1)),
                       mkU32(1)));
      }
      assign( *res,
              binop(Iop_Sar32, mkexpr(rMt), mkU8(shift_amt)) );
      DIS(buf, "r%u, ASR #%u", rM, shift_amt);
   }
}
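/* Illustrative sketch (not part of the original file): the
   branch-free trick used above for register-specified shifts, on
   plain integers.  C's << is undefined for shift amounts >= 32, so
   the shift count is clamped to 0..31 and the result is then masked
   with ((amt - 32) >>s 31), which is all-ones for amt <= 31 and
   all-zeroes for amt >= 32 -- exactly the ARM semantics of LSL by
   register.  The arithmetic right shift of a negative Int mirrors
   Iop_Sar32; the function name is made up. */
#if 0
static UInt example_lsl_by_reg ( UInt rm, UInt rs )
{
   UInt amt     = rs & 255;
   UInt shifted = rm << (rs & 31);                 /* defined for 0..31 */
   UInt mask    = (UInt)( ((Int)(amt - 32)) >> 31 ); /* amt<=31 ? ~0 : 0 */
   return shifted & mask;
}
#endif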
static void compute_result_and_C_after_ASR_by_reg (
               /*OUT*/HChar* buf,
               IRTemp* res,
               IRTemp* newC,
               IRTemp rMt, IRTemp rSt,  /* operands */
               UInt rM,    UInt rS      /* only for debug printing */
            )
{
   // arithmetic shift right in range 0 .. 255
   // amt = rS & 255
   // res  = amt < 32 ?  Rm >>s amt : Rm >>s 31
   // newC = amt == 0     ? oldC  :
   //        amt in 1..32 ?  Rm[amt-1] : Rm[31]
   IRTemp amtT = newTemp(Ity_I32);
   assign( amtT, binop(Iop_And32, mkexpr(rSt), mkU32(255)) );
   if (newC) {
      /* mux0X(amt == 0,
               mux0X(amt < 32,
                     Rm[31],
                     Rm[(amt-1) & 31]),
               oldC)
      */
      IRTemp oldC = newTemp(Ity_I32);
      assign(oldC, mk_armg_calculate_flag_c() );
      assign(
         *newC,
         IRExpr_ITE(
            binop(Iop_CmpEQ32, mkexpr(amtT), mkU32(0)),
            mkexpr(oldC),
            IRExpr_ITE(
               binop(Iop_CmpLE32U, mkexpr(amtT), mkU32(32)),
               binop(Iop_And32,
                     binop(Iop_Shr32,
                           mkexpr(rMt),
                           unop(Iop_32to8,
                                binop(Iop_And32,
                                      binop(Iop_Sub32,
                                            mkexpr(amtT),
                                            mkU32(1)),
                                      mkU32(31)))),
                     mkU32(1)),
               binop(Iop_And32,
                     binop(Iop_Shr32,
                           mkexpr(rMt),
                           mkU8(31)),
                     mkU32(1)))));
   }
   // (Rm >>s (amt
>u 1) // newC = Rm[0] if (newC) { assign( *newC, binop(Iop_And32, mkexpr(rMt), mkU32(1))); } assign( oldcT, mk_armg_calculate_flag_c() ); assign( *res, binop(Iop_Or32, binop(Iop_Shl32, mkexpr(oldcT), mkU8(31)), binop(Iop_Shr32, mkexpr(rMt), mkU8(1))) ); DIS(buf, "r%u, RRX", rM); } else { // rotate right in range 1..31 // res = Rm `ror` shift_amt // newC = Rm[shift_amt - 1] vassert(shift_amt >= 1 && shift_amt <= 31); if (newC) { assign( *newC, binop(Iop_And32, binop(Iop_Shr32, mkexpr(rMt), mkU8(shift_amt - 1)), mkU32(1))); } assign( *res, binop(Iop_Or32, binop(Iop_Shr32, mkexpr(rMt), mkU8(shift_amt)), binop(Iop_Shl32, mkexpr(rMt), mkU8(32-shift_amt)))); DIS(buf, "r%u, ROR #%u", rM, shift_amt); } break; default: /*NOTREACHED*/ vassert(0); } } /* Generate an expression corresponding to the register-shift case of a shifter operand. This is used both for ARM and Thumb2. Bind it to a temporary, and return that via *res. If newC is non-NULL, also compute a value for the shifter's carry out (in the LSB of a word), bind it to a temporary, and return that via *shco. Generates GETs from the guest state and is therefore not safe to use once we start doing PUTs to it, for any given instruction. 'how' is encoded thusly: 00b LSL, 01b LSR, 10b ASR, 11b ROR Most but not all ARM and Thumb integer insns use this encoding. Be careful to ensure the right value is passed here. */ static void compute_result_and_C_after_shift_by_reg ( /*OUT*/HChar* buf, /*OUT*/IRTemp* res, /*OUT*/IRTemp* newC, IRTemp rMt, /* reg to shift */ UInt how, /* what kind of shift */ IRTemp rSt, /* shift amount */ UInt rM, /* only for debug printing */ UInt rS /* only for debug printing */ ) { vassert(how < 4); switch (how) { case 0: { /* LSL */ compute_result_and_C_after_LSL_by_reg( buf, res, newC, rMt, rSt, rM, rS ); break; } case 1: { /* LSR */ compute_result_and_C_after_LSR_by_reg( buf, res, newC, rMt, rSt, rM, rS ); break; } case 2: { /* ASR */ compute_result_and_C_after_ASR_by_reg( buf, res, newC, rMt, rSt, rM, rS ); break; } case 3: { /* ROR */ compute_result_and_C_after_ROR_by_reg( buf, res, newC, rMt, rSt, rM, rS ); break; } default: /*NOTREACHED*/ vassert(0); } } /* Generate an expression corresponding to a shifter_operand, bind it to a temporary, and return that via *shop. If shco is non-NULL, also compute a value for the shifter's carry out (in the LSB of a word), bind it to a temporary, and return that via *shco. If for some reason we can't come up with a shifter operand (missing case? not really a shifter operand?) return False. Generates GETs from the guest state and is therefore not safe to use once we start doing PUTs to it, for any given instruction. For ARM insns only; not for Thumb. 
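   As an informal cross-check, here is a minimal C reference model
   (illustrative only -- 'ref_LSR_by_reg' is not a function in this
   file) of what the LSR-by-register helper above computes, assuming
   UInt is 32 bits and oldC holds the current carry flag in bit 0:

      UInt ref_LSR_by_reg ( UInt rm, UInt rs, UInt oldC, UInt* newC )
      {
         UInt amt = rs & 255;
         if (newC)
            *newC = amt == 0  ? oldC
                  : amt <= 32 ? (rm >> (amt-1)) & 1
                  : 0;
         return amt < 32 ? rm >> amt : 0;
      }

   LSL and ASR by register follow the same pattern, except for the
   direction of the shift, and that ASR pins amounts of 32 and above
   to an arithmetic shift by 31.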
*/
static Bool mk_shifter_operand ( UInt insn_25, UInt insn_11_0,
                                 /*OUT*/IRTemp* shop,
                                 /*OUT*/IRTemp* shco,
                                 /*OUT*/HChar* buf )
{
   UInt insn_4 = (insn_11_0 >> 4) & 1;
   UInt insn_7 = (insn_11_0 >> 7) & 1;
   vassert(insn_25 <= 0x1);
   vassert(insn_11_0 <= 0xFFF);

   vassert(shop && *shop == IRTemp_INVALID);
   *shop = newTemp(Ity_I32);

   if (shco) {
      vassert(*shco == IRTemp_INVALID);
      *shco = newTemp(Ity_I32);
   }

   /* 32-bit immediate */

   if (insn_25 == 1) {
      /* immediate: (7:0) rotated right by 2 * (11:8) */
      UInt imm = (insn_11_0 >> 0) & 0xFF;
      UInt rot = 2 * ((insn_11_0 >> 8) & 0xF);
      vassert(rot <= 30);
      imm = ROR32(imm, rot);
      if (shco) {
         if (rot == 0) {
            assign( *shco, mk_armg_calculate_flag_c() );
         } else {
            assign( *shco, mkU32( (imm >> 31) & 1 ) );
         }
      }
      DIS(buf, "#0x%x", imm);
      assign( *shop, mkU32(imm) );
      return True;
   }

   /* Shift/rotate by immediate */

   if (insn_25 == 0 && insn_4 == 0) {
      /* Rm (3:0) shifted (6:5) by immediate (11:7) */
      UInt shift_amt = (insn_11_0 >> 7) & 0x1F;
      UInt rM        = (insn_11_0 >> 0) & 0xF;
      UInt how       = (insn_11_0 >> 5) & 3;
      /* how: 00 = Shl, 01 = Shr, 10 = Sar, 11 = Ror */
      IRTemp rMt = newTemp(Ity_I32);
      assign(rMt, getIRegA(rM));
      vassert(shift_amt <= 31);
      compute_result_and_C_after_shift_by_imm5(
         buf, shop, shco, rMt, how, shift_amt, rM );
      return True;
   }

   /* Shift/rotate by register */
   if (insn_25 == 0 && insn_4 == 1) {
      /* Rm (3:0) shifted (6:5) by Rs (11:8) */
      UInt rM  = (insn_11_0 >> 0) & 0xF;
      UInt rS  = (insn_11_0 >> 8) & 0xF;
      UInt how = (insn_11_0 >> 5) & 3;
      /* how: 00 = Shl, 01 = Shr, 10 = Sar, 11 = Ror */
      IRTemp rMt = newTemp(Ity_I32);
      IRTemp rSt = newTemp(Ity_I32);
      if (insn_7 == 1)
         return False; /* not really a shifter operand */
      assign(rMt, getIRegA(rM));
      assign(rSt, getIRegA(rS));
      compute_result_and_C_after_shift_by_reg(
         buf, shop, shco, rMt, how, rSt, rM, rS );
      return True;
   }

   vex_printf("mk_shifter_operand(0x%x,0x%x)\n", insn_25, insn_11_0 );
   return False;
}

/* ARM only */
static IRExpr* mk_EA_reg_plusminus_imm12 ( UInt rN, UInt bU, UInt imm12,
                                           /*OUT*/HChar* buf )
{
   vassert(rN < 16);
   vassert(bU < 2);
   vassert(imm12 < 0x1000);
   HChar opChar = bU == 1 ? '+' : '-';
   DIS(buf, "[r%u, #%c%u]", rN, opChar, imm12);
   return
      binop( (bU == 1 ? Iop_Add32 : Iop_Sub32),
             getIRegA(rN),
             mkU32(imm12) );
}

/* ARM only.
   NB: This is "DecodeImmShift" in newer versions of the ARM ARM.
*/
static IRExpr* mk_EA_reg_plusminus_shifted_reg ( UInt rN, UInt bU,
                                                 UInt rM, UInt sh2,
                                                 UInt imm5,
                                                 /*OUT*/HChar* buf )
{
   vassert(rN < 16);
   vassert(bU < 2);
   vassert(rM < 16);
   vassert(sh2 < 4);
   vassert(imm5 < 32);
   HChar   opChar = bU == 1 ? '+' : '-';
   IRExpr* index  = NULL;
   switch (sh2) {
      case 0: /* LSL */
         /* imm5 can be in the range 0 .. 31 inclusive. */
         index = binop(Iop_Shl32, getIRegA(rM), mkU8(imm5));
         DIS(buf, "[r%u, %c r%u LSL #%u]", rN, opChar, rM, imm5);
         break;
      case 1: /* LSR */
         if (imm5 == 0) {
            index = mkU32(0);
            vassert(0); // ATC
         } else {
            index = binop(Iop_Shr32, getIRegA(rM), mkU8(imm5));
         }
         DIS(buf, "[r%u, %cr%u, LSR #%u]",
             rN, opChar, rM, imm5 == 0 ? 32 : imm5);
         break;
      case 2: /* ASR */
         /* Doesn't this just mean that the behaviour with imm5 == 0
            is the same as if it had been 31 ? */
         if (imm5 == 0) {
            index = binop(Iop_Sar32, getIRegA(rM), mkU8(31));
            vassert(0); // ATC
         } else {
            index = binop(Iop_Sar32, getIRegA(rM), mkU8(imm5));
         }
         DIS(buf, "[r%u, %cr%u, ASR #%u]",
             rN, opChar, rM, imm5 == 0 ? 32 : imm5);
         break;
      case 3: /* ROR or RRX */
         if (imm5 == 0) {
            IRTemp rmT    = newTemp(Ity_I32);
            IRTemp cflagT = newTemp(Ity_I32);
            assign(rmT, getIRegA(rM));
            assign(cflagT, mk_armg_calculate_flag_c());
            index = binop(Iop_Or32,
                          binop(Iop_Shl32, mkexpr(cflagT), mkU8(31)),
                          binop(Iop_Shr32, mkexpr(rmT), mkU8(1)));
            DIS(buf, "[r%u, %cr%u, RRX]", rN, opChar, rM);
         } else {
            IRTemp rmT = newTemp(Ity_I32);
            assign(rmT, getIRegA(rM));
            vassert(imm5 >= 1 && imm5 <= 31);
            index = binop(Iop_Or32,
                          binop(Iop_Shl32, mkexpr(rmT), mkU8(32-imm5)),
                          binop(Iop_Shr32, mkexpr(rmT), mkU8(imm5)));
            DIS(buf, "[r%u, %cr%u, ROR #%u]", rN, opChar, rM, imm5);
         }
         break;
      default:
         vassert(0);
   }
   vassert(index);
   return binop(bU == 1 ? Iop_Add32 : Iop_Sub32,
                getIRegA(rN), index);
}

/* ARM only */
static IRExpr* mk_EA_reg_plusminus_imm8 ( UInt rN, UInt bU, UInt imm8,
                                          /*OUT*/HChar* buf )
{
   vassert(rN < 16);
   vassert(bU < 2);
   vassert(imm8 < 0x100);
   HChar opChar = bU == 1 ? '+' : '-';
   DIS(buf, "[r%u, #%c%u]", rN, opChar, imm8);
   return
      binop( (bU == 1 ? Iop_Add32 : Iop_Sub32),
             getIRegA(rN),
             mkU32(imm8) );
}

/* ARM only */
static IRExpr* mk_EA_reg_plusminus_reg ( UInt rN, UInt bU, UInt rM,
                                         /*OUT*/HChar* buf )
{
   vassert(rN < 16);
   vassert(bU < 2);
   vassert(rM < 16);
   HChar   opChar = bU == 1 ? '+' : '-';
   IRExpr* index  = getIRegA(rM);
   DIS(buf, "[r%u, %c r%u]", rN, opChar, rM);
   return binop(bU == 1 ? Iop_Add32 : Iop_Sub32,
                getIRegA(rN), index);
}

/* irRes :: Ity_I32 holds a floating point comparison result encoded
   as an IRCmpF64Result.  Generate code to convert it to an
   ARM-encoded (N,Z,C,V) group in the lowest 4 bits of an I32 value.
   Assign a new temp to hold that value, and return the temp. */
static IRTemp mk_convert_IRCmpF64Result_to_NZCV ( IRTemp irRes )
{
   IRTemp ix    = newTemp(Ity_I32);
   IRTemp termL = newTemp(Ity_I32);
   IRTemp termR = newTemp(Ity_I32);
   IRTemp nzcv  = newTemp(Ity_I32);

   /* This is where the fun starts.  We have to convert 'irRes' from
      an IR-convention return result (IRCmpF64Result) to an
      ARM-encoded (N,Z,C,V) group.  The final result is in the bottom
      4 bits of 'nzcv'. */
   /* Map compare result from IR to ARM(nzcv) */
   /*
      FP cmp result | IR   | ARM(nzcv)
      --------------------------------
      UN              0x45   0011
      LT              0x01   1000
      GT              0x00   0010
      EQ              0x40   0110
   */
   /* Now since you're probably wondering WTF ..

      ix fishes the useful bits out of the IR value, bits 6 and 0, and
      places them side by side, giving a number which is 0, 1, 2 or 3.

      termL is a sequence cooked up by GNU superopt.  It converts ix
      into an almost correct NZCV value (incredibly), except for the
      case of UN, where it produces 0100 instead of the required 0011.

      termR is therefore a correction term, also computed from ix.  It
      is 1 in the UN case and 0 for LT, GT and EQ.  Hence, to get the
      final correct value, we subtract termR from termL.

      Don't take my word for it.  There's a test program at the bottom
      of this file, to try this out with. */
   assign(
      ix,
      binop(Iop_Or32,
            binop(Iop_And32,
                  binop(Iop_Shr32, mkexpr(irRes), mkU8(5)),
                  mkU32(3)),
            binop(Iop_And32, mkexpr(irRes), mkU32(1))));

   assign(
      termL,
      binop(Iop_Add32,
            binop(Iop_Shr32,
                  binop(Iop_Sub32,
                        binop(Iop_Shl32,
                              binop(Iop_Xor32, mkexpr(ix), mkU32(1)),
                              mkU8(30)),
                        mkU32(1)),
                  mkU8(29)),
            mkU32(1)));

   assign(
      termR,
      binop(Iop_And32,
            binop(Iop_And32,
                  mkexpr(ix),
                  binop(Iop_Shr32, mkexpr(ix), mkU8(1))),
            mkU32(1)));

   assign(nzcv, binop(Iop_Sub32, mkexpr(termL), mkexpr(termR)));
   return nzcv;
}

/* Thumb32 only.  This is "ThumbExpandImm" in the ARM ARM.
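   Two worked examples, derived by hand from the switch cases below
   and offered for orientation only: imm1:imm3:imm8 = 0:001:0xAB has
   i:imm3:a = 0011b, giving 0x00AB00AB with C unaffected;
   imm1:imm3:imm8 = 1:000:0x60 has i:imm3:a = 10000b = 16, giving
   (0x60 | 0x80) rotated right by 16, that is 0x00E00000, with C
   updated.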
If updatesC is non-NULL, a boolean is written to it indicating whether or not the C flag is updated, as per ARM ARM "ThumbExpandImm_C". */ static UInt thumbExpandImm ( Bool* updatesC, UInt imm1, UInt imm3, UInt imm8 ) { vassert(imm1 < (1<<1)); vassert(imm3 < (1<<3)); vassert(imm8 < (1<<8)); UInt i_imm3_a = (imm1 << 4) | (imm3 << 1) | ((imm8 >> 7) & 1); UInt abcdefgh = imm8; UInt lbcdefgh = imm8 | 0x80; if (updatesC) { *updatesC = i_imm3_a >= 8; } switch (i_imm3_a) { case 0: case 1: return abcdefgh; case 2: case 3: return (abcdefgh << 16) | abcdefgh; case 4: case 5: return (abcdefgh << 24) | (abcdefgh << 8); case 6: case 7: return (abcdefgh << 24) | (abcdefgh << 16) | (abcdefgh << 8) | abcdefgh; case 8 ... 31: return lbcdefgh << (32 - i_imm3_a); default: break; } /*NOTREACHED*/vassert(0); } /* Version of thumbExpandImm where we simply feed it the instruction halfwords (the lowest addressed one is I0). */ static UInt thumbExpandImm_from_I0_I1 ( Bool* updatesC, UShort i0s, UShort i1s ) { UInt i0 = (UInt)i0s; UInt i1 = (UInt)i1s; UInt imm1 = SLICE_UInt(i0,10,10); UInt imm3 = SLICE_UInt(i1,14,12); UInt imm8 = SLICE_UInt(i1,7,0); return thumbExpandImm(updatesC, imm1, imm3, imm8); } /* Thumb16 only. Given the firstcond and mask fields from an IT instruction, compute the 32-bit ITSTATE value implied, as described in libvex_guest_arm.h. This is not the ARM ARM representation. Also produce the t/e chars for the 2nd, 3rd, 4th insns, for disassembly printing. Returns False if firstcond or mask denote something invalid. The number and conditions for the instructions to be conditionalised depend on firstcond and mask: mask cond 1 cond 2 cond 3 cond 4 1000 fc[3:0] x100 fc[3:0] fc[3:1]:x xy10 fc[3:0] fc[3:1]:x fc[3:1]:y xyz1 fc[3:0] fc[3:1]:x fc[3:1]:y fc[3:1]:z The condition fields are assembled in *itstate backwards (cond 4 at the top, cond 1 at the bottom). Conditions are << 4'd and then ^0xE'd, and those fields that correspond to instructions in the IT block are tagged with a 1 bit. */ static Bool compute_ITSTATE ( /*OUT*/UInt* itstate, /*OUT*/HChar* ch1, /*OUT*/HChar* ch2, /*OUT*/HChar* ch3, UInt firstcond, UInt mask ) { vassert(firstcond <= 0xF); vassert(mask <= 0xF); *itstate = 0; *ch1 = *ch2 = *ch3 = '.'; if (mask == 0) return False; /* the logic below actually ensures this anyway, but clearer to make it explicit. */ if (firstcond == 0xF) return False; /* NV is not allowed */ if (firstcond == 0xE && popcount32(mask) != 1) return False; /* if firstcond is AL then all the rest must be too */ UInt m3 = (mask >> 3) & 1; UInt m2 = (mask >> 2) & 1; UInt m1 = (mask >> 1) & 1; UInt m0 = (mask >> 0) & 1; UInt fc = (firstcond << 4) | 1/*in-IT-block*/; UInt ni = (0xE/*AL*/ << 4) | 0/*not-in-IT-block*/; if (m3 == 1 && (m2|m1|m0) == 0) { *itstate = (ni << 24) | (ni << 16) | (ni << 8) | fc; *itstate ^= 0xE0E0E0E0; return True; } if (m2 == 1 && (m1|m0) == 0) { *itstate = (ni << 24) | (ni << 16) | (setbit32(fc, 4, m3) << 8) | fc; *itstate ^= 0xE0E0E0E0; *ch1 = m3 == (firstcond & 1) ? 't' : 'e'; return True; } if (m1 == 1 && m0 == 0) { *itstate = (ni << 24) | (setbit32(fc, 4, m2) << 16) | (setbit32(fc, 4, m3) << 8) | fc; *itstate ^= 0xE0E0E0E0; *ch1 = m3 == (firstcond & 1) ? 't' : 'e'; *ch2 = m2 == (firstcond & 1) ? 't' : 'e'; return True; } if (m0 == 1) { *itstate = (setbit32(fc, 4, m1) << 24) | (setbit32(fc, 4, m2) << 16) | (setbit32(fc, 4, m3) << 8) | fc; *itstate ^= 0xE0E0E0E0; *ch1 = m3 == (firstcond & 1) ? 't' : 'e'; *ch2 = m2 == (firstcond & 1) ? 't' : 'e'; *ch3 = m1 == (firstcond & 1) ? 
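/* A worked example of the encoding built here (illustrative, computed
   by hand from this function): for "ITE EQ", firstcond = 0x0 (EQ) and
   mask = 1100b, so fc = 0x01, m3 = 1, and the m2 == 1 branch yields

      itstate = 0xE0E01101 ^ 0xE0E0E0E0 = 0x0000F1E1

   with *ch1 = 'e', as expected for an IT..ELSE pair. */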
              't' : 'e';
      return True;
   }
   return False;
}

/* Generate IR to do 32-bit bit reversal, a la Hacker's Delight
   Chapter 7 Section 1. */
static IRTemp gen_BITREV ( IRTemp x0 )
{
   IRTemp x1 = newTemp(Ity_I32);
   IRTemp x2 = newTemp(Ity_I32);
   IRTemp x3 = newTemp(Ity_I32);
   IRTemp x4 = newTemp(Ity_I32);
   IRTemp x5 = newTemp(Ity_I32);
   UInt   c1 = 0x55555555;
   UInt   c2 = 0x33333333;
   UInt   c3 = 0x0F0F0F0F;
   UInt   c4 = 0x00FF00FF;
   UInt   c5 = 0x0000FFFF;
   assign(x1,
          binop(Iop_Or32,
                binop(Iop_Shl32,
                      binop(Iop_And32, mkexpr(x0), mkU32(c1)),
                      mkU8(1)),
                binop(Iop_Shr32,
                      binop(Iop_And32, mkexpr(x0), mkU32(~c1)),
                      mkU8(1))
   ));
   assign(x2,
          binop(Iop_Or32,
                binop(Iop_Shl32,
                      binop(Iop_And32, mkexpr(x1), mkU32(c2)),
                      mkU8(2)),
                binop(Iop_Shr32,
                      binop(Iop_And32, mkexpr(x1), mkU32(~c2)),
                      mkU8(2))
   ));
   assign(x3,
          binop(Iop_Or32,
                binop(Iop_Shl32,
                      binop(Iop_And32, mkexpr(x2), mkU32(c3)),
                      mkU8(4)),
                binop(Iop_Shr32,
                      binop(Iop_And32, mkexpr(x2), mkU32(~c3)),
                      mkU8(4))
   ));
   assign(x4,
          binop(Iop_Or32,
                binop(Iop_Shl32,
                      binop(Iop_And32, mkexpr(x3), mkU32(c4)),
                      mkU8(8)),
                binop(Iop_Shr32,
                      binop(Iop_And32, mkexpr(x3), mkU32(~c4)),
                      mkU8(8))
   ));
   assign(x5,
          binop(Iop_Or32,
                binop(Iop_Shl32,
                      binop(Iop_And32, mkexpr(x4), mkU32(c5)),
                      mkU8(16)),
                binop(Iop_Shr32,
                      binop(Iop_And32, mkexpr(x4), mkU32(~c5)),
                      mkU8(16))
   ));
   return x5;
}

/* Generate IR to rearrange bytes 3:2:1:0 in a word into the order
   0:1:2:3 (aka byte-swap). */
static IRTemp gen_REV ( IRTemp arg )
{
   IRTemp res = newTemp(Ity_I32);
   assign(res,
          binop(Iop_Or32,
                binop(Iop_Shl32, mkexpr(arg), mkU8(24)),
          binop(Iop_Or32,
                binop(Iop_And32, binop(Iop_Shl32, mkexpr(arg), mkU8(8)),
                                 mkU32(0x00FF0000)),
          binop(Iop_Or32,
                binop(Iop_And32, binop(Iop_Shr32, mkexpr(arg), mkU8(8)),
                                 mkU32(0x0000FF00)),
                binop(Iop_And32, binop(Iop_Shr32, mkexpr(arg), mkU8(24)),
                                 mkU32(0x000000FF) )
   ))));
   return res;
}

/* Generate IR to rearrange bytes 3:2:1:0 in a word into the order
   2:3:0:1 (swap within lo and hi halves). */
static IRTemp gen_REV16 ( IRTemp arg )
{
   IRTemp res = newTemp(Ity_I32);
   assign(res,
          binop(Iop_Or32,
                binop(Iop_And32,
                      binop(Iop_Shl32, mkexpr(arg), mkU8(8)),
                      mkU32(0xFF00FF00)),
                binop(Iop_And32,
                      binop(Iop_Shr32, mkexpr(arg), mkU8(8)),
                      mkU32(0x00FF00FF))));
   return res;
}

/*------------------------------------------------------------*/
/*--- Advanced SIMD (NEON) instructions                    ---*/
/*------------------------------------------------------------*/

/*------------------------------------------------------------*/
/*--- NEON data processing                                 ---*/
/*------------------------------------------------------------*/

/* For all NEON DP ops, we use the normal scheme to handle conditional
   writes to registers -- pass in condT and hand that on to the
   put*Reg functions.  In ARM mode condT is always IRTemp_INVALID
   since NEON is unconditional for ARM.  In Thumb mode condT is
   derived from the ITSTATE shift register in the normal way.
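   A note on register numbering used by the helpers that follow (an
   informal summary of their code, not an independent spec): the D bit
   and the 4-bit register field combine to give a D-register number
   0..31.  For quad ops (bit 6 of the instruction) that number is
   halved to give the Q-register number, since qN aliases
   d(2N):d(2N+1); an odd D number there is architecturally impossible
   and is signalled by returning a value >= 0x100, which callers
   reject.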
*/ static UInt get_neon_d_regno(UInt theInstr) { UInt x = ((theInstr >> 18) & 0x10) | ((theInstr >> 12) & 0xF); if (theInstr & 0x40) { if (x & 1) { x = x + 0x100; } else { x = x >> 1; } } return x; } static UInt get_neon_n_regno(UInt theInstr) { UInt x = ((theInstr >> 3) & 0x10) | ((theInstr >> 16) & 0xF); if (theInstr & 0x40) { if (x & 1) { x = x + 0x100; } else { x = x >> 1; } } return x; } static UInt get_neon_m_regno(UInt theInstr) { UInt x = ((theInstr >> 1) & 0x10) | (theInstr & 0xF); if (theInstr & 0x40) { if (x & 1) { x = x + 0x100; } else { x = x >> 1; } } return x; } static Bool dis_neon_vext ( UInt theInstr, IRTemp condT ) { UInt dreg = get_neon_d_regno(theInstr); UInt mreg = get_neon_m_regno(theInstr); UInt nreg = get_neon_n_regno(theInstr); UInt imm4 = (theInstr >> 8) & 0xf; UInt Q = (theInstr >> 6) & 1; HChar reg_t = Q ? 'q' : 'd'; if (Q) { putQReg(dreg, triop(Iop_SliceV128, /*hiV128*/getQReg(mreg), /*loV128*/getQReg(nreg), mkU8(imm4)), condT); } else { putDRegI64(dreg, triop(Iop_Slice64, /*hiI64*/getDRegI64(mreg), /*loI64*/getDRegI64(nreg), mkU8(imm4)), condT); } DIP("vext.8 %c%u, %c%u, %c%u, #%u\n", reg_t, dreg, reg_t, nreg, reg_t, mreg, imm4); return True; } /* Generate specific vector FP binary ops, possibly with a fake rounding mode as required by the primop. */ static IRExpr* binop_w_fake_RM ( IROp op, IRExpr* argL, IRExpr* argR ) { switch (op) { case Iop_Add32Fx4: case Iop_Sub32Fx4: case Iop_Mul32Fx4: return triop(op, get_FAKE_roundingmode(), argL, argR ); case Iop_Add32x4: case Iop_Add16x8: case Iop_Sub32x4: case Iop_Sub16x8: case Iop_Mul32x4: case Iop_Mul16x8: case Iop_Mul32x2: case Iop_Mul16x4: case Iop_Add32Fx2: case Iop_Sub32Fx2: case Iop_Mul32Fx2: case Iop_PwAdd32Fx2: return binop(op, argL, argR); default: ppIROp(op); vassert(0); } } /* VTBL, VTBX */ static Bool dis_neon_vtb ( UInt theInstr, IRTemp condT ) { UInt op = (theInstr >> 6) & 1; UInt dreg = get_neon_d_regno(theInstr & ~(1 << 6)); UInt nreg = get_neon_n_regno(theInstr & ~(1 << 6)); UInt mreg = get_neon_m_regno(theInstr & ~(1 << 6)); UInt len = (theInstr >> 8) & 3; Int i; IROp cmp; ULong imm; IRTemp arg_l; IRTemp old_mask, new_mask, cur_mask; IRTemp old_res, new_res; IRTemp old_arg, new_arg; if (dreg >= 0x100 || mreg >= 0x100 || nreg >= 0x100) return False; if (nreg + len > 31) return False; cmp = Iop_CmpGT8Ux8; old_mask = newTemp(Ity_I64); old_res = newTemp(Ity_I64); old_arg = newTemp(Ity_I64); assign(old_mask, mkU64(0)); assign(old_res, mkU64(0)); assign(old_arg, getDRegI64(mreg)); imm = 8; imm = (imm << 8) | imm; imm = (imm << 16) | imm; imm = (imm << 32) | imm; for (i = 0; i <= len; i++) { arg_l = newTemp(Ity_I64); new_mask = newTemp(Ity_I64); cur_mask = newTemp(Ity_I64); new_res = newTemp(Ity_I64); new_arg = newTemp(Ity_I64); assign(arg_l, getDRegI64(nreg+i)); assign(new_arg, binop(Iop_Sub8x8, mkexpr(old_arg), mkU64(imm))); assign(cur_mask, binop(cmp, mkU64(imm), mkexpr(old_arg))); assign(new_mask, binop(Iop_Or64, mkexpr(old_mask), mkexpr(cur_mask))); assign(new_res, binop(Iop_Or64, mkexpr(old_res), binop(Iop_And64, binop(Iop_Perm8x8, mkexpr(arg_l), binop(Iop_And64, mkexpr(old_arg), mkexpr(cur_mask))), mkexpr(cur_mask)))); old_arg = new_arg; old_mask = new_mask; old_res = new_res; } if (op) { new_res = newTemp(Ity_I64); assign(new_res, binop(Iop_Or64, binop(Iop_And64, getDRegI64(dreg), unop(Iop_Not64, mkexpr(old_mask))), mkexpr(old_res))); old_res = new_res; } putDRegI64(dreg, mkexpr(old_res), condT); DIP("vtb%c.8 d%u, {", op ? 
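/* An illustrative per-byte reference model for the loop above (not
   part of the decoder; 'tbl' stands for the concatenated bytes of the
   (len+1) table registers): for each byte lane i of Dm,

      res[i] = m[i] < 8*(len+1) ? tbl[m[i]]
             : op == 0          ? 0          // VTBL: out of range -> 0
             :                    d_old[i];  // VTBX: keep old value

   'cur_mask' marks lanes whose index falls in the current table
   register, and 'new_arg' steps the indices down by 8 per register so
   Iop_Perm8x8 always sees indices in 0..7. */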
'x' : 'l', dreg); if (len > 0) { DIP("d%u-d%u", nreg, nreg + len); } else { DIP("d%u", nreg); } DIP("}, d%u\n", mreg); return True; } /* VDUP (scalar) */ static Bool dis_neon_vdup ( UInt theInstr, IRTemp condT ) { UInt Q = (theInstr >> 6) & 1; UInt dreg = ((theInstr >> 18) & 0x10) | ((theInstr >> 12) & 0xF); UInt mreg = ((theInstr >> 1) & 0x10) | (theInstr & 0xF); UInt imm4 = (theInstr >> 16) & 0xF; UInt index; UInt size; IRTemp arg_m; IRTemp res; IROp op, op2; if ((imm4 == 0) || (imm4 == 8)) return False; if ((Q == 1) && ((dreg & 1) == 1)) return False; if (Q) dreg >>= 1; arg_m = newTemp(Ity_I64); assign(arg_m, getDRegI64(mreg)); if (Q) res = newTemp(Ity_V128); else res = newTemp(Ity_I64); if ((imm4 & 1) == 1) { op = Q ? Iop_Dup8x16 : Iop_Dup8x8; op2 = Iop_GetElem8x8; index = imm4 >> 1; size = 8; } else if ((imm4 & 3) == 2) { op = Q ? Iop_Dup16x8 : Iop_Dup16x4; op2 = Iop_GetElem16x4; index = imm4 >> 2; size = 16; } else if ((imm4 & 7) == 4) { op = Q ? Iop_Dup32x4 : Iop_Dup32x2; op2 = Iop_GetElem32x2; index = imm4 >> 3; size = 32; } else { return False; // can this ever happen? } assign(res, unop(op, binop(op2, mkexpr(arg_m), mkU8(index)))); if (Q) { putQReg(dreg, mkexpr(res), condT); } else { putDRegI64(dreg, mkexpr(res), condT); } DIP("vdup.%u %c%u, d%u[%u]\n", size, Q ? 'q' : 'd', dreg, mreg, index); return True; } /* A7.4.1 Three registers of the same length */ static Bool dis_neon_data_3same ( UInt theInstr, IRTemp condT ) { /* In paths where this returns False, indicating a non-decodable instruction, there may still be some IR assignments to temporaries generated. This is inconvenient but harmless, and the post-front-end IR optimisation pass will just remove them anyway. So there's no effort made here to tidy it up. */ UInt Q = (theInstr >> 6) & 1; UInt dreg = get_neon_d_regno(theInstr); UInt nreg = get_neon_n_regno(theInstr); UInt mreg = get_neon_m_regno(theInstr); UInt A = (theInstr >> 8) & 0xF; UInt B = (theInstr >> 4) & 1; UInt C = (theInstr >> 20) & 0x3; UInt U = (theInstr >> 24) & 1; UInt size = C; IRTemp arg_n; IRTemp arg_m; IRTemp res; if (Q) { arg_n = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(arg_n, getQReg(nreg)); assign(arg_m, getQReg(mreg)); } else { arg_n = newTemp(Ity_I64); arg_m = newTemp(Ity_I64); res = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); assign(arg_m, getDRegI64(mreg)); } switch(A) { case 0: if (B == 0) { /* VHADD */ ULong imm = 0; IRExpr *imm_val; IROp addOp; IROp andOp; IROp shOp; HChar regType = Q ? 'q' : 'd'; if (size == 3) return False; switch(size) { case 0: imm = 0x101010101010101LL; break; case 1: imm = 0x1000100010001LL; break; case 2: imm = 0x100000001LL; break; default: vassert(0); } if (Q) { imm_val = binop(Iop_64HLtoV128, mkU64(imm), mkU64(imm)); andOp = Iop_AndV128; } else { imm_val = mkU64(imm); andOp = Iop_And64; } if (U) { switch(size) { case 0: addOp = Q ? Iop_Add8x16 : Iop_Add8x8; shOp = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; break; case 1: addOp = Q ? Iop_Add16x8 : Iop_Add16x4; shOp = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; break; case 2: addOp = Q ? Iop_Add32x4 : Iop_Add32x2; shOp = Q ? Iop_ShrN32x4 : Iop_ShrN32x2; break; default: vassert(0); } } else { switch(size) { case 0: addOp = Q ? Iop_Add8x16 : Iop_Add8x8; shOp = Q ? Iop_SarN8x16 : Iop_SarN8x8; break; case 1: addOp = Q ? Iop_Add16x8 : Iop_Add16x4; shOp = Q ? Iop_SarN16x8 : Iop_SarN16x4; break; case 2: addOp = Q ? Iop_Add32x4 : Iop_Add32x2; shOp = Q ? 
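/* The identity used by VHADD here (an informal aside): a halving add
   can be computed without widening as

      (a + b) >> 1  ==  (a >> 1) + (b >> 1) + (((a & 1) + (b & 1)) >> 1)

   valid for the signed and the unsigned >> alike; e.g. a = 3, b = 5:
   1 + 2 + ((1 + 1) >> 1) = 4.  That is exactly the three-term sum
   built below, with the 'imm' constant supplying the per-lane & 1
   masks. */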
Iop_SarN32x4 : Iop_SarN32x2; break; default: vassert(0); } } assign(res, binop(addOp, binop(addOp, binop(shOp, mkexpr(arg_m), mkU8(1)), binop(shOp, mkexpr(arg_n), mkU8(1))), binop(shOp, binop(addOp, binop(andOp, mkexpr(arg_m), imm_val), binop(andOp, mkexpr(arg_n), imm_val)), mkU8(1)))); DIP("vhadd.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, regType, dreg, regType, nreg, regType, mreg); } else { /* VQADD */ IROp op, op2; IRTemp tmp; HChar reg_t = Q ? 'q' : 'd'; if (Q) { switch (size) { case 0: op = U ? Iop_QAdd8Ux16 : Iop_QAdd8Sx16; op2 = Iop_Add8x16; break; case 1: op = U ? Iop_QAdd16Ux8 : Iop_QAdd16Sx8; op2 = Iop_Add16x8; break; case 2: op = U ? Iop_QAdd32Ux4 : Iop_QAdd32Sx4; op2 = Iop_Add32x4; break; case 3: op = U ? Iop_QAdd64Ux2 : Iop_QAdd64Sx2; op2 = Iop_Add64x2; break; default: vassert(0); } } else { switch (size) { case 0: op = U ? Iop_QAdd8Ux8 : Iop_QAdd8Sx8; op2 = Iop_Add8x8; break; case 1: op = U ? Iop_QAdd16Ux4 : Iop_QAdd16Sx4; op2 = Iop_Add16x4; break; case 2: op = U ? Iop_QAdd32Ux2 : Iop_QAdd32Sx2; op2 = Iop_Add32x2; break; case 3: op = U ? Iop_QAdd64Ux1 : Iop_QAdd64Sx1; op2 = Iop_Add64; break; default: vassert(0); } } if (Q) { tmp = newTemp(Ity_V128); } else { tmp = newTemp(Ity_I64); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); assign(tmp, binop(op2, mkexpr(arg_n), mkexpr(arg_m))); setFlag_QC(mkexpr(res), mkexpr(tmp), Q, condT); DIP("vqadd.%c%d %c%u %c%u, %c%u\n", U ? 'u' : 's', 8 << size, reg_t, dreg, reg_t, nreg, reg_t, mreg); } break; case 1: if (B == 0) { /* VRHADD */ /* VRHADD C, A, B ::= C = (A >> 1) + (B >> 1) + (((A & 1) + (B & 1) + 1) >> 1) */ IROp shift_op, add_op; IRTemp cc; ULong one = 1; HChar reg_t = Q ? 'q' : 'd'; switch (size) { case 0: one = (one << 8) | one; /* fall through */ case 1: one = (one << 16) | one; /* fall through */ case 2: one = (one << 32) | one; break; case 3: return False; default: vassert(0); } if (Q) { switch (size) { case 0: shift_op = U ? Iop_ShrN8x16 : Iop_SarN8x16; add_op = Iop_Add8x16; break; case 1: shift_op = U ? Iop_ShrN16x8 : Iop_SarN16x8; add_op = Iop_Add16x8; break; case 2: shift_op = U ? Iop_ShrN32x4 : Iop_SarN32x4; add_op = Iop_Add32x4; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: shift_op = U ? Iop_ShrN8x8 : Iop_SarN8x8; add_op = Iop_Add8x8; break; case 1: shift_op = U ? Iop_ShrN16x4 : Iop_SarN16x4; add_op = Iop_Add16x4; break; case 2: shift_op = U ? Iop_ShrN32x2 : Iop_SarN32x2; add_op = Iop_Add32x2; break; case 3: return False; default: vassert(0); } } if (Q) { cc = newTemp(Ity_V128); assign(cc, binop(shift_op, binop(add_op, binop(add_op, binop(Iop_AndV128, mkexpr(arg_n), binop(Iop_64HLtoV128, mkU64(one), mkU64(one))), binop(Iop_AndV128, mkexpr(arg_m), binop(Iop_64HLtoV128, mkU64(one), mkU64(one)))), binop(Iop_64HLtoV128, mkU64(one), mkU64(one))), mkU8(1))); assign(res, binop(add_op, binop(add_op, binop(shift_op, mkexpr(arg_n), mkU8(1)), binop(shift_op, mkexpr(arg_m), mkU8(1))), mkexpr(cc))); } else { cc = newTemp(Ity_I64); assign(cc, binop(shift_op, binop(add_op, binop(add_op, binop(Iop_And64, mkexpr(arg_n), mkU64(one)), binop(Iop_And64, mkexpr(arg_m), mkU64(one))), mkU64(one)), mkU8(1))); assign(res, binop(add_op, binop(add_op, binop(shift_op, mkexpr(arg_n), mkU8(1)), binop(shift_op, mkexpr(arg_m), mkU8(1))), mkexpr(cc))); } DIP("vrhadd.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, reg_t, dreg, reg_t, nreg, reg_t, mreg); } else { if (U == 0) { switch(C) { case 0: { /* VAND */ HChar reg_t = Q ? 
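/* Note on setFlag_QC as used by VQADD above (and the other saturating
   cases later on) -- a sketch of the idea, not decoder logic:

      sat  = QAdd(a, b);   // saturating add
      wrap = a + b;        // plain wrapping add
      if (sat != wrap)     // any lane differing means saturation
         FPSCR.QC = 1;     // so the sticky QC bit gets set

   which is why each such case computes the operation twice, once with
   the Q* op and once with the plain op. */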
'q' : 'd'; if (Q) { assign(res, binop(Iop_AndV128, mkexpr(arg_n), mkexpr(arg_m))); } else { assign(res, binop(Iop_And64, mkexpr(arg_n), mkexpr(arg_m))); } DIP("vand %c%u, %c%u, %c%u\n", reg_t, dreg, reg_t, nreg, reg_t, mreg); break; } case 1: { /* VBIC */ HChar reg_t = Q ? 'q' : 'd'; if (Q) { assign(res, binop(Iop_AndV128,mkexpr(arg_n), unop(Iop_NotV128, mkexpr(arg_m)))); } else { assign(res, binop(Iop_And64, mkexpr(arg_n), unop(Iop_Not64, mkexpr(arg_m)))); } DIP("vbic %c%u, %c%u, %c%u\n", reg_t, dreg, reg_t, nreg, reg_t, mreg); break; } case 2: if ( nreg != mreg) { /* VORR */ HChar reg_t = Q ? 'q' : 'd'; if (Q) { assign(res, binop(Iop_OrV128, mkexpr(arg_n), mkexpr(arg_m))); } else { assign(res, binop(Iop_Or64, mkexpr(arg_n), mkexpr(arg_m))); } DIP("vorr %c%u, %c%u, %c%u\n", reg_t, dreg, reg_t, nreg, reg_t, mreg); } else { /* VMOV */ HChar reg_t = Q ? 'q' : 'd'; assign(res, mkexpr(arg_m)); DIP("vmov %c%u, %c%u\n", reg_t, dreg, reg_t, mreg); } break; case 3:{ /* VORN */ HChar reg_t = Q ? 'q' : 'd'; if (Q) { assign(res, binop(Iop_OrV128,mkexpr(arg_n), unop(Iop_NotV128, mkexpr(arg_m)))); } else { assign(res, binop(Iop_Or64, mkexpr(arg_n), unop(Iop_Not64, mkexpr(arg_m)))); } DIP("vorn %c%u, %c%u, %c%u\n", reg_t, dreg, reg_t, nreg, reg_t, mreg); break; } default: vassert(0); } } else { switch(C) { case 0: /* VEOR (XOR) */ if (Q) { assign(res, binop(Iop_XorV128, mkexpr(arg_n), mkexpr(arg_m))); } else { assign(res, binop(Iop_Xor64, mkexpr(arg_n), mkexpr(arg_m))); } DIP("veor %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); break; case 1: /* VBSL */ if (Q) { IRTemp reg_d = newTemp(Ity_V128); assign(reg_d, getQReg(dreg)); assign(res, binop(Iop_OrV128, binop(Iop_AndV128, mkexpr(arg_n), mkexpr(reg_d)), binop(Iop_AndV128, mkexpr(arg_m), unop(Iop_NotV128, mkexpr(reg_d)) ) ) ); } else { IRTemp reg_d = newTemp(Ity_I64); assign(reg_d, getDRegI64(dreg)); assign(res, binop(Iop_Or64, binop(Iop_And64, mkexpr(arg_n), mkexpr(reg_d)), binop(Iop_And64, mkexpr(arg_m), unop(Iop_Not64, mkexpr(reg_d))))); } DIP("vbsl %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); break; case 2: /* VBIT */ if (Q) { IRTemp reg_d = newTemp(Ity_V128); assign(reg_d, getQReg(dreg)); assign(res, binop(Iop_OrV128, binop(Iop_AndV128, mkexpr(arg_n), mkexpr(arg_m)), binop(Iop_AndV128, mkexpr(reg_d), unop(Iop_NotV128, mkexpr(arg_m))))); } else { IRTemp reg_d = newTemp(Ity_I64); assign(reg_d, getDRegI64(dreg)); assign(res, binop(Iop_Or64, binop(Iop_And64, mkexpr(arg_n), mkexpr(arg_m)), binop(Iop_And64, mkexpr(reg_d), unop(Iop_Not64, mkexpr(arg_m))))); } DIP("vbit %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); break; case 3: /* VBIF */ if (Q) { IRTemp reg_d = newTemp(Ity_V128); assign(reg_d, getQReg(dreg)); assign(res, binop(Iop_OrV128, binop(Iop_AndV128, mkexpr(reg_d), mkexpr(arg_m)), binop(Iop_AndV128, mkexpr(arg_n), unop(Iop_NotV128, mkexpr(arg_m))))); } else { IRTemp reg_d = newTemp(Ity_I64); assign(reg_d, getDRegI64(dreg)); assign(res, binop(Iop_Or64, binop(Iop_And64, mkexpr(reg_d), mkexpr(arg_m)), binop(Iop_And64, mkexpr(arg_n), unop(Iop_Not64, mkexpr(arg_m))))); } DIP("vbif %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 
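/* Informal summary of the three bit-select ops just decoded, with
   d = old destination value and n, m the sources:

      VBSL: res = (n & d) | (m & ~d)   // d is the selector mask
      VBIT: res = (n & m) | (d & ~m)   // insert n where m is 1
      VBIF: res = (d & m) | (n & ~m)   // insert n where m is 0
*/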
'q' : 'd', mreg); break; default: vassert(0); } } } break; case 2: if (B == 0) { /* VHSUB */ /* (A >> 1) - (B >> 1) - (NOT (A) & B & 1) */ ULong imm = 0; IRExpr *imm_val; IROp subOp; IROp notOp; IROp andOp; IROp shOp; if (size == 3) return False; switch(size) { case 0: imm = 0x101010101010101LL; break; case 1: imm = 0x1000100010001LL; break; case 2: imm = 0x100000001LL; break; default: vassert(0); } if (Q) { imm_val = binop(Iop_64HLtoV128, mkU64(imm), mkU64(imm)); andOp = Iop_AndV128; notOp = Iop_NotV128; } else { imm_val = mkU64(imm); andOp = Iop_And64; notOp = Iop_Not64; } if (U) { switch(size) { case 0: subOp = Q ? Iop_Sub8x16 : Iop_Sub8x8; shOp = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; break; case 1: subOp = Q ? Iop_Sub16x8 : Iop_Sub16x4; shOp = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; break; case 2: subOp = Q ? Iop_Sub32x4 : Iop_Sub32x2; shOp = Q ? Iop_ShrN32x4 : Iop_ShrN32x2; break; default: vassert(0); } } else { switch(size) { case 0: subOp = Q ? Iop_Sub8x16 : Iop_Sub8x8; shOp = Q ? Iop_SarN8x16 : Iop_SarN8x8; break; case 1: subOp = Q ? Iop_Sub16x8 : Iop_Sub16x4; shOp = Q ? Iop_SarN16x8 : Iop_SarN16x4; break; case 2: subOp = Q ? Iop_Sub32x4 : Iop_Sub32x2; shOp = Q ? Iop_SarN32x4 : Iop_SarN32x2; break; default: vassert(0); } } assign(res, binop(subOp, binop(subOp, binop(shOp, mkexpr(arg_n), mkU8(1)), binop(shOp, mkexpr(arg_m), mkU8(1))), binop(andOp, binop(andOp, unop(notOp, mkexpr(arg_n)), mkexpr(arg_m)), imm_val))); DIP("vhsub.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VQSUB */ IROp op, op2; IRTemp tmp; if (Q) { switch (size) { case 0: op = U ? Iop_QSub8Ux16 : Iop_QSub8Sx16; op2 = Iop_Sub8x16; break; case 1: op = U ? Iop_QSub16Ux8 : Iop_QSub16Sx8; op2 = Iop_Sub16x8; break; case 2: op = U ? Iop_QSub32Ux4 : Iop_QSub32Sx4; op2 = Iop_Sub32x4; break; case 3: op = U ? Iop_QSub64Ux2 : Iop_QSub64Sx2; op2 = Iop_Sub64x2; break; default: vassert(0); } } else { switch (size) { case 0: op = U ? Iop_QSub8Ux8 : Iop_QSub8Sx8; op2 = Iop_Sub8x8; break; case 1: op = U ? Iop_QSub16Ux4 : Iop_QSub16Sx4; op2 = Iop_Sub16x4; break; case 2: op = U ? Iop_QSub32Ux2 : Iop_QSub32Sx2; op2 = Iop_Sub32x2; break; case 3: op = U ? Iop_QSub64Ux1 : Iop_QSub64Sx1; op2 = Iop_Sub64; break; default: vassert(0); } } if (Q) tmp = newTemp(Ity_V128); else tmp = newTemp(Ity_I64); assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); assign(tmp, binop(op2, mkexpr(arg_n), mkexpr(arg_m))); setFlag_QC(mkexpr(res), mkexpr(tmp), Q, condT); DIP("vqsub.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } break; case 3: { IROp op; if (Q) { switch (size) { case 0: op = U ? Iop_CmpGT8Ux16 : Iop_CmpGT8Sx16; break; case 1: op = U ? Iop_CmpGT16Ux8 : Iop_CmpGT16Sx8; break; case 2: op = U ? Iop_CmpGT32Ux4 : Iop_CmpGT32Sx4; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op = U ? Iop_CmpGT8Ux8 : Iop_CmpGT8Sx8; break; case 1: op = U ? Iop_CmpGT16Ux4 : Iop_CmpGT16Sx4; break; case 2: op = U ? Iop_CmpGT32Ux2: Iop_CmpGT32Sx2; break; case 3: return False; default: vassert(0); } } if (B == 0) { /* VCGT */ assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vcgt.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VCGE */ /* VCGE res, argn, argm is equal to VCGT tmp, argm, argn VNOT res, tmp */ assign(res, unop(Q ? 
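/* Aside on the VHSUB case above: it relies on the identity

      (a - b) >> 1  ==  (a >> 1) - (b >> 1) - (~a & b & 1)

   e.g. a = 2, b = 3: 1 - 1 - 1 = -1, matching (2 - 3) >> 1 under an
   arithmetic shift.  (Informal note only.) */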
Iop_NotV128 : Iop_Not64, binop(op, mkexpr(arg_m), mkexpr(arg_n)))); DIP("vcge.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } } break; case 4: if (B == 0) { /* VSHL */ IROp op = Iop_INVALID, sub_op = Iop_INVALID; IRTemp tmp = IRTemp_INVALID; if (U) { switch (size) { case 0: op = Q ? Iop_Shl8x16 : Iop_Shl8x8; break; case 1: op = Q ? Iop_Shl16x8 : Iop_Shl16x4; break; case 2: op = Q ? Iop_Shl32x4 : Iop_Shl32x2; break; case 3: op = Q ? Iop_Shl64x2 : Iop_Shl64; break; default: vassert(0); } } else { tmp = newTemp(Q ? Ity_V128 : Ity_I64); switch (size) { case 0: op = Q ? Iop_Sar8x16 : Iop_Sar8x8; sub_op = Q ? Iop_Sub8x16 : Iop_Sub8x8; break; case 1: op = Q ? Iop_Sar16x8 : Iop_Sar16x4; sub_op = Q ? Iop_Sub16x8 : Iop_Sub16x4; break; case 2: op = Q ? Iop_Sar32x4 : Iop_Sar32x2; sub_op = Q ? Iop_Sub32x4 : Iop_Sub32x2; break; case 3: op = Q ? Iop_Sar64x2 : Iop_Sar64; sub_op = Q ? Iop_Sub64x2 : Iop_Sub64; break; default: vassert(0); } } if (U) { if (!Q && (size == 3)) assign(res, binop(op, mkexpr(arg_m), unop(Iop_64to8, mkexpr(arg_n)))); else assign(res, binop(op, mkexpr(arg_m), mkexpr(arg_n))); } else { if (Q) assign(tmp, binop(sub_op, binop(Iop_64HLtoV128, mkU64(0), mkU64(0)), mkexpr(arg_n))); else assign(tmp, binop(sub_op, mkU64(0), mkexpr(arg_n))); if (!Q && (size == 3)) assign(res, binop(op, mkexpr(arg_m), unop(Iop_64to8, mkexpr(tmp)))); else assign(res, binop(op, mkexpr(arg_m), mkexpr(tmp))); } DIP("vshl.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, Q ? 'q' : 'd', nreg); } else { /* VQSHL */ IROp op, op_rev, op_shrn, op_shln, cmp_neq, cmp_gt; IRTemp tmp, shval, mask, old_shval; UInt i; ULong esize; cmp_neq = Q ? Iop_CmpNEZ8x16 : Iop_CmpNEZ8x8; cmp_gt = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8; if (U) { switch (size) { case 0: op = Q ? Iop_QShl8x16 : Iop_QShl8x8; op_rev = Q ? Iop_Shr8x16 : Iop_Shr8x8; op_shrn = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; op_shln = Q ? Iop_ShlN8x16 : Iop_ShlN8x8; break; case 1: op = Q ? Iop_QShl16x8 : Iop_QShl16x4; op_rev = Q ? Iop_Shr16x8 : Iop_Shr16x4; op_shrn = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; op_shln = Q ? Iop_ShlN16x8 : Iop_ShlN16x4; break; case 2: op = Q ? Iop_QShl32x4 : Iop_QShl32x2; op_rev = Q ? Iop_Shr32x4 : Iop_Shr32x2; op_shrn = Q ? Iop_ShrN32x4 : Iop_ShrN32x2; op_shln = Q ? Iop_ShlN32x4 : Iop_ShlN32x2; break; case 3: op = Q ? Iop_QShl64x2 : Iop_QShl64x1; op_rev = Q ? Iop_Shr64x2 : Iop_Shr64; op_shrn = Q ? Iop_ShrN64x2 : Iop_Shr64; op_shln = Q ? Iop_ShlN64x2 : Iop_Shl64; break; default: vassert(0); } } else { switch (size) { case 0: op = Q ? Iop_QSal8x16 : Iop_QSal8x8; op_rev = Q ? Iop_Sar8x16 : Iop_Sar8x8; op_shrn = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; op_shln = Q ? Iop_ShlN8x16 : Iop_ShlN8x8; break; case 1: op = Q ? Iop_QSal16x8 : Iop_QSal16x4; op_rev = Q ? Iop_Sar16x8 : Iop_Sar16x4; op_shrn = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; op_shln = Q ? Iop_ShlN16x8 : Iop_ShlN16x4; break; case 2: op = Q ? Iop_QSal32x4 : Iop_QSal32x2; op_rev = Q ? Iop_Sar32x4 : Iop_Sar32x2; op_shrn = Q ? Iop_ShrN32x4 : Iop_ShrN32x2; op_shln = Q ? Iop_ShlN32x4 : Iop_ShlN32x2; break; case 3: op = Q ? Iop_QSal64x2 : Iop_QSal64x1; op_rev = Q ? Iop_Sar64x2 : Iop_Sar64; op_shrn = Q ? Iop_ShrN64x2 : Iop_Shr64; op_shln = Q ? 
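/* Note on the 'shval' computation below (informal): only the least
   significant byte of each element of the shift vector matters
   architecturally, so it is isolated by shifting to the top of the
   element and logically back down, then copied into every byte of the
   element by the OR/shift ladder.  That lets the byte-granularity
   compares used for the saturation tests (CmpGT8Sx.., CmpNEZ8x..) act
   uniformly on whole elements. */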
                          Iop_ShlN64x2 : Iop_Shl64;
                  break;
               default:
                  vassert(0);
            }
         }
         if (Q) {
            tmp = newTemp(Ity_V128);
            shval = newTemp(Ity_V128);
            mask = newTemp(Ity_V128);
         } else {
            tmp = newTemp(Ity_I64);
            shval = newTemp(Ity_I64);
            mask = newTemp(Ity_I64);
         }
         assign(res, binop(op, mkexpr(arg_m), mkexpr(arg_n)));
         /* Only least significant byte from second argument is used.
            Copy this byte to the whole vector element. */
         assign(shval, binop(op_shrn,
                             binop(op_shln,
                                   mkexpr(arg_n),
                                   mkU8((8 << size) - 8)),
                             mkU8((8 << size) - 8)));
         for(i = 0; i < size; i++) {
            old_shval = shval;
            shval = newTemp(Q ? Ity_V128 : Ity_I64);
            assign(shval, binop(Q ? Iop_OrV128 : Iop_Or64,
                                mkexpr(old_shval),
                                binop(op_shln,
                                      mkexpr(old_shval),
                                      mkU8(8 << i))));
         }
         /* If the shift is greater than or equal to the element size
            and the element is non-zero, then the QC flag should be
            set. */
         esize = (8 << size) - 1;
         esize = (esize << 8) | esize;
         esize = (esize << 16) | esize;
         esize = (esize << 32) | esize;
         setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64,
                          binop(cmp_gt, mkexpr(shval),
                                Q ? mkU128(esize) : mkU64(esize)),
                          unop(cmp_neq, mkexpr(arg_m))),
                    Q ? mkU128(0) : mkU64(0),
                    Q, condT);
         /* Otherwise the QC flag should be set if the shift value is
            positive and the result, right-shifted by the same value,
            is not equal to the left argument. */
         assign(mask, binop(cmp_gt, mkexpr(shval),
                            Q ? mkU128(0) : mkU64(0)));
         if (!Q && size == 3)
            assign(tmp, binop(op_rev, mkexpr(res),
                              unop(Iop_64to8, mkexpr(arg_n))));
         else
            assign(tmp, binop(op_rev, mkexpr(res), mkexpr(arg_n)));
         setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64,
                          mkexpr(tmp), mkexpr(mask)),
                    binop(Q ? Iop_AndV128 : Iop_And64,
                          mkexpr(arg_m), mkexpr(mask)),
                    Q, condT);
         DIP("vqshl.%c%d %c%u, %c%u, %c%u\n",
             U ? 'u' : 's', 8 << size,
             Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg,
             Q ? 'q' : 'd', nreg);
      }
      break;
   case 5:
      if (B == 0) {
         /* VRSHL */
         IROp op, op_shrn, op_shln, cmp_gt, op_add;
         IRTemp shval, old_shval, imm_val, round;
         UInt i;
         ULong imm;
         cmp_gt = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8;
         imm = 1L;
         switch (size) {
            case 0: imm = (imm << 8) | imm; /* fall through */
            case 1: imm = (imm << 16) | imm; /* fall through */
            case 2: imm = (imm << 32) | imm; /* fall through */
            case 3: break;
            default: vassert(0);
         }
         imm_val = newTemp(Q ? Ity_V128 : Ity_I64);
         round = newTemp(Q ? Ity_V128 : Ity_I64);
         assign(imm_val, Q ? mkU128(imm) : mkU64(imm));
         if (U) {
            switch (size) {
               case 0:
                  op = Q ? Iop_Shl8x16 : Iop_Shl8x8;
                  op_add = Q ? Iop_Add8x16 : Iop_Add8x8;
                  op_shrn = Q ? Iop_ShrN8x16 : Iop_ShrN8x8;
                  op_shln = Q ? Iop_ShlN8x16 : Iop_ShlN8x8;
                  break;
               case 1:
                  op = Q ? Iop_Shl16x8 : Iop_Shl16x4;
                  op_add = Q ? Iop_Add16x8 : Iop_Add16x4;
                  op_shrn = Q ? Iop_ShrN16x8 : Iop_ShrN16x4;
                  op_shln = Q ? Iop_ShlN16x8 : Iop_ShlN16x4;
                  break;
               case 2:
                  op = Q ? Iop_Shl32x4 : Iop_Shl32x2;
                  op_add = Q ? Iop_Add32x4 : Iop_Add32x2;
                  op_shrn = Q ? Iop_ShrN32x4 : Iop_ShrN32x2;
                  op_shln = Q ? Iop_ShlN32x4 : Iop_ShlN32x2;
                  break;
               case 3:
                  op = Q ? Iop_Shl64x2 : Iop_Shl64;
                  op_add = Q ? Iop_Add64x2 : Iop_Add64;
                  op_shrn = Q ? Iop_ShrN64x2 : Iop_Shr64;
                  op_shln = Q ? Iop_ShlN64x2 : Iop_Shl64;
                  break;
               default:
                  vassert(0);
            }
         } else {
            switch (size) {
               case 0:
                  op = Q ? Iop_Sal8x16 : Iop_Sal8x8;
                  op_add = Q ? Iop_Add8x16 : Iop_Add8x8;
                  op_shrn = Q ? Iop_ShrN8x16 : Iop_ShrN8x8;
                  op_shln = Q ? Iop_ShlN8x16 : Iop_ShlN8x8;
                  break;
               case 1:
                  op = Q ? Iop_Sal16x8 : Iop_Sal16x4;
                  op_add = Q ? Iop_Add16x8 : Iop_Add16x4;
                  op_shrn = Q ? Iop_ShrN16x8 : Iop_ShrN16x4;
                  op_shln = Q ? Iop_ShlN16x8 : Iop_ShlN16x4;
                  break;
               case 2:
                  op = Q ? Iop_Sal32x4 : Iop_Sal32x2;
                  op_add = Q ? Iop_Add32x4 : Iop_Add32x2;
                  op_shrn = Q ? Iop_ShrN32x4 : Iop_ShrN32x2;
                  op_shln = Q ?
Iop_ShlN32x4 : Iop_ShlN32x2; break; case 3: op = Q ? Iop_Sal64x2 : Iop_Sal64x1; op_add = Q ? Iop_Add64x2 : Iop_Add64; op_shrn = Q ? Iop_ShrN64x2 : Iop_Shr64; op_shln = Q ? Iop_ShlN64x2 : Iop_Shl64; break; default: vassert(0); } } if (Q) { shval = newTemp(Ity_V128); } else { shval = newTemp(Ity_I64); } /* Only least significant byte from second argument is used. Copy this byte to the whole vector element. */ assign(shval, binop(op_shrn, binop(op_shln, mkexpr(arg_n), mkU8((8 << size) - 8)), mkU8((8 << size) - 8))); for (i = 0; i < size; i++) { old_shval = shval; shval = newTemp(Q ? Ity_V128 : Ity_I64); assign(shval, binop(Q ? Iop_OrV128 : Iop_Or64, mkexpr(old_shval), binop(op_shln, mkexpr(old_shval), mkU8(8 << i)))); } /* Compute the result */ if (!Q && size == 3 && U) { assign(round, binop(Q ? Iop_AndV128 : Iop_And64, binop(op, mkexpr(arg_m), unop(Iop_64to8, binop(op_add, mkexpr(arg_n), mkexpr(imm_val)))), binop(Q ? Iop_AndV128 : Iop_And64, mkexpr(imm_val), binop(cmp_gt, Q ? mkU128(0) : mkU64(0), mkexpr(arg_n))))); assign(res, binop(op_add, binop(op, mkexpr(arg_m), unop(Iop_64to8, mkexpr(arg_n))), mkexpr(round))); } else { assign(round, binop(Q ? Iop_AndV128 : Iop_And64, binop(op, mkexpr(arg_m), binop(op_add, mkexpr(arg_n), mkexpr(imm_val))), binop(Q ? Iop_AndV128 : Iop_And64, mkexpr(imm_val), binop(cmp_gt, Q ? mkU128(0) : mkU64(0), mkexpr(arg_n))))); assign(res, binop(op_add, binop(op, mkexpr(arg_m), mkexpr(arg_n)), mkexpr(round))); } DIP("vrshl.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, Q ? 'q' : 'd', nreg); } else { /* VQRSHL */ IROp op, op_rev, op_shrn, op_shln, cmp_neq, cmp_gt, op_add; IRTemp tmp, shval, mask, old_shval, imm_val, round; UInt i; ULong esize, imm; cmp_neq = Q ? Iop_CmpNEZ8x16 : Iop_CmpNEZ8x8; cmp_gt = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8; imm = 1L; switch (size) { case 0: imm = (imm << 8) | imm; /* fall through */ case 1: imm = (imm << 16) | imm; /* fall through */ case 2: imm = (imm << 32) | imm; /* fall through */ case 3: break; default: vassert(0); } imm_val = newTemp(Q ? Ity_V128 : Ity_I64); round = newTemp(Q ? Ity_V128 : Ity_I64); assign(imm_val, Q ? mkU128(imm) : mkU64(imm)); if (U) { switch (size) { case 0: op = Q ? Iop_QShl8x16 : Iop_QShl8x8; op_add = Q ? Iop_Add8x16 : Iop_Add8x8; op_rev = Q ? Iop_Shr8x16 : Iop_Shr8x8; op_shrn = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; op_shln = Q ? Iop_ShlN8x16 : Iop_ShlN8x8; break; case 1: op = Q ? Iop_QShl16x8 : Iop_QShl16x4; op_add = Q ? Iop_Add16x8 : Iop_Add16x4; op_rev = Q ? Iop_Shr16x8 : Iop_Shr16x4; op_shrn = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; op_shln = Q ? Iop_ShlN16x8 : Iop_ShlN16x4; break; case 2: op = Q ? Iop_QShl32x4 : Iop_QShl32x2; op_add = Q ? Iop_Add32x4 : Iop_Add32x2; op_rev = Q ? Iop_Shr32x4 : Iop_Shr32x2; op_shrn = Q ? Iop_ShrN32x4 : Iop_ShrN32x2; op_shln = Q ? Iop_ShlN32x4 : Iop_ShlN32x2; break; case 3: op = Q ? Iop_QShl64x2 : Iop_QShl64x1; op_add = Q ? Iop_Add64x2 : Iop_Add64; op_rev = Q ? Iop_Shr64x2 : Iop_Shr64; op_shrn = Q ? Iop_ShrN64x2 : Iop_Shr64; op_shln = Q ? Iop_ShlN64x2 : Iop_Shl64; break; default: vassert(0); } } else { switch (size) { case 0: op = Q ? Iop_QSal8x16 : Iop_QSal8x8; op_add = Q ? Iop_Add8x16 : Iop_Add8x8; op_rev = Q ? Iop_Sar8x16 : Iop_Sar8x8; op_shrn = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; op_shln = Q ? Iop_ShlN8x16 : Iop_ShlN8x8; break; case 1: op = Q ? Iop_QSal16x8 : Iop_QSal16x4; op_add = Q ? Iop_Add16x8 : Iop_Add16x4; op_rev = Q ? Iop_Sar16x8 : Iop_Sar16x4; op_shrn = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; op_shln = Q ? 
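/* The 'round' term (used by VRSHL above and VQRSHL here) sketched per
   element, as an informal reading of the code rather than an
   independent spec: for a negative shift amount n = -k, i.e. a right
   shift by k bits,

      round = (m >> (k-1)) & 1      // the last bit shifted out

   and adding it to the truncated result is the same as computing
   (m + (1 << (k-1))) >> k, i.e. rounding to nearest.  The final mask
   against (0 > n) makes the term zero for left shifts. */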
                            Iop_ShlN16x8 : Iop_ShlN16x4;
                  break;
               case 2:
                  op = Q ? Iop_QSal32x4 : Iop_QSal32x2;
                  op_add = Q ? Iop_Add32x4 : Iop_Add32x2;
                  op_rev = Q ? Iop_Sar32x4 : Iop_Sar32x2;
                  op_shrn = Q ? Iop_ShrN32x4 : Iop_ShrN32x2;
                  op_shln = Q ? Iop_ShlN32x4 : Iop_ShlN32x2;
                  break;
               case 3:
                  op = Q ? Iop_QSal64x2 : Iop_QSal64x1;
                  op_add = Q ? Iop_Add64x2 : Iop_Add64;
                  op_rev = Q ? Iop_Sar64x2 : Iop_Sar64;
                  op_shrn = Q ? Iop_ShrN64x2 : Iop_Shr64;
                  op_shln = Q ? Iop_ShlN64x2 : Iop_Shl64;
                  break;
               default:
                  vassert(0);
            }
         }
         if (Q) {
            tmp = newTemp(Ity_V128);
            shval = newTemp(Ity_V128);
            mask = newTemp(Ity_V128);
         } else {
            tmp = newTemp(Ity_I64);
            shval = newTemp(Ity_I64);
            mask = newTemp(Ity_I64);
         }
         /* Only least significant byte from second argument is used.
            Copy this byte to the whole vector element. */
         assign(shval, binop(op_shrn,
                             binop(op_shln,
                                   mkexpr(arg_n),
                                   mkU8((8 << size) - 8)),
                             mkU8((8 << size) - 8)));
         for (i = 0; i < size; i++) {
            old_shval = shval;
            shval = newTemp(Q ? Ity_V128 : Ity_I64);
            assign(shval, binop(Q ? Iop_OrV128 : Iop_Or64,
                                mkexpr(old_shval),
                                binop(op_shln,
                                      mkexpr(old_shval),
                                      mkU8(8 << i))));
         }
         /* Compute the result */
         assign(round, binop(Q ? Iop_AndV128 : Iop_And64,
                             binop(op,
                                   mkexpr(arg_m),
                                   binop(op_add,
                                         mkexpr(arg_n),
                                         mkexpr(imm_val))),
                             binop(Q ? Iop_AndV128 : Iop_And64,
                                   mkexpr(imm_val),
                                   binop(cmp_gt,
                                         Q ? mkU128(0) : mkU64(0),
                                         mkexpr(arg_n)))));
         assign(res, binop(op_add,
                           binop(op, mkexpr(arg_m), mkexpr(arg_n)),
                           mkexpr(round)));
         /* If the shift is greater than or equal to the element size
            and the element is non-zero, then the QC flag should be
            set. */
         esize = (8 << size) - 1;
         esize = (esize << 8) | esize;
         esize = (esize << 16) | esize;
         esize = (esize << 32) | esize;
         setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64,
                          binop(cmp_gt, mkexpr(shval),
                                Q ? mkU128(esize) : mkU64(esize)),
                          unop(cmp_neq, mkexpr(arg_m))),
                    Q ? mkU128(0) : mkU64(0),
                    Q, condT);
         /* Otherwise the QC flag should be set if the shift value is
            positive and the result, right-shifted by the same value,
            is not equal to the left argument. */
         assign(mask, binop(cmp_gt, mkexpr(shval),
                            Q ? mkU128(0) : mkU64(0)));
         if (!Q && size == 3)
            assign(tmp, binop(op_rev, mkexpr(res),
                              unop(Iop_64to8, mkexpr(arg_n))));
         else
            assign(tmp, binop(op_rev, mkexpr(res), mkexpr(arg_n)));
         setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64,
                          mkexpr(tmp), mkexpr(mask)),
                    binop(Q ? Iop_AndV128 : Iop_And64,
                          mkexpr(arg_m), mkexpr(mask)),
                    Q, condT);
         DIP("vqrshl.%c%d %c%u, %c%u, %c%u\n",
             U ? 'u' : 's', 8 << size,
             Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg,
             Q ? 'q' : 'd', nreg);
      }
      break;
   case 6:
      /* VMAX, VMIN */
      if (B == 0) {
         /* VMAX */
         IROp op;
         if (U == 0) {
            switch (size) {
               case 0: op = Q ? Iop_Max8Sx16 : Iop_Max8Sx8; break;
               case 1: op = Q ? Iop_Max16Sx8 : Iop_Max16Sx4; break;
               case 2: op = Q ? Iop_Max32Sx4 : Iop_Max32Sx2; break;
               case 3: return False;
               default: vassert(0);
            }
         } else {
            switch (size) {
               case 0: op = Q ? Iop_Max8Ux16 : Iop_Max8Ux8; break;
               case 1: op = Q ? Iop_Max16Ux8 : Iop_Max16Ux4; break;
               case 2: op = Q ? Iop_Max32Ux4 : Iop_Max32Ux2; break;
               case 3: return False;
               default: vassert(0);
            }
         }
         assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m)));
         DIP("vmax.%c%d %c%u, %c%u, %c%u\n",
             U ? 'u' : 's', 8 << size,
             Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg,
             Q ? 'q' : 'd', mreg);
      } else {
         /* VMIN */
         IROp op;
         if (U == 0) {
            switch (size) {
               case 0: op = Q ? Iop_Min8Sx16 : Iop_Min8Sx8; break;
               case 1: op = Q ? Iop_Min16Sx8 : Iop_Min16Sx4; break;
               case 2: op = Q ? Iop_Min32Sx4 : Iop_Min32Sx2; break;
               case 3: return False;
               default: vassert(0);
            }
         } else {
            switch (size) {
               case 0: op = Q ? Iop_Min8Ux16 : Iop_Min8Ux8; break;
               case 1: op = Q ?
Iop_Min16Ux8 : Iop_Min16Ux4; break; case 2: op = Q ? Iop_Min32Ux4 : Iop_Min32Ux2; break; case 3: return False; default: vassert(0); } } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vmin.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } break; case 7: if (B == 0) { /* VABD */ IROp op_cmp, op_sub; IRTemp cond; if ((theInstr >> 23) & 1) { vpanic("VABDL should not be in dis_neon_data_3same\n"); } if (Q) { switch (size) { case 0: op_cmp = U ? Iop_CmpGT8Ux16 : Iop_CmpGT8Sx16; op_sub = Iop_Sub8x16; break; case 1: op_cmp = U ? Iop_CmpGT16Ux8 : Iop_CmpGT16Sx8; op_sub = Iop_Sub16x8; break; case 2: op_cmp = U ? Iop_CmpGT32Ux4 : Iop_CmpGT32Sx4; op_sub = Iop_Sub32x4; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op_cmp = U ? Iop_CmpGT8Ux8 : Iop_CmpGT8Sx8; op_sub = Iop_Sub8x8; break; case 1: op_cmp = U ? Iop_CmpGT16Ux4 : Iop_CmpGT16Sx4; op_sub = Iop_Sub16x4; break; case 2: op_cmp = U ? Iop_CmpGT32Ux2 : Iop_CmpGT32Sx2; op_sub = Iop_Sub32x2; break; case 3: return False; default: vassert(0); } } if (Q) { cond = newTemp(Ity_V128); } else { cond = newTemp(Ity_I64); } assign(cond, binop(op_cmp, mkexpr(arg_n), mkexpr(arg_m))); assign(res, binop(Q ? Iop_OrV128 : Iop_Or64, binop(Q ? Iop_AndV128 : Iop_And64, binop(op_sub, mkexpr(arg_n), mkexpr(arg_m)), mkexpr(cond)), binop(Q ? Iop_AndV128 : Iop_And64, binop(op_sub, mkexpr(arg_m), mkexpr(arg_n)), unop(Q ? Iop_NotV128 : Iop_Not64, mkexpr(cond))))); DIP("vabd.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VABA */ IROp op_cmp, op_sub, op_add; IRTemp cond, acc, tmp; if ((theInstr >> 23) & 1) { vpanic("VABAL should not be in dis_neon_data_3same"); } if (Q) { switch (size) { case 0: op_cmp = U ? Iop_CmpGT8Ux16 : Iop_CmpGT8Sx16; op_sub = Iop_Sub8x16; op_add = Iop_Add8x16; break; case 1: op_cmp = U ? Iop_CmpGT16Ux8 : Iop_CmpGT16Sx8; op_sub = Iop_Sub16x8; op_add = Iop_Add16x8; break; case 2: op_cmp = U ? Iop_CmpGT32Ux4 : Iop_CmpGT32Sx4; op_sub = Iop_Sub32x4; op_add = Iop_Add32x4; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op_cmp = U ? Iop_CmpGT8Ux8 : Iop_CmpGT8Sx8; op_sub = Iop_Sub8x8; op_add = Iop_Add8x8; break; case 1: op_cmp = U ? Iop_CmpGT16Ux4 : Iop_CmpGT16Sx4; op_sub = Iop_Sub16x4; op_add = Iop_Add16x4; break; case 2: op_cmp = U ? Iop_CmpGT32Ux2 : Iop_CmpGT32Sx2; op_sub = Iop_Sub32x2; op_add = Iop_Add32x2; break; case 3: return False; default: vassert(0); } } if (Q) { cond = newTemp(Ity_V128); acc = newTemp(Ity_V128); tmp = newTemp(Ity_V128); assign(acc, getQReg(dreg)); } else { cond = newTemp(Ity_I64); acc = newTemp(Ity_I64); tmp = newTemp(Ity_I64); assign(acc, getDRegI64(dreg)); } assign(cond, binop(op_cmp, mkexpr(arg_n), mkexpr(arg_m))); assign(tmp, binop(Q ? Iop_OrV128 : Iop_Or64, binop(Q ? Iop_AndV128 : Iop_And64, binop(op_sub, mkexpr(arg_n), mkexpr(arg_m)), mkexpr(cond)), binop(Q ? Iop_AndV128 : Iop_And64, binop(op_sub, mkexpr(arg_m), mkexpr(arg_n)), unop(Q ? Iop_NotV128 : Iop_Not64, mkexpr(cond))))); assign(res, binop(op_add, mkexpr(acc), mkexpr(tmp))); DIP("vaba.%c%d %c%u, %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } break; case 8: if (B == 0) { IROp op; if (U == 0) { /* VADD */ switch (size) { case 0: op = Q ? Iop_Add8x16 : Iop_Add8x8; break; case 1: op = Q ? Iop_Add16x8 : Iop_Add16x4; break; case 2: op = Q ? 
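/* The VABD/VABA cases above build |n - m| branch-free (informal
   summary):

      cond = n > m  (lane-wise, signed or unsigned per U)
      res  = ((n - m) & cond) | ((m - n) & ~cond)

   i.e. the comparison mask selects whichever subtraction is
   non-negative; VABA then adds the old destination to that. */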
Iop_Add32x4 : Iop_Add32x2; break; case 3: op = Q ? Iop_Add64x2 : Iop_Add64; break; default: vassert(0); } DIP("vadd.i%d %c%u, %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VSUB */ switch (size) { case 0: op = Q ? Iop_Sub8x16 : Iop_Sub8x8; break; case 1: op = Q ? Iop_Sub16x8 : Iop_Sub16x4; break; case 2: op = Q ? Iop_Sub32x4 : Iop_Sub32x2; break; case 3: op = Q ? Iop_Sub64x2 : Iop_Sub64; break; default: vassert(0); } DIP("vsub.i%d %c%u, %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); } else { IROp op; switch (size) { case 0: op = Q ? Iop_CmpNEZ8x16 : Iop_CmpNEZ8x8; break; case 1: op = Q ? Iop_CmpNEZ16x8 : Iop_CmpNEZ16x4; break; case 2: op = Q ? Iop_CmpNEZ32x4 : Iop_CmpNEZ32x2; break; case 3: op = Q ? Iop_CmpNEZ64x2 : Iop_CmpwNEZ64; break; default: vassert(0); } if (U == 0) { /* VTST */ assign(res, unop(op, binop(Q ? Iop_AndV128 : Iop_And64, mkexpr(arg_n), mkexpr(arg_m)))); DIP("vtst.%d %c%u, %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VCEQ */ assign(res, unop(Q ? Iop_NotV128 : Iop_Not64, unop(op, binop(Q ? Iop_XorV128 : Iop_Xor64, mkexpr(arg_n), mkexpr(arg_m))))); DIP("vceq.i%d %c%u, %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } } break; case 9: if (B == 0) { /* VMLA, VMLS (integer) */ IROp op, op2; UInt P = (theInstr >> 24) & 1; if (P) { switch (size) { case 0: op = Q ? Iop_Mul8x16 : Iop_Mul8x8; op2 = Q ? Iop_Sub8x16 : Iop_Sub8x8; break; case 1: op = Q ? Iop_Mul16x8 : Iop_Mul16x4; op2 = Q ? Iop_Sub16x8 : Iop_Sub16x4; break; case 2: op = Q ? Iop_Mul32x4 : Iop_Mul32x2; op2 = Q ? Iop_Sub32x4 : Iop_Sub32x2; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op = Q ? Iop_Mul8x16 : Iop_Mul8x8; op2 = Q ? Iop_Add8x16 : Iop_Add8x8; break; case 1: op = Q ? Iop_Mul16x8 : Iop_Mul16x4; op2 = Q ? Iop_Add16x8 : Iop_Add16x4; break; case 2: op = Q ? Iop_Mul32x4 : Iop_Mul32x2; op2 = Q ? Iop_Add32x4 : Iop_Add32x2; break; case 3: return False; default: vassert(0); } } assign(res, binop(op2, Q ? getQReg(dreg) : getDRegI64(dreg), binop(op, mkexpr(arg_n), mkexpr(arg_m)))); DIP("vml%c.i%d %c%u, %c%u, %c%u\n", P ? 's' : 'a', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VMUL */ IROp op; UInt P = (theInstr >> 24) & 1; if (P) { switch (size) { case 0: op = Q ? Iop_PolynomialMul8x16 : Iop_PolynomialMul8x8; break; case 1: case 2: case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op = Q ? Iop_Mul8x16 : Iop_Mul8x8; break; case 1: op = Q ? Iop_Mul16x8 : Iop_Mul16x4; break; case 2: op = Q ? Iop_Mul32x4 : Iop_Mul32x2; break; case 3: return False; default: vassert(0); } } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vmul.%c%d %c%u, %c%u, %c%u\n", P ? 'p' : 'i', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } break; case 10: { /* VPMAX, VPMIN */ UInt P = (theInstr >> 4) & 1; IROp op; if (Q) return False; if (P) { switch (size) { case 0: op = U ? Iop_PwMin8Ux8 : Iop_PwMin8Sx8; break; case 1: op = U ? Iop_PwMin16Ux4 : Iop_PwMin16Sx4; break; case 2: op = U ? Iop_PwMin32Ux2 : Iop_PwMin32Sx2; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op = U ? Iop_PwMax8Ux8 : Iop_PwMax8Sx8; break; case 1: op = U ? Iop_PwMax16Ux4 : Iop_PwMax16Sx4; break; case 2: op = U ? 
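/* Pairwise semantics (informal): VPMAX/VPMIN combine adjacent element
   pairs of the concatenation n:m, so for d-register 8-bit lanes

      res[0..3] = op(n[0],n[1]), op(n[2],n[3]), op(n[4],n[5]), op(n[6],n[7])
      res[4..7] = op(m[0],m[1]), ..., op(m[6],m[7])

   which is also why the Q form is rejected here. */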
Iop_PwMax32Ux2 : Iop_PwMax32Sx2; break; case 3: return False; default: vassert(0); } } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vp%s.%c%d %c%u, %c%u, %c%u\n", P ? "min" : "max", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); break; } case 11: if (B == 0) { if (U == 0) { /* VQDMULH */ IROp op ,op2; ULong imm; switch (size) { case 0: case 3: return False; case 1: op = Q ? Iop_QDMulHi16Sx8 : Iop_QDMulHi16Sx4; op2 = Q ? Iop_CmpEQ16x8 : Iop_CmpEQ16x4; imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 2: op = Q ? Iop_QDMulHi32Sx4 : Iop_QDMulHi32Sx2; op2 = Q ? Iop_CmpEQ32x4 : Iop_CmpEQ32x2; imm = 1LL << 31; imm = (imm << 32) | imm; break; default: vassert(0); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64, binop(op2, mkexpr(arg_n), Q ? mkU128(imm) : mkU64(imm)), binop(op2, mkexpr(arg_m), Q ? mkU128(imm) : mkU64(imm))), Q ? mkU128(0) : mkU64(0), Q, condT); DIP("vqdmulh.s%d %c%u, %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VQRDMULH */ IROp op ,op2; ULong imm; switch(size) { case 0: case 3: return False; case 1: imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; op = Q ? Iop_QRDMulHi16Sx8 : Iop_QRDMulHi16Sx4; op2 = Q ? Iop_CmpEQ16x8 : Iop_CmpEQ16x4; break; case 2: imm = 1LL << 31; imm = (imm << 32) | imm; op = Q ? Iop_QRDMulHi32Sx4 : Iop_QRDMulHi32Sx2; op2 = Q ? Iop_CmpEQ32x4 : Iop_CmpEQ32x2; break; default: vassert(0); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64, binop(op2, mkexpr(arg_n), Q ? mkU128(imm) : mkU64(imm)), binop(op2, mkexpr(arg_m), Q ? mkU128(imm) : mkU64(imm))), Q ? mkU128(0) : mkU64(0), Q, condT); DIP("vqrdmulh.s%d %c%u, %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } } else { if (U == 0) { /* VPADD */ IROp op; if (Q) return False; switch (size) { case 0: op = Q ? Iop_PwAdd8x16 : Iop_PwAdd8x8; break; case 1: op = Q ? Iop_PwAdd16x8 : Iop_PwAdd16x4; break; case 2: op = Q ? Iop_PwAdd32x4 : Iop_PwAdd32x2; break; case 3: return False; default: vassert(0); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vpadd.i%d %c%u, %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { return False; } } break; case 12: { return False; } /* Starting from here these are FP SIMD cases */ case 13: if (B == 0) { IROp op; if (U == 0) { if ((C >> 1) == 0) { /* VADD */ op = Q ? Iop_Add32Fx4 : Iop_Add32Fx2 ; DIP("vadd.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VSUB */ op = Q ? Iop_Sub32Fx4 : Iop_Sub32Fx2 ; DIP("vsub.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } } else { if ((C >> 1) == 0) { /* VPADD */ if (Q) return False; op = Iop_PwAdd32Fx2; DIP("vpadd.f32 d%u, d%u, d%u\n", dreg, nreg, mreg); } else { /* VABD */ if (Q) { assign(res, unop(Iop_Abs32Fx4, triop(Iop_Sub32Fx4, get_FAKE_roundingmode(), mkexpr(arg_n), mkexpr(arg_m)))); } else { assign(res, unop(Iop_Abs32Fx2, binop(Iop_Sub32Fx2, mkexpr(arg_n), mkexpr(arg_m)))); } DIP("vabd.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); break; } } assign(res, binop_w_fake_RM(op, mkexpr(arg_n), mkexpr(arg_m))); } else { if (U == 0) { /* VMLA, VMLS */ IROp op, op2; UInt P = (theInstr >> 21) & 1; if (P) { switch (size & 1) { case 0: op = Q ? 
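/* VQDMULH/VQRDMULH above, sketched per element (informal): the result
   is the high half of the doubled product, saturated --

      res = sat( (2 * a * b) >> esize )

   with VQRDMULH also adding 1 << (esize-1) before the shift, to
   round.  The only overflowing inputs are a == b == -2^(esize-1)
   (e.g. 0x8000 for 16-bit lanes), which is exactly the condition the
   two CmpEQ-against-minimum masks feed into setFlag_QC. */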
Iop_Mul32Fx4 : Iop_Mul32Fx2; op2 = Q ? Iop_Sub32Fx4 : Iop_Sub32Fx2; break; case 1: return False; default: vassert(0); } } else { switch (size & 1) { case 0: op = Q ? Iop_Mul32Fx4 : Iop_Mul32Fx2; op2 = Q ? Iop_Add32Fx4 : Iop_Add32Fx2; break; case 1: return False; default: vassert(0); } } assign(res, binop_w_fake_RM( op2, Q ? getQReg(dreg) : getDRegI64(dreg), binop_w_fake_RM(op, mkexpr(arg_n), mkexpr(arg_m)))); DIP("vml%c.f32 %c%u, %c%u, %c%u\n", P ? 's' : 'a', Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VMUL */ IROp op; if ((C >> 1) != 0) return False; op = Q ? Iop_Mul32Fx4 : Iop_Mul32Fx2 ; assign(res, binop_w_fake_RM(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vmul.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } } break; case 14: if (B == 0) { if (U == 0) { if ((C >> 1) == 0) { /* VCEQ */ IROp op; if ((theInstr >> 20) & 1) return False; op = Q ? Iop_CmpEQ32Fx4 : Iop_CmpEQ32Fx2; assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vceq.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { return False; } } else { if ((C >> 1) == 0) { /* VCGE */ IROp op; if ((theInstr >> 20) & 1) return False; op = Q ? Iop_CmpGE32Fx4 : Iop_CmpGE32Fx2; assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vcge.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VCGT */ IROp op; if ((theInstr >> 20) & 1) return False; op = Q ? Iop_CmpGT32Fx4 : Iop_CmpGT32Fx2; assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); DIP("vcgt.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } } } else { if (U == 1) { /* VACGE, VACGT */ UInt op_bit = (theInstr >> 21) & 1; IROp op, op2; op2 = Q ? Iop_Abs32Fx4 : Iop_Abs32Fx2; if (op_bit) { op = Q ? Iop_CmpGT32Fx4 : Iop_CmpGT32Fx2; assign(res, binop(op, unop(op2, mkexpr(arg_n)), unop(op2, mkexpr(arg_m)))); } else { op = Q ? Iop_CmpGE32Fx4 : Iop_CmpGE32Fx2; assign(res, binop(op, unop(op2, mkexpr(arg_n)), unop(op2, mkexpr(arg_m)))); } DIP("vacg%c.f32 %c%u, %c%u, %c%u\n", op_bit ? 't' : 'e', Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { return False; } } break; case 15: if (B == 0) { if (U == 0) { /* VMAX, VMIN */ IROp op; if ((theInstr >> 20) & 1) return False; if ((theInstr >> 21) & 1) { op = Q ? Iop_Min32Fx4 : Iop_Min32Fx2; DIP("vmin.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { op = Q ? Iop_Max32Fx4 : Iop_Max32Fx2; DIP("vmax.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); } else { /* VPMAX, VPMIN */ IROp op; if (Q) return False; if ((theInstr >> 20) & 1) return False; if ((theInstr >> 21) & 1) { op = Iop_PwMin32Fx2; DIP("vpmin.f32 d%u, d%u, d%u\n", dreg, nreg, mreg); } else { op = Iop_PwMax32Fx2; DIP("vpmax.f32 d%u, d%u, d%u\n", dreg, nreg, mreg); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); } } else { if (U == 0) { if ((C >> 1) == 0) { /* VRECPS */ if ((theInstr >> 20) & 1) return False; assign(res, binop(Q ? Iop_RecipStep32Fx4 : Iop_RecipStep32Fx2, mkexpr(arg_n), mkexpr(arg_m))); DIP("vrecps.f32 %c%u, %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } else { /* VRSQRTS */ if ((theInstr >> 20) & 1) return False; assign(res, binop(Q ? Iop_RSqrtStep32Fx4 : Iop_RSqrtStep32Fx2, mkexpr(arg_n), mkexpr(arg_m))); DIP("vrsqrts.f32 %c%u, %c%u, %c%u\n", Q ? 
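/* VRECPS and VRSQRTS are the Newton-Raphson step functions: per the ARM ARM
   they return 2.0 - n*m and (3.0 - n*m)/2.0 respectively, which VEX models
   directly with Iop_RecipStep32Fx* and Iop_RSqrtStep32Fx*. */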
'q' : 'd', dreg, Q ? 'q' : 'd', nreg, Q ? 'q' : 'd', mreg); } } else { return False; } } break; default: /*NOTREACHED*/ vassert(0); } if (Q) { putQReg(dreg, mkexpr(res), condT); } else { putDRegI64(dreg, mkexpr(res), condT); } return True; } /* A7.4.2 Three registers of different length */ static Bool dis_neon_data_3diff ( UInt theInstr, IRTemp condT ) { /* In paths where this returns False, indicating a non-decodable instruction, there may still be some IR assignments to temporaries generated. This is inconvenient but harmless, and the post-front-end IR optimisation pass will just remove them anyway. So there's no effort made here to tidy it up. */ UInt A = (theInstr >> 8) & 0xf; UInt B = (theInstr >> 20) & 3; UInt U = (theInstr >> 24) & 1; UInt P = (theInstr >> 9) & 1; UInt mreg = get_neon_m_regno(theInstr); UInt nreg = get_neon_n_regno(theInstr); UInt dreg = get_neon_d_regno(theInstr); UInt size = B; ULong imm; IRTemp res, arg_m, arg_n, cond, tmp; IROp cvt, cvt2, cmp, op, op2, sh, add; switch (A) { case 0: case 1: case 2: case 3: /* VADDL, VADDW, VSUBL, VSUBW */ if (dreg & 1) return False; dreg >>= 1; size = B; switch (size) { case 0: cvt = U ? Iop_Widen8Uto16x8 : Iop_Widen8Sto16x8; op = (A & 2) ? Iop_Sub16x8 : Iop_Add16x8; break; case 1: cvt = U ? Iop_Widen16Uto32x4 : Iop_Widen16Sto32x4; op = (A & 2) ? Iop_Sub32x4 : Iop_Add32x4; break; case 2: cvt = U ? Iop_Widen32Uto64x2 : Iop_Widen32Sto64x2; op = (A & 2) ? Iop_Sub64x2 : Iop_Add64x2; break; case 3: return False; default: vassert(0); } arg_n = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); if (A & 1) { if (nreg & 1) return False; nreg >>= 1; assign(arg_n, getQReg(nreg)); } else { assign(arg_n, unop(cvt, getDRegI64(nreg))); } assign(arg_m, unop(cvt, getDRegI64(mreg))); putQReg(dreg, binop(op, mkexpr(arg_n), mkexpr(arg_m)), condT); DIP("v%s%c.%c%d q%u, %c%u, d%u\n", (A & 2) ? "sub" : "add", (A & 1) ? 'w' : 'l', U ? 'u' : 's', 8 << size, dreg, (A & 1) ? 'q' : 'd', nreg, mreg); return True; case 4: /* VADDHN, VRADDHN */ if (mreg & 1) return False; mreg >>= 1; if (nreg & 1) return False; nreg >>= 1; size = B; switch (size) { case 0: op = Iop_Add16x8; cvt = Iop_NarrowUn16to8x8; sh = Iop_ShrN16x8; imm = 1U << 7; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 1: op = Iop_Add32x4; cvt = Iop_NarrowUn32to16x4; sh = Iop_ShrN32x4; imm = 1U << 15; imm = (imm << 32) | imm; break; case 2: op = Iop_Add64x2; cvt = Iop_NarrowUn64to32x2; sh = Iop_ShrN64x2; imm = 1U << 31; break; case 3: return False; default: vassert(0); } tmp = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(tmp, binop(op, getQReg(nreg), getQReg(mreg))); if (U) { /* VRADDHN */ assign(res, binop(op, mkexpr(tmp), binop(Iop_64HLtoV128, mkU64(imm), mkU64(imm)))); } else { assign(res, mkexpr(tmp)); } putDRegI64(dreg, unop(cvt, binop(sh, mkexpr(res), mkU8(8 << size))), condT); DIP("v%saddhn.i%d d%u, q%u, q%u\n", U ? "r" : "", 16 << size, dreg, nreg, mreg); return True; case 5: /* VABAL */ if (!((theInstr >> 23) & 1)) { vpanic("VABA should not be in dis_neon_data_3diff\n"); } if (dreg & 1) return False; dreg >>= 1; switch (size) { case 0: cmp = U ? Iop_CmpGT8Ux8 : Iop_CmpGT8Sx8; cvt = U ? Iop_Widen8Uto16x8 : Iop_Widen8Sto16x8; cvt2 = Iop_Widen8Sto16x8; op = Iop_Sub16x8; op2 = Iop_Add16x8; break; case 1: cmp = U ? Iop_CmpGT16Ux4 : Iop_CmpGT16Sx4; cvt = U ? Iop_Widen16Uto32x4 : Iop_Widen16Sto32x4; cvt2 = Iop_Widen16Sto32x4; op = Iop_Sub32x4; op2 = Iop_Add32x4; break; case 2: cmp = U ? Iop_CmpGT32Ux2 : Iop_CmpGT32Sx2; cvt = U ? 
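/* For V{R}ADDHN above: the double-width sum is shifted right by half the
   wide lane width and then narrowed.  The rounding form first adds
   1 << (shift-1) to every lane via the replicated constant, e.g.
   0x0080008000800080 in each 64-bit half for the .i16 case (size 0). */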
Iop_Widen32Uto64x2 : Iop_Widen32Sto64x2; cvt2 = Iop_Widen32Sto64x2; op = Iop_Sub64x2; op2 = Iop_Add64x2; break; case 3: return False; default: vassert(0); } arg_n = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); cond = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(arg_n, unop(cvt, getDRegI64(nreg))); assign(arg_m, unop(cvt, getDRegI64(mreg))); assign(cond, unop(cvt2, binop(cmp, getDRegI64(nreg), getDRegI64(mreg)))); assign(res, binop(op2, binop(Iop_OrV128, binop(Iop_AndV128, binop(op, mkexpr(arg_n), mkexpr(arg_m)), mkexpr(cond)), binop(Iop_AndV128, binop(op, mkexpr(arg_m), mkexpr(arg_n)), unop(Iop_NotV128, mkexpr(cond)))), getQReg(dreg))); putQReg(dreg, mkexpr(res), condT); DIP("vabal.%c%d q%u, d%u, d%u\n", U ? 'u' : 's', 8 << size, dreg, nreg, mreg); return True;
case 6: /* VSUBHN, VRSUBHN */ if (mreg & 1) return False; mreg >>= 1; if (nreg & 1) return False; nreg >>= 1; size = B; switch (size) { case 0: op = Iop_Sub16x8; op2 = Iop_Add16x8; cvt = Iop_NarrowUn16to8x8; sh = Iop_ShrN16x8; imm = 1U << 7; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 1: op = Iop_Sub32x4; op2 = Iop_Add32x4; cvt = Iop_NarrowUn32to16x4; sh = Iop_ShrN32x4; imm = 1U << 15; imm = (imm << 32) | imm; break; case 2: op = Iop_Sub64x2; op2 = Iop_Add64x2; cvt = Iop_NarrowUn64to32x2; sh = Iop_ShrN64x2; imm = 1U << 31; break; case 3: return False; default: vassert(0); } tmp = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(tmp, binop(op, getQReg(nreg), getQReg(mreg))); if (U) { /* VRSUBHN */ assign(res, binop(op2, mkexpr(tmp), binop(Iop_64HLtoV128, mkU64(imm), mkU64(imm)))); } else { assign(res, mkexpr(tmp)); } putDRegI64(dreg, unop(cvt, binop(sh, mkexpr(res), mkU8(8 << size))), condT); DIP("v%ssubhn.i%d d%u, q%u, q%u\n", U ? "r" : "", 16 << size, dreg, nreg, mreg); return True;
case 7: /* VABDL */ if (!((theInstr >> 23) & 1)) { vpanic("VABD should not be in dis_neon_data_3diff\n"); } if (dreg & 1) return False; dreg >>= 1; switch (size) { case 0: cmp = U ? Iop_CmpGT8Ux8 : Iop_CmpGT8Sx8; cvt = U ? Iop_Widen8Uto16x8 : Iop_Widen8Sto16x8; cvt2 = Iop_Widen8Sto16x8; op = Iop_Sub16x8; break; case 1: cmp = U ? Iop_CmpGT16Ux4 : Iop_CmpGT16Sx4; cvt = U ? Iop_Widen16Uto32x4 : Iop_Widen16Sto32x4; cvt2 = Iop_Widen16Sto32x4; op = Iop_Sub32x4; break; case 2: cmp = U ? Iop_CmpGT32Ux2 : Iop_CmpGT32Sx2; cvt = U ? Iop_Widen32Uto64x2 : Iop_Widen32Sto64x2; cvt2 = Iop_Widen32Sto64x2; op = Iop_Sub64x2; break; case 3: return False; default: vassert(0); } arg_n = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); cond = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(arg_n, unop(cvt, getDRegI64(nreg))); assign(arg_m, unop(cvt, getDRegI64(mreg))); assign(cond, unop(cvt2, binop(cmp, getDRegI64(nreg), getDRegI64(mreg)))); assign(res, binop(Iop_OrV128, binop(Iop_AndV128, binop(op, mkexpr(arg_n), mkexpr(arg_m)), mkexpr(cond)), binop(Iop_AndV128, binop(op, mkexpr(arg_m), mkexpr(arg_n)), unop(Iop_NotV128, mkexpr(cond))))); putQReg(dreg, mkexpr(res), condT); DIP("vabdl.%c%d q%u, d%u, d%u\n", U ? 'u' : 's', 8 << size, dreg, nreg, mreg); return True;
case 8: case 10: /* VMLAL, VMLSL (integer) */ if (dreg & 1) return False; dreg >>= 1; size = B; switch (size) { case 0: op = U ? Iop_Mull8Ux8 : Iop_Mull8Sx8; op2 = P ? Iop_Sub16x8 : Iop_Add16x8; break; case 1: op = U ? Iop_Mull16Ux4 : Iop_Mull16Sx4; op2 = P ? Iop_Sub32x4 : Iop_Add32x4; break; case 2: op = U ? Iop_Mull32Ux2 : Iop_Mull32Sx2; op2 = P ?
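/* VABAL above (and VABDL alongside) build a widened absolute difference
   without an abs primitive: cond is the sign-extended mask of (n > m),
   computed at the narrow width, and the widened arguments give
   |n - m| = ((n - m) & cond) | ((m - n) & ~cond).  VABAL then accumulates
   that into the destination. */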
Iop_Sub64x2 : Iop_Add64x2; break; case 3: return False; default: vassert(0); } res = newTemp(Ity_V128); assign(res, binop(op, getDRegI64(nreg),getDRegI64(mreg))); putQReg(dreg, binop(op2, getQReg(dreg), mkexpr(res)), condT); DIP("vml%cl.%c%d q%u, d%u, d%u\n", P ? 's' : 'a', U ? 'u' : 's', 8 << size, dreg, nreg, mreg); return True; case 9: case 11: /* VQDMLAL, VQDMLSL */ if (U) return False; if (dreg & 1) return False; dreg >>= 1; size = B; switch (size) { case 0: case 3: return False; case 1: op = Iop_QDMull16Sx4; cmp = Iop_CmpEQ16x4; add = P ? Iop_QSub32Sx4 : Iop_QAdd32Sx4; op2 = P ? Iop_Sub32x4 : Iop_Add32x4; imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 2: op = Iop_QDMull32Sx2; cmp = Iop_CmpEQ32x2; add = P ? Iop_QSub64Sx2 : Iop_QAdd64Sx2; op2 = P ? Iop_Sub64x2 : Iop_Add64x2; imm = 1LL << 31; imm = (imm << 32) | imm; break; default: vassert(0); } res = newTemp(Ity_V128); tmp = newTemp(Ity_V128); assign(res, binop(op, getDRegI64(nreg), getDRegI64(mreg))); assign(tmp, binop(op2, getQReg(dreg), mkexpr(res))); setFlag_QC(mkexpr(tmp), binop(add, getQReg(dreg), mkexpr(res)), True, condT); setFlag_QC(binop(Iop_And64, binop(cmp, getDRegI64(nreg), mkU64(imm)), binop(cmp, getDRegI64(mreg), mkU64(imm))), mkU64(0), False, condT); putQReg(dreg, binop(add, getQReg(dreg), mkexpr(res)), condT); DIP("vqdml%cl.s%d q%u, d%u, d%u\n", P ? 's' : 'a', 8 << size, dreg, nreg, mreg); return True; case 12: case 14: /* VMULL (integer or polynomial) */ if (dreg & 1) return False; dreg >>= 1; size = B; switch (size) { case 0: op = (U) ? Iop_Mull8Ux8 : Iop_Mull8Sx8; if (P) op = Iop_PolynomialMull8x8; break; case 1: if (P) return False; op = (U) ? Iop_Mull16Ux4 : Iop_Mull16Sx4; break; case 2: if (P) return False; op = (U) ? Iop_Mull32Ux2 : Iop_Mull32Sx2; break; case 3: return False; default: vassert(0); } putQReg(dreg, binop(op, getDRegI64(nreg), getDRegI64(mreg)), condT); DIP("vmull.%c%d q%u, d%u, d%u\n", P ? 'p' : (U ? 
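/* For VQDMLAL/VQDMLSL above, QC can come from either stage: the saturating
   accumulate, detected by comparing the wrapping add/sub (op2) with the
   saturating one (add), or the doubling multiply, detected as in VQDMULH by
   checking whether some lane of both sources equals the replicated
   0x8000../0x80000000.. constant. */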
'u' : 's'), 8 << size, dreg, nreg, mreg); return True; case 13: /* VQDMULL */ if (U) return False; if (dreg & 1) return False; dreg >>= 1; size = B; switch (size) { case 0: case 3: return False; case 1: op = Iop_QDMull16Sx4; op2 = Iop_CmpEQ16x4; imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 2: op = Iop_QDMull32Sx2; op2 = Iop_CmpEQ32x2; imm = 1LL << 31; imm = (imm << 32) | imm; break; default: vassert(0); } putQReg(dreg, binop(op, getDRegI64(nreg), getDRegI64(mreg)), condT); setFlag_QC(binop(Iop_And64, binop(op2, getDRegI64(nreg), mkU64(imm)), binop(op2, getDRegI64(mreg), mkU64(imm))), mkU64(0), False, condT); DIP("vqdmull.s%d q%u, d%u, d%u\n", 8 << size, dreg, nreg, mreg); return True; default: return False; } return False; } /* A7.4.3 Two registers and a scalar */ static Bool dis_neon_data_2reg_and_scalar ( UInt theInstr, IRTemp condT ) { # define INSN(_bMax,_bMin) SLICE_UInt(theInstr, (_bMax), (_bMin)) UInt U = INSN(24,24); UInt dreg = get_neon_d_regno(theInstr & ~(1 << 6)); UInt nreg = get_neon_n_regno(theInstr & ~(1 << 6)); UInt mreg = get_neon_m_regno(theInstr & ~(1 << 6)); UInt size = INSN(21,20); UInt index; UInt Q = INSN(24,24); if (INSN(27,25) != 1 || INSN(23,23) != 1 || INSN(6,6) != 1 || INSN(4,4) != 0) return False; /* VMLA, VMLS (scalar) */ if ((INSN(11,8) & BITS4(1,0,1,0)) == BITS4(0,0,0,0)) { IRTemp res, arg_m, arg_n; IROp dup, get, op, op2, add, sub; if (Q) { if ((dreg & 1) || (nreg & 1)) return False; dreg >>= 1; nreg >>= 1; res = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); arg_n = newTemp(Ity_V128); assign(arg_n, getQReg(nreg)); switch(size) { case 1: dup = Iop_Dup16x8; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x4; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } else { res = newTemp(Ity_I64); arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } if (INSN(8,8)) { switch (size) { case 2: op = Q ? Iop_Mul32Fx4 : Iop_Mul32Fx2; add = Q ? Iop_Add32Fx4 : Iop_Add32Fx2; sub = Q ? Iop_Sub32Fx4 : Iop_Sub32Fx2; break; case 0: case 1: case 3: return False; default: vassert(0); } } else { switch (size) { case 1: op = Q ? Iop_Mul16x8 : Iop_Mul16x4; add = Q ? Iop_Add16x8 : Iop_Add16x4; sub = Q ? Iop_Sub16x8 : Iop_Sub16x4; break; case 2: op = Q ? Iop_Mul32x4 : Iop_Mul32x2; add = Q ? Iop_Add32x4 : Iop_Add32x2; sub = Q ? Iop_Sub32x4 : Iop_Sub32x2; break; case 0: case 3: return False; default: vassert(0); } } op2 = INSN(10,10) ? sub : add; assign(res, binop_w_fake_RM(op, mkexpr(arg_n), mkexpr(arg_m))); if (Q) putQReg(dreg, binop_w_fake_RM(op2, getQReg(dreg), mkexpr(res)), condT); else putDRegI64(dreg, binop(op2, getDRegI64(dreg), mkexpr(res)), condT); DIP("vml%c.%c%d %c%u, %c%u, d%u[%u]\n", INSN(10,10) ? 's' : 'a', INSN(8,8) ? 'f' : 'i', 8 << size, Q ? 'q' : 'd', dreg, Q ? 
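/* In this by-scalar group the M:Vm field encodes both the register and the
   lane: for 16-bit scalars the lane index is the top two bits
   (index = mreg >> 3, mreg &= 7, so only d0-d7 are addressable), and for
   32-bit scalars the top bit (index = mreg >> 4, mreg &= 0xf). */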
'q' : 'd', nreg, mreg, index); return True; } /* VMLAL, VMLSL (scalar) */ if ((INSN(11,8) & BITS4(1,0,1,1)) == BITS4(0,0,1,0)) { IRTemp res, arg_m, arg_n; IROp dup, get, op, op2, add, sub; if (dreg & 1) return False; dreg >>= 1; res = newTemp(Ity_V128); arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); switch (size) { case 1: op = U ? Iop_Mull16Ux4 : Iop_Mull16Sx4; add = Iop_Add32x4; sub = Iop_Sub32x4; break; case 2: op = U ? Iop_Mull32Ux2 : Iop_Mull32Sx2; add = Iop_Add64x2; sub = Iop_Sub64x2; break; case 0: case 3: return False; default: vassert(0); } op2 = INSN(10,10) ? sub : add; assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); putQReg(dreg, binop(op2, getQReg(dreg), mkexpr(res)), condT); DIP("vml%cl.%c%d q%u, d%u, d%u[%u]\n", INSN(10,10) ? 's' : 'a', U ? 'u' : 's', 8 << size, dreg, nreg, mreg, index); return True; } /* VQDMLAL, VQDMLSL (scalar) */ if ((INSN(11,8) & BITS4(1,0,1,1)) == BITS4(0,0,1,1) && !U) { IRTemp res, arg_m, arg_n, tmp; IROp dup, get, op, op2, add, cmp; UInt P = INSN(10,10); ULong imm; if (dreg & 1) return False; dreg >>= 1; res = newTemp(Ity_V128); arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); switch (size) { case 0: case 3: return False; case 1: op = Iop_QDMull16Sx4; cmp = Iop_CmpEQ16x4; add = P ? Iop_QSub32Sx4 : Iop_QAdd32Sx4; op2 = P ? Iop_Sub32x4 : Iop_Add32x4; imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 2: op = Iop_QDMull32Sx2; cmp = Iop_CmpEQ32x2; add = P ? Iop_QSub64Sx2 : Iop_QAdd64Sx2; op2 = P ? Iop_Sub64x2 : Iop_Add64x2; imm = 1LL << 31; imm = (imm << 32) | imm; break; default: vassert(0); } res = newTemp(Ity_V128); tmp = newTemp(Ity_V128); assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); assign(tmp, binop(op2, getQReg(dreg), mkexpr(res))); setFlag_QC(binop(Iop_And64, binop(cmp, mkexpr(arg_n), mkU64(imm)), binop(cmp, mkexpr(arg_m), mkU64(imm))), mkU64(0), False, condT); setFlag_QC(mkexpr(tmp), binop(add, getQReg(dreg), mkexpr(res)), True, condT); putQReg(dreg, binop(add, getQReg(dreg), mkexpr(res)), condT); DIP("vqdml%cl.s%d q%u, d%u, d%u[%u]\n", P ? 
's' : 'a', 8 << size, dreg, nreg, mreg, index); return True; } /* VMUL (by scalar) */ if ((INSN(11,8) & BITS4(1,1,1,0)) == BITS4(1,0,0,0)) { IRTemp res, arg_m, arg_n; IROp dup, get, op; if (Q) { if ((dreg & 1) || (nreg & 1)) return False; dreg >>= 1; nreg >>= 1; res = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); arg_n = newTemp(Ity_V128); assign(arg_n, getQReg(nreg)); switch(size) { case 1: dup = Iop_Dup16x8; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x4; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } else { res = newTemp(Ity_I64); arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } if (INSN(8,8)) { switch (size) { case 2: op = Q ? Iop_Mul32Fx4 : Iop_Mul32Fx2; break; case 0: case 1: case 3: return False; default: vassert(0); } } else { switch (size) { case 1: op = Q ? Iop_Mul16x8 : Iop_Mul16x4; break; case 2: op = Q ? Iop_Mul32x4 : Iop_Mul32x2; break; case 0: case 3: return False; default: vassert(0); } } assign(res, binop_w_fake_RM(op, mkexpr(arg_n), mkexpr(arg_m))); if (Q) putQReg(dreg, mkexpr(res), condT); else putDRegI64(dreg, mkexpr(res), condT); DIP("vmul.%c%d %c%u, %c%u, d%u[%u]\n", INSN(8,8) ? 'f' : 'i', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, mreg, index); return True; } /* VMULL (scalar) */ if (INSN(11,8) == BITS4(1,0,1,0)) { IRTemp res, arg_m, arg_n; IROp dup, get, op; if (dreg & 1) return False; dreg >>= 1; res = newTemp(Ity_V128); arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); switch (size) { case 1: op = U ? Iop_Mull16Ux4 : Iop_Mull16Sx4; break; case 2: op = U ? Iop_Mull32Ux2 : Iop_Mull32Sx2; break; case 0: case 3: return False; default: vassert(0); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); putQReg(dreg, mkexpr(res), condT); DIP("vmull.%c%d q%u, d%u, d%u[%u]\n", U ? 
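/* All the by-scalar forms share this shape: Iop_GetElem* extracts the
   chosen lane of Dm and Iop_Dup* broadcasts it across a whole D or Q value,
   after which the computation is identical to the ordinary three-register
   variant. */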
'u' : 's', 8 << size, dreg, nreg, mreg, index); return True; } /* VQDMULL */ if (INSN(11,8) == BITS4(1,0,1,1) && !U) { IROp op ,op2, dup, get; ULong imm; IRTemp arg_m, arg_n; if (dreg & 1) return False; dreg >>= 1; arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); switch (size) { case 0: case 3: return False; case 1: op = Iop_QDMull16Sx4; op2 = Iop_CmpEQ16x4; imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 2: op = Iop_QDMull32Sx2; op2 = Iop_CmpEQ32x2; imm = 1LL << 31; imm = (imm << 32) | imm; break; default: vassert(0); } putQReg(dreg, binop(op, mkexpr(arg_n), mkexpr(arg_m)), condT); setFlag_QC(binop(Iop_And64, binop(op2, mkexpr(arg_n), mkU64(imm)), binop(op2, mkexpr(arg_m), mkU64(imm))), mkU64(0), False, condT); DIP("vqdmull.s%d q%u, d%u, d%u[%u]\n", 8 << size, dreg, nreg, mreg, index); return True; } /* VQDMULH */ if (INSN(11,8) == BITS4(1,1,0,0)) { IROp op ,op2, dup, get; ULong imm; IRTemp res, arg_m, arg_n; if (Q) { if ((dreg & 1) || (nreg & 1)) return False; dreg >>= 1; nreg >>= 1; res = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); arg_n = newTemp(Ity_V128); assign(arg_n, getQReg(nreg)); switch(size) { case 1: dup = Iop_Dup16x8; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x4; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } else { res = newTemp(Ity_I64); arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } switch (size) { case 0: case 3: return False; case 1: op = Q ? Iop_QDMulHi16Sx8 : Iop_QDMulHi16Sx4; op2 = Q ? Iop_CmpEQ16x8 : Iop_CmpEQ16x4; imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 2: op = Q ? Iop_QDMulHi32Sx4 : Iop_QDMulHi32Sx2; op2 = Q ? Iop_CmpEQ32x4 : Iop_CmpEQ32x2; imm = 1LL << 31; imm = (imm << 32) | imm; break; default: vassert(0); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64, binop(op2, mkexpr(arg_n), Q ? mkU128(imm) : mkU64(imm)), binop(op2, mkexpr(arg_m), Q ? mkU128(imm) : mkU64(imm))), Q ? mkU128(0) : mkU64(0), Q, condT); if (Q) putQReg(dreg, mkexpr(res), condT); else putDRegI64(dreg, mkexpr(res), condT); DIP("vqdmulh.s%d %c%u, %c%u, d%u[%u]\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 
'q' : 'd', nreg, mreg, index); return True; } /* VQRDMULH (scalar) */ if (INSN(11,8) == BITS4(1,1,0,1)) { IROp op ,op2, dup, get; ULong imm; IRTemp res, arg_m, arg_n; if (Q) { if ((dreg & 1) || (nreg & 1)) return False; dreg >>= 1; nreg >>= 1; res = newTemp(Ity_V128); arg_m = newTemp(Ity_V128); arg_n = newTemp(Ity_V128); assign(arg_n, getQReg(nreg)); switch(size) { case 1: dup = Iop_Dup16x8; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x4; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } else { res = newTemp(Ity_I64); arg_m = newTemp(Ity_I64); arg_n = newTemp(Ity_I64); assign(arg_n, getDRegI64(nreg)); switch(size) { case 1: dup = Iop_Dup16x4; get = Iop_GetElem16x4; index = mreg >> 3; mreg &= 7; break; case 2: dup = Iop_Dup32x2; get = Iop_GetElem32x2; index = mreg >> 4; mreg &= 0xf; break; case 0: case 3: return False; default: vassert(0); } assign(arg_m, unop(dup, binop(get, getDRegI64(mreg), mkU8(index)))); } switch (size) { case 0: case 3: return False; case 1: op = Q ? Iop_QRDMulHi16Sx8 : Iop_QRDMulHi16Sx4; op2 = Q ? Iop_CmpEQ16x8 : Iop_CmpEQ16x4; imm = 1LL << 15; imm = (imm << 16) | imm; imm = (imm << 32) | imm; break; case 2: op = Q ? Iop_QRDMulHi32Sx4 : Iop_QRDMulHi32Sx2; op2 = Q ? Iop_CmpEQ32x4 : Iop_CmpEQ32x2; imm = 1LL << 31; imm = (imm << 32) | imm; break; default: vassert(0); } assign(res, binop(op, mkexpr(arg_n), mkexpr(arg_m))); setFlag_QC(binop(Q ? Iop_AndV128 : Iop_And64, binop(op2, mkexpr(arg_n), Q ? mkU128(imm) : mkU64(imm)), binop(op2, mkexpr(arg_m), Q ? mkU128(imm) : mkU64(imm))), Q ? mkU128(0) : mkU64(0), Q, condT); if (Q) putQReg(dreg, mkexpr(res), condT); else putDRegI64(dreg, mkexpr(res), condT); DIP("vqrdmulh.s%d %c%u, %c%u, d%u[%u]\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', nreg, mreg, index); return True; } return False; # undef INSN } /* A7.4.4 Two registers and a shift amount */ static Bool dis_neon_data_2reg_and_shift ( UInt theInstr, IRTemp condT ) { UInt A = (theInstr >> 8) & 0xf; UInt B = (theInstr >> 6) & 1; UInt L = (theInstr >> 7) & 1; UInt U = (theInstr >> 24) & 1; UInt Q = B; UInt imm6 = (theInstr >> 16) & 0x3f; UInt shift_imm; UInt size = 4; UInt tmp; UInt mreg = get_neon_m_regno(theInstr); UInt dreg = get_neon_d_regno(theInstr); ULong imm = 0; IROp op, cvt, add = Iop_INVALID, cvt2, op_rev; IRTemp reg_m, res, mask; if (L == 0 && ((theInstr >> 19) & 7) == 0) /* It is one reg and immediate */ return False; tmp = (L << 6) | imm6; if (tmp & 0x40) { size = 3; shift_imm = 64 - imm6; } else if (tmp & 0x20) { size = 2; shift_imm = 64 - imm6; } else if (tmp & 0x10) { size = 1; shift_imm = 32 - imm6; } else if (tmp & 0x8) { size = 0; shift_imm = 16 - imm6; } else { return False; } switch (A) { case 3: case 2: /* VRSHR, VRSRA */ if (shift_imm > 0) { IRExpr *imm_val; imm = 1L; switch (size) { case 0: imm = (imm << 8) | imm; /* fall through */ case 1: imm = (imm << 16) | imm; /* fall through */ case 2: imm = (imm << 32) | imm; /* fall through */ case 3: break; default: vassert(0); } if (Q) { reg_m = newTemp(Ity_V128); res = newTemp(Ity_V128); imm_val = binop(Iop_64HLtoV128, mkU64(imm), mkU64(imm)); assign(reg_m, getQReg(mreg)); switch (size) { case 0: add = Iop_Add8x16; op = U ? Iop_ShrN8x16 : Iop_SarN8x16; break; case 1: add = Iop_Add16x8; op = U ? Iop_ShrN16x8 : Iop_SarN16x8; break; case 2: add = Iop_Add32x4; op = U ? 
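/* Shift decoding for this whole group: the position of the leading one in
   L:imm6 selects the lane size, and the architected right-shift amount is
   (2*lanewidth - imm6) for 8/16/32-bit lanes, or 64 - imm6 when L is set.
   The rounding shifts add bit (shift_imm - 1) of each source lane,
   extracted with the per-lane LSB mask built in `imm`
   (e.g. 0x0101010101010101 for 8-bit lanes). */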
Iop_ShrN32x4 : Iop_SarN32x4; break; case 3: add = Iop_Add64x2; op = U ? Iop_ShrN64x2 : Iop_SarN64x2; break; default: vassert(0); } } else { reg_m = newTemp(Ity_I64); res = newTemp(Ity_I64); imm_val = mkU64(imm); assign(reg_m, getDRegI64(mreg)); switch (size) { case 0: add = Iop_Add8x8; op = U ? Iop_ShrN8x8 : Iop_SarN8x8; break; case 1: add = Iop_Add16x4; op = U ? Iop_ShrN16x4 : Iop_SarN16x4; break; case 2: add = Iop_Add32x2; op = U ? Iop_ShrN32x2 : Iop_SarN32x2; break; case 3: add = Iop_Add64; op = U ? Iop_Shr64 : Iop_Sar64; break; default: vassert(0); } } assign(res, binop(add, binop(op, mkexpr(reg_m), mkU8(shift_imm)), binop(Q ? Iop_AndV128 : Iop_And64, binop(op, mkexpr(reg_m), mkU8(shift_imm - 1)), imm_val))); } else { if (Q) { res = newTemp(Ity_V128); assign(res, getQReg(mreg)); } else { res = newTemp(Ity_I64); assign(res, getDRegI64(mreg)); } } if (A == 3) { if (Q) { putQReg(dreg, binop(add, mkexpr(res), getQReg(dreg)), condT); } else { putDRegI64(dreg, binop(add, mkexpr(res), getDRegI64(dreg)), condT); } DIP("vrsra.%c%d %c%u, %c%u, #%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); } else { if (Q) { putQReg(dreg, mkexpr(res), condT); } else { putDRegI64(dreg, mkexpr(res), condT); } DIP("vrshr.%c%d %c%u, %c%u, #%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); } return True; case 1: case 0: /* VSHR, VSRA */ if (Q) { reg_m = newTemp(Ity_V128); assign(reg_m, getQReg(mreg)); res = newTemp(Ity_V128); } else { reg_m = newTemp(Ity_I64); assign(reg_m, getDRegI64(mreg)); res = newTemp(Ity_I64); } if (Q) { switch (size) { case 0: op = U ? Iop_ShrN8x16 : Iop_SarN8x16; add = Iop_Add8x16; break; case 1: op = U ? Iop_ShrN16x8 : Iop_SarN16x8; add = Iop_Add16x8; break; case 2: op = U ? Iop_ShrN32x4 : Iop_SarN32x4; add = Iop_Add32x4; break; case 3: op = U ? Iop_ShrN64x2 : Iop_SarN64x2; add = Iop_Add64x2; break; default: vassert(0); } } else { switch (size) { case 0: op = U ? Iop_ShrN8x8 : Iop_SarN8x8; add = Iop_Add8x8; break; case 1: op = U ? Iop_ShrN16x4 : Iop_SarN16x4; add = Iop_Add16x4; break; case 2: op = U ? Iop_ShrN32x2 : Iop_SarN32x2; add = Iop_Add32x2; break; case 3: op = U ? Iop_Shr64 : Iop_Sar64; add = Iop_Add64; break; default: vassert(0); } } assign(res, binop(op, mkexpr(reg_m), mkU8(shift_imm))); if (A == 1) { if (Q) { putQReg(dreg, binop(add, mkexpr(res), getQReg(dreg)), condT); } else { putDRegI64(dreg, binop(add, mkexpr(res), getDRegI64(dreg)), condT); } DIP("vsra.%c%d %c%u, %c%u, #%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); } else { if (Q) { putQReg(dreg, mkexpr(res), condT); } else { putDRegI64(dreg, mkexpr(res), condT); } DIP("vshr.%c%d %c%u, %c%u, #%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); } return True; case 4: /* VSRI */ if (!U) return False; if (Q) { res = newTemp(Ity_V128); mask = newTemp(Ity_V128); } else { res = newTemp(Ity_I64); mask = newTemp(Ity_I64); } switch (size) { case 0: op = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; break; case 1: op = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; break; case 2: op = Q ? Iop_ShrN32x4 : Iop_ShrN32x2; break; case 3: op = Q ? 
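/* VSRI keeps the top shift_imm bits of each destination lane and inserts
   the shifted source into the remainder: mask = ~0 >> shift_imm (per lane)
   marks the written bits, and the result is
   (dreg & ~mask) | (mreg >> shift_imm); the logical shift already clears
   the bits outside mask. */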
Iop_ShrN64x2 : Iop_Shr64; break; default: vassert(0); } if (Q) { assign(mask, binop(op, binop(Iop_64HLtoV128, mkU64(0xFFFFFFFFFFFFFFFFLL), mkU64(0xFFFFFFFFFFFFFFFFLL)), mkU8(shift_imm))); assign(res, binop(Iop_OrV128, binop(Iop_AndV128, getQReg(dreg), unop(Iop_NotV128, mkexpr(mask))), binop(op, getQReg(mreg), mkU8(shift_imm)))); putQReg(dreg, mkexpr(res), condT); } else { assign(mask, binop(op, mkU64(0xFFFFFFFFFFFFFFFFLL), mkU8(shift_imm))); assign(res, binop(Iop_Or64, binop(Iop_And64, getDRegI64(dreg), unop(Iop_Not64, mkexpr(mask))), binop(op, getDRegI64(mreg), mkU8(shift_imm)))); putDRegI64(dreg, mkexpr(res), condT); } DIP("vsri.%d %c%u, %c%u, #%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); return True; case 5: if (U) { /* VSLI */ shift_imm = 8 * (1 << size) - shift_imm; if (Q) { res = newTemp(Ity_V128); mask = newTemp(Ity_V128); } else { res = newTemp(Ity_I64); mask = newTemp(Ity_I64); } switch (size) { case 0: op = Q ? Iop_ShlN8x16 : Iop_ShlN8x8; break; case 1: op = Q ? Iop_ShlN16x8 : Iop_ShlN16x4; break; case 2: op = Q ? Iop_ShlN32x4 : Iop_ShlN32x2; break; case 3: op = Q ? Iop_ShlN64x2 : Iop_Shl64; break; default: vassert(0); } if (Q) { assign(mask, binop(op, binop(Iop_64HLtoV128, mkU64(0xFFFFFFFFFFFFFFFFLL), mkU64(0xFFFFFFFFFFFFFFFFLL)), mkU8(shift_imm))); assign(res, binop(Iop_OrV128, binop(Iop_AndV128, getQReg(dreg), unop(Iop_NotV128, mkexpr(mask))), binop(op, getQReg(mreg), mkU8(shift_imm)))); putQReg(dreg, mkexpr(res), condT); } else { assign(mask, binop(op, mkU64(0xFFFFFFFFFFFFFFFFLL), mkU8(shift_imm))); assign(res, binop(Iop_Or64, binop(Iop_And64, getDRegI64(dreg), unop(Iop_Not64, mkexpr(mask))), binop(op, getDRegI64(mreg), mkU8(shift_imm)))); putDRegI64(dreg, mkexpr(res), condT); } DIP("vsli.%d %c%u, %c%u, #%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); return True; } else { /* VSHL #imm */ shift_imm = 8 * (1 << size) - shift_imm; if (Q) { res = newTemp(Ity_V128); } else { res = newTemp(Ity_I64); } switch (size) { case 0: op = Q ? Iop_ShlN8x16 : Iop_ShlN8x8; break; case 1: op = Q ? Iop_ShlN16x8 : Iop_ShlN16x4; break; case 2: op = Q ? Iop_ShlN32x4 : Iop_ShlN32x2; break; case 3: op = Q ? Iop_ShlN64x2 : Iop_Shl64; break; default: vassert(0); } assign(res, binop(op, Q ? getQReg(mreg) : getDRegI64(mreg), mkU8(shift_imm))); if (Q) { putQReg(dreg, mkexpr(res), condT); } else { putDRegI64(dreg, mkexpr(res), condT); } DIP("vshl.i%d %c%u, %c%u, #%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); return True; } break; case 6: case 7: /* VQSHL, VQSHLU */ shift_imm = 8 * (1 << size) - shift_imm; if (U) { if (A & 1) { switch (size) { case 0: op = Q ? Iop_QShlNsatUU8x16 : Iop_QShlNsatUU8x8; op_rev = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; break; case 1: op = Q ? Iop_QShlNsatUU16x8 : Iop_QShlNsatUU16x4; op_rev = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; break; case 2: op = Q ? Iop_QShlNsatUU32x4 : Iop_QShlNsatUU32x2; op_rev = Q ? Iop_ShrN32x4 : Iop_ShrN32x2; break; case 3: op = Q ? Iop_QShlNsatUU64x2 : Iop_QShlNsatUU64x1; op_rev = Q ? Iop_ShrN64x2 : Iop_Shr64; break; default: vassert(0); } DIP("vqshl.u%d %c%u, %c%u, #%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); } else { switch (size) { case 0: op = Q ? Iop_QShlNsatSU8x16 : Iop_QShlNsatSU8x8; op_rev = Q ? Iop_ShrN8x16 : Iop_ShrN8x8; break; case 1: op = Q ? Iop_QShlNsatSU16x8 : Iop_QShlNsatSU16x4; op_rev = Q ? Iop_ShrN16x8 : Iop_ShrN16x4; break; case 2: op = Q ? Iop_QShlNsatSU32x4 : Iop_QShlNsatSU32x2; op_rev = Q ? 
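/* The left-shift immediate forms (VSLI, VSHL, VQSHL{,U}) rebase the decoded
   amount as lanewidth - shift_imm.  For the saturating variants, saturation
   is detected by a round trip: shift left with the saturating op, shift
   back right with op_rev, and set QC if the result no longer equals the
   original operand. */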
Iop_ShrN32x4 : Iop_ShrN32x2; break; case 3: op = Q ? Iop_QShlNsatSU64x2 : Iop_QShlNsatSU64x1; op_rev = Q ? Iop_ShrN64x2 : Iop_Shr64; break; default: vassert(0); } DIP("vqshlu.s%d %c%u, %c%u, #%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); } } else { if (!(A & 1)) return False; switch (size) { case 0: op = Q ? Iop_QShlNsatSS8x16 : Iop_QShlNsatSS8x8; op_rev = Q ? Iop_SarN8x16 : Iop_SarN8x8; break; case 1: op = Q ? Iop_QShlNsatSS16x8 : Iop_QShlNsatSS16x4; op_rev = Q ? Iop_SarN16x8 : Iop_SarN16x4; break; case 2: op = Q ? Iop_QShlNsatSS32x4 : Iop_QShlNsatSS32x2; op_rev = Q ? Iop_SarN32x4 : Iop_SarN32x2; break; case 3: op = Q ? Iop_QShlNsatSS64x2 : Iop_QShlNsatSS64x1; op_rev = Q ? Iop_SarN64x2 : Iop_Sar64; break; default: vassert(0); } DIP("vqshl.s%d %c%u, %c%u, #%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, shift_imm); } if (Q) { tmp = newTemp(Ity_V128); res = newTemp(Ity_V128); reg_m = newTemp(Ity_V128); assign(reg_m, getQReg(mreg)); } else { tmp = newTemp(Ity_I64); res = newTemp(Ity_I64); reg_m = newTemp(Ity_I64); assign(reg_m, getDRegI64(mreg)); } assign(res, binop(op, mkexpr(reg_m), mkU8(shift_imm))); assign(tmp, binop(op_rev, mkexpr(res), mkU8(shift_imm))); setFlag_QC(mkexpr(tmp), mkexpr(reg_m), Q, condT); if (Q) putQReg(dreg, mkexpr(res), condT); else putDRegI64(dreg, mkexpr(res), condT); return True; case 8: if (!U) { if (L == 1) return False; size++; dreg = ((theInstr >> 18) & 0x10) | ((theInstr >> 12) & 0xF); mreg = ((theInstr >> 1) & 0x10) | (theInstr & 0xF); if (mreg & 1) return False; mreg >>= 1; if (!B) { /* VSHRN*/ IROp narOp; reg_m = newTemp(Ity_V128); assign(reg_m, getQReg(mreg)); res = newTemp(Ity_I64); switch (size) { case 1: op = Iop_ShrN16x8; narOp = Iop_NarrowUn16to8x8; break; case 2: op = Iop_ShrN32x4; narOp = Iop_NarrowUn32to16x4; break; case 3: op = Iop_ShrN64x2; narOp = Iop_NarrowUn64to32x2; break; default: vassert(0); } assign(res, unop(narOp, binop(op, mkexpr(reg_m), mkU8(shift_imm)))); putDRegI64(dreg, mkexpr(res), condT); DIP("vshrn.i%d d%u, q%u, #%u\n", 8 << size, dreg, mreg, shift_imm); return True; } else { /* VRSHRN */ IROp addOp, shOp, narOp; IRExpr *imm_val; reg_m = newTemp(Ity_V128); assign(reg_m, getQReg(mreg)); res = newTemp(Ity_I64); imm = 1L; switch (size) { case 0: imm = (imm << 8) | imm; /* fall through */ case 1: imm = (imm << 16) | imm; /* fall through */ case 2: imm = (imm << 32) | imm; /* fall through */ case 3: break; default: vassert(0); } imm_val = binop(Iop_64HLtoV128, mkU64(imm), mkU64(imm)); switch (size) { case 1: addOp = Iop_Add16x8; shOp = Iop_ShrN16x8; narOp = Iop_NarrowUn16to8x8; break; case 2: addOp = Iop_Add32x4; shOp = Iop_ShrN32x4; narOp = Iop_NarrowUn32to16x4; break; case 3: addOp = Iop_Add64x2; shOp = Iop_ShrN64x2; narOp = Iop_NarrowUn64to32x2; break; default: vassert(0); } assign(res, unop(narOp, binop(addOp, binop(shOp, mkexpr(reg_m), mkU8(shift_imm)), binop(Iop_AndV128, binop(shOp, mkexpr(reg_m), mkU8(shift_imm - 1)), imm_val)))); putDRegI64(dreg, mkexpr(res), condT); if (shift_imm == 0) { DIP("vmov%d d%u, q%u, #%u\n", 8 << size, dreg, mreg, shift_imm); } else { DIP("vrshrn.i%d d%u, q%u, #%u\n", 8 << size, dreg, mreg, shift_imm); } return True; } } else { /* fall through */ } case 9: dreg = ((theInstr >> 18) & 0x10) | ((theInstr >> 12) & 0xF); mreg = ((theInstr >> 1) & 0x10) | (theInstr & 0xF); if (mreg & 1) return False; mreg >>= 1; size++; if ((theInstr >> 8) & 1) { switch (size) { case 1: op = U ? Iop_ShrN16x8 : Iop_SarN16x8; cvt = U ? 
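/* VQSHRN/VQRSHRN/VQSHRUN: the shifted wide value is saturating-narrowed by
   `cvt`; to decide QC it is re-widened with `cvt2` and compared against the
   pre-narrowing value -- any difference means a lane saturated.  The
   rounding variants add bit (shift_imm - 1) first, as in VRSHR. */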
Iop_QNarrowUn16Uto8Ux8 : Iop_QNarrowUn16Sto8Sx8; cvt2 = U ? Iop_Widen8Uto16x8 : Iop_Widen8Sto16x8; break; case 2: op = U ? Iop_ShrN32x4 : Iop_SarN32x4; cvt = U ? Iop_QNarrowUn32Uto16Ux4 : Iop_QNarrowUn32Sto16Sx4; cvt2 = U ? Iop_Widen16Uto32x4 : Iop_Widen16Sto32x4; break; case 3: op = U ? Iop_ShrN64x2 : Iop_SarN64x2; cvt = U ? Iop_QNarrowUn64Uto32Ux2 : Iop_QNarrowUn64Sto32Sx2; cvt2 = U ? Iop_Widen32Uto64x2 : Iop_Widen32Sto64x2; break; default: vassert(0); } DIP("vq%sshrn.%c%d d%u, q%u, #%u\n", B ? "r" : "", U ? 'u' : 's', 8 << size, dreg, mreg, shift_imm); } else { vassert(U); switch (size) { case 1: op = Iop_SarN16x8; cvt = Iop_QNarrowUn16Sto8Ux8; cvt2 = Iop_Widen8Uto16x8; break; case 2: op = Iop_SarN32x4; cvt = Iop_QNarrowUn32Sto16Ux4; cvt2 = Iop_Widen16Uto32x4; break; case 3: op = Iop_SarN64x2; cvt = Iop_QNarrowUn64Sto32Ux2; cvt2 = Iop_Widen32Uto64x2; break; default: vassert(0); } DIP("vq%sshrun.s%d d%u, q%u, #%u\n", B ? "r" : "", 8 << size, dreg, mreg, shift_imm); } if (B) { if (shift_imm > 0) { imm = 1; switch (size) { case 1: imm = (imm << 16) | imm; /* fall through */ case 2: imm = (imm << 32) | imm; /* fall through */ case 3: break; case 0: default: vassert(0); } switch (size) { case 1: add = Iop_Add16x8; break; case 2: add = Iop_Add32x4; break; case 3: add = Iop_Add64x2; break; case 0: default: vassert(0); } } } reg_m = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(reg_m, getQReg(mreg)); if (B) { /* VQRSHRN, VQRSHRUN */ assign(res, binop(add, binop(op, mkexpr(reg_m), mkU8(shift_imm)), binop(Iop_AndV128, binop(op, mkexpr(reg_m), mkU8(shift_imm - 1)), mkU128(imm)))); } else { /* VQSHRN, VQSHRUN */ assign(res, binop(op, mkexpr(reg_m), mkU8(shift_imm))); } setFlag_QC(unop(cvt2, unop(cvt, mkexpr(res))), mkexpr(res), True, condT); putDRegI64(dreg, unop(cvt, mkexpr(res)), condT); return True; case 10: /* VSHLL VMOVL ::= VSHLL #0 */ if (B) return False; if (dreg & 1) return False; dreg >>= 1; shift_imm = (8 << size) - shift_imm; res = newTemp(Ity_V128); switch (size) { case 0: op = Iop_ShlN16x8; cvt = U ? Iop_Widen8Uto16x8 : Iop_Widen8Sto16x8; break; case 1: op = Iop_ShlN32x4; cvt = U ? Iop_Widen16Uto32x4 : Iop_Widen16Sto32x4; break; case 2: op = Iop_ShlN64x2; cvt = U ? Iop_Widen32Uto64x2 : Iop_Widen32Sto64x2; break; case 3: return False; default: vassert(0); } assign(res, binop(op, unop(cvt, getDRegI64(mreg)), mkU8(shift_imm))); putQReg(dreg, mkexpr(res), condT); if (shift_imm == 0) { DIP("vmovl.%c%d q%u, d%u\n", U ? 'u' : 's', 8 << size, dreg, mreg); } else { DIP("vshll.%c%d q%u, d%u, #%u\n", U ? 'u' : 's', 8 << size, dreg, mreg, shift_imm); } return True; case 14: case 15: /* VCVT floating-point <-> fixed-point */ if ((theInstr >> 8) & 1) { if (U) { op = Q ? Iop_F32ToFixed32Ux4_RZ : Iop_F32ToFixed32Ux2_RZ; } else { op = Q ? Iop_F32ToFixed32Sx4_RZ : Iop_F32ToFixed32Sx2_RZ; } DIP("vcvt.%c32.f32 %c%u, %c%u, #%u\n", U ? 'u' : 's', Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg, 64 - ((theInstr >> 16) & 0x3f)); } else { if (U) { op = Q ? Iop_Fixed32UToF32x4_RN : Iop_Fixed32UToF32x2_RN; } else { op = Q ? Iop_Fixed32SToF32x4_RN : Iop_Fixed32SToF32x2_RN; } DIP("vcvt.f32.%c32 %c%u, %c%u, #%u\n", U ? 'u' : 's', Q ? 'q' : 'd', dreg, Q ? 
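/* VCVT between FP and fixed point: the number of fraction bits is
   64 - imm6, and since only the 32-bit form exists, imm6 bit 5 (instruction
   bit 21, checked below) must be set, giving 1..32 fraction bits.  VMOVL
   above is simply VSHLL with a shift of 0. */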
'q' : 'd', mreg, 64 - ((theInstr >> 16) & 0x3f)); } if (((theInstr >> 21) & 1) == 0) return False; if (Q) { putQReg(dreg, binop(op, getQReg(mreg), mkU8(64 - ((theInstr >> 16) & 0x3f))), condT); } else { putDRegI64(dreg, binop(op, getDRegI64(mreg), mkU8(64 - ((theInstr >> 16) & 0x3f))), condT); } return True; default: return False; } return False; } /* A7.4.5 Two registers, miscellaneous */ static Bool dis_neon_data_2reg_misc ( UInt theInstr, IRTemp condT ) { UInt A = (theInstr >> 16) & 3; UInt B = (theInstr >> 6) & 0x1f; UInt Q = (theInstr >> 6) & 1; UInt U = (theInstr >> 24) & 1; UInt size = (theInstr >> 18) & 3; UInt dreg = get_neon_d_regno(theInstr); UInt mreg = get_neon_m_regno(theInstr); UInt F = (theInstr >> 10) & 1; IRTemp arg_d = IRTemp_INVALID; IRTemp arg_m = IRTemp_INVALID; IRTemp res = IRTemp_INVALID; switch (A) { case 0: if (Q) { arg_m = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(arg_m, getQReg(mreg)); } else { arg_m = newTemp(Ity_I64); res = newTemp(Ity_I64); assign(arg_m, getDRegI64(mreg)); } switch (B >> 1) { case 0: { /* VREV64 */ IROp op; switch (size) { case 0: op = Q ? Iop_Reverse8sIn64_x2 : Iop_Reverse8sIn64_x1; break; case 1: op = Q ? Iop_Reverse16sIn64_x2 : Iop_Reverse16sIn64_x1; break; case 2: op = Q ? Iop_Reverse32sIn64_x2 : Iop_Reverse32sIn64_x1; break; case 3: return False; default: vassert(0); } assign(res, unop(op, mkexpr(arg_m))); DIP("vrev64.%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 1: { /* VREV32 */ IROp op; switch (size) { case 0: op = Q ? Iop_Reverse8sIn32_x4 : Iop_Reverse8sIn32_x2; break; case 1: op = Q ? Iop_Reverse16sIn32_x4 : Iop_Reverse16sIn32_x2; break; case 2: case 3: return False; default: vassert(0); } assign(res, unop(op, mkexpr(arg_m))); DIP("vrev32.%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 2: { /* VREV16 */ IROp op; switch (size) { case 0: op = Q ? Iop_Reverse8sIn16_x8 : Iop_Reverse8sIn16_x4; break; case 1: case 2: case 3: return False; default: vassert(0); } assign(res, unop(op, mkexpr(arg_m))); DIP("vrev16.%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 3: return False; case 4: case 5: { /* VPADDL */ IROp op; U = (theInstr >> 7) & 1; if (Q) { switch (size) { case 0: op = U ? Iop_PwAddL8Ux16 : Iop_PwAddL8Sx16; break; case 1: op = U ? Iop_PwAddL16Ux8 : Iop_PwAddL16Sx8; break; case 2: op = U ? Iop_PwAddL32Ux4 : Iop_PwAddL32Sx4; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op = U ? Iop_PwAddL8Ux8 : Iop_PwAddL8Sx8; break; case 1: op = U ? Iop_PwAddL16Ux4 : Iop_PwAddL16Sx4; break; case 2: op = U ? Iop_PwAddL32Ux2 : Iop_PwAddL32Sx2; break; case 3: return False; default: vassert(0); } } assign(res, unop(op, mkexpr(arg_m))); DIP("vpaddl.%c%d %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 6: case 7: return False; case 8: { /* VCLS */ IROp op; switch (size) { case 0: op = Q ? Iop_Cls8x16 : Iop_Cls8x8; break; case 1: op = Q ? Iop_Cls16x8 : Iop_Cls16x4; break; case 2: op = Q ? Iop_Cls32x4 : Iop_Cls32x2; break; case 3: return False; default: vassert(0); } assign(res, unop(op, mkexpr(arg_m))); DIP("vcls.s%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 9: { /* VCLZ */ IROp op; switch (size) { case 0: op = Q ? Iop_Clz8x16 : Iop_Clz8x8; break; case 1: op = Q ? Iop_Clz16x8 : Iop_Clz16x4; break; case 2: op = Q ? 
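/* The VREV64/32/16 cases above reverse the order of the size-selected lanes
   inside each 64-, 32- or 16-bit region; the region must be strictly wider
   than the lane, so e.g. vrev16 exists only for 8-bit lanes. */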
Iop_Clz32x4 : Iop_Clz32x2; break; case 3: return False; default: vassert(0); } assign(res, unop(op, mkexpr(arg_m))); DIP("vclz.i%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 10: /* VCNT */ assign(res, unop(Q ? Iop_Cnt8x16 : Iop_Cnt8x8, mkexpr(arg_m))); DIP("vcnt.8 %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; case 11: /* VMVN */ if (Q) assign(res, unop(Iop_NotV128, mkexpr(arg_m))); else assign(res, unop(Iop_Not64, mkexpr(arg_m))); DIP("vmvn %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; case 12: case 13: { /* VPADAL */ IROp op, add_op; U = (theInstr >> 7) & 1; if (Q) { switch (size) { case 0: op = U ? Iop_PwAddL8Ux16 : Iop_PwAddL8Sx16; add_op = Iop_Add16x8; break; case 1: op = U ? Iop_PwAddL16Ux8 : Iop_PwAddL16Sx8; add_op = Iop_Add32x4; break; case 2: op = U ? Iop_PwAddL32Ux4 : Iop_PwAddL32Sx4; add_op = Iop_Add64x2; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op = U ? Iop_PwAddL8Ux8 : Iop_PwAddL8Sx8; add_op = Iop_Add16x4; break; case 1: op = U ? Iop_PwAddL16Ux4 : Iop_PwAddL16Sx4; add_op = Iop_Add32x2; break; case 2: op = U ? Iop_PwAddL32Ux2 : Iop_PwAddL32Sx2; add_op = Iop_Add64; break; case 3: return False; default: vassert(0); } } if (Q) { arg_d = newTemp(Ity_V128); assign(arg_d, getQReg(dreg)); } else { arg_d = newTemp(Ity_I64); assign(arg_d, getDRegI64(dreg)); } assign(res, binop(add_op, unop(op, mkexpr(arg_m)), mkexpr(arg_d))); DIP("vpadal.%c%d %c%u, %c%u\n", U ? 'u' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 14: { /* VQABS */ IROp op_sub, op_qsub, op_cmp; IRTemp mask, tmp; IRExpr *zero1, *zero2; IRExpr *neg, *neg2; if (Q) { zero1 = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); zero2 = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); mask = newTemp(Ity_V128); tmp = newTemp(Ity_V128); } else { zero1 = mkU64(0); zero2 = mkU64(0); mask = newTemp(Ity_I64); tmp = newTemp(Ity_I64); } switch (size) { case 0: op_sub = Q ? Iop_Sub8x16 : Iop_Sub8x8; op_qsub = Q ? Iop_QSub8Sx16 : Iop_QSub8Sx8; op_cmp = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8; break; case 1: op_sub = Q ? Iop_Sub16x8 : Iop_Sub16x4; op_qsub = Q ? Iop_QSub16Sx8 : Iop_QSub16Sx4; op_cmp = Q ? Iop_CmpGT16Sx8 : Iop_CmpGT16Sx4; break; case 2: op_sub = Q ? Iop_Sub32x4 : Iop_Sub32x2; op_qsub = Q ? Iop_QSub32Sx4 : Iop_QSub32Sx2; op_cmp = Q ? Iop_CmpGT32Sx4 : Iop_CmpGT32Sx2; break; case 3: return False; default: vassert(0); } assign(mask, binop(op_cmp, mkexpr(arg_m), zero1)); neg = binop(op_qsub, zero2, mkexpr(arg_m)); neg2 = binop(op_sub, zero2, mkexpr(arg_m)); assign(res, binop(Q ? Iop_OrV128 : Iop_Or64, binop(Q ? Iop_AndV128 : Iop_And64, mkexpr(mask), mkexpr(arg_m)), binop(Q ? Iop_AndV128 : Iop_And64, unop(Q ? Iop_NotV128 : Iop_Not64, mkexpr(mask)), neg))); assign(tmp, binop(Q ? Iop_OrV128 : Iop_Or64, binop(Q ? Iop_AndV128 : Iop_And64, mkexpr(mask), mkexpr(arg_m)), binop(Q ? Iop_AndV128 : Iop_And64, unop(Q ? Iop_NotV128 : Iop_Not64, mkexpr(mask)), neg2))); setFlag_QC(mkexpr(res), mkexpr(tmp), Q, condT); DIP("vqabs.s%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 15: { /* VQNEG */ IROp op, op2; IRExpr *zero; if (Q) { zero = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); } else { zero = mkU64(0); } switch (size) { case 0: op = Q ? Iop_QSub8Sx16 : Iop_QSub8Sx8; op2 = Q ? Iop_Sub8x16 : Iop_Sub8x8; break; case 1: op = Q ? Iop_QSub16Sx8 : Iop_QSub16Sx4; op2 = Q ? Iop_Sub16x8 : Iop_Sub16x4; break; case 2: op = Q ? Iop_QSub32Sx4 : Iop_QSub32Sx2; op2 = Q ? 
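/* VQABS/VQNEG have no saturating abs/neg primitives, so they use saturating
   subtract-from-zero, selected through a (m > 0) mask in the VQABS case.
   QC is set by comparing against the same computation done with a wrapping
   subtract, which differs exactly when an input lane is the most negative
   value. */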
Iop_Sub32x4 : Iop_Sub32x2; break; case 3: return False; default: vassert(0); } assign(res, binop(op, zero, mkexpr(arg_m))); setFlag_QC(mkexpr(res), binop(op2, zero, mkexpr(arg_m)), Q, condT); DIP("vqneg.s%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } default: vassert(0); } if (Q) { putQReg(dreg, mkexpr(res), condT); } else { putDRegI64(dreg, mkexpr(res), condT); } return True; case 1: if (Q) { arg_m = newTemp(Ity_V128); res = newTemp(Ity_V128); assign(arg_m, getQReg(mreg)); } else { arg_m = newTemp(Ity_I64); res = newTemp(Ity_I64); assign(arg_m, getDRegI64(mreg)); } switch ((B >> 1) & 0x7) { case 0: { /* VCGT #0 */ IRExpr *zero; IROp op; if (Q) { zero = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); } else { zero = mkU64(0); } if (F) { switch (size) { case 0: case 1: case 3: return False; case 2: op = Q ? Iop_CmpGT32Fx4 : Iop_CmpGT32Fx2; break; default: vassert(0); } } else { switch (size) { case 0: op = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8; break; case 1: op = Q ? Iop_CmpGT16Sx8 : Iop_CmpGT16Sx4; break; case 2: op = Q ? Iop_CmpGT32Sx4 : Iop_CmpGT32Sx2; break; case 3: return False; default: vassert(0); } } assign(res, binop(op, mkexpr(arg_m), zero)); DIP("vcgt.%c%d %c%u, %c%u, #0\n", F ? 'f' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 1: { /* VCGE #0 */ IROp op; IRExpr *zero; if (Q) { zero = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); } else { zero = mkU64(0); } if (F) { switch (size) { case 0: case 1: case 3: return False; case 2: op = Q ? Iop_CmpGE32Fx4 : Iop_CmpGE32Fx2; break; default: vassert(0); } assign(res, binop(op, mkexpr(arg_m), zero)); } else { switch (size) { case 0: op = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8; break; case 1: op = Q ? Iop_CmpGT16Sx8 : Iop_CmpGT16Sx4; break; case 2: op = Q ? Iop_CmpGT32Sx4 : Iop_CmpGT32Sx2; break; case 3: return False; default: vassert(0); } assign(res, unop(Q ? Iop_NotV128 : Iop_Not64, binop(op, zero, mkexpr(arg_m)))); } DIP("vcge.%c%d %c%u, %c%u, #0\n", F ? 'f' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 2: { /* VCEQ #0 */ IROp op; IRExpr *zero; if (F) { if (Q) { zero = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); } else { zero = mkU64(0); } switch (size) { case 0: case 1: case 3: return False; case 2: op = Q ? Iop_CmpEQ32Fx4 : Iop_CmpEQ32Fx2; break; default: vassert(0); } assign(res, binop(op, zero, mkexpr(arg_m))); } else { switch (size) { case 0: op = Q ? Iop_CmpNEZ8x16 : Iop_CmpNEZ8x8; break; case 1: op = Q ? Iop_CmpNEZ16x8 : Iop_CmpNEZ16x4; break; case 2: op = Q ? Iop_CmpNEZ32x4 : Iop_CmpNEZ32x2; break; case 3: return False; default: vassert(0); } assign(res, unop(Q ? Iop_NotV128 : Iop_Not64, unop(op, mkexpr(arg_m)))); } DIP("vceq.%c%d %c%u, %c%u, #0\n", F ? 'f' : 'i', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 3: { /* VCLE #0 */ IRExpr *zero; IROp op; if (Q) { zero = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); } else { zero = mkU64(0); } if (F) { switch (size) { case 0: case 1: case 3: return False; case 2: op = Q ? Iop_CmpGE32Fx4 : Iop_CmpGE32Fx2; break; default: vassert(0); } assign(res, binop(op, zero, mkexpr(arg_m))); } else { switch (size) { case 0: op = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8; break; case 1: op = Q ? Iop_CmpGT16Sx8 : Iop_CmpGT16Sx4; break; case 2: op = Q ? Iop_CmpGT32Sx4 : Iop_CmpGT32Sx2; break; case 3: return False; default: vassert(0); } assign(res, unop(Q ? Iop_NotV128 : Iop_Not64, binop(op, mkexpr(arg_m), zero))); } DIP("vcle.%c%d %c%u, %c%u, #0\n", F ? 'f' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 
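/* Only VCGT #0 here (and VCLT #0 below) map directly onto
   signed-greater-than; the rest are synthesised: VCGE #0 = NOT(0 > m),
   VCEQ #0 = NOT(CmpNEZ(m)), VCLE #0 = NOT(m > 0). */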
'q' : 'd', mreg); break; } case 4: { /* VCLT #0 */ IROp op; IRExpr *zero; if (Q) { zero = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); } else { zero = mkU64(0); } if (F) { switch (size) { case 0: case 1: case 3: return False; case 2: op = Q ? Iop_CmpGT32Fx4 : Iop_CmpGT32Fx2; break; default: vassert(0); } assign(res, binop(op, zero, mkexpr(arg_m))); } else { switch (size) { case 0: op = Q ? Iop_CmpGT8Sx16 : Iop_CmpGT8Sx8; break; case 1: op = Q ? Iop_CmpGT16Sx8 : Iop_CmpGT16Sx4; break; case 2: op = Q ? Iop_CmpGT32Sx4 : Iop_CmpGT32Sx2; break; case 3: return False; default: vassert(0); } assign(res, binop(op, zero, mkexpr(arg_m))); } DIP("vclt.%c%d %c%u, %c%u, #0\n", F ? 'f' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 5: return False; case 6: { /* VABS */ if (!F) { IROp op; switch(size) { case 0: op = Q ? Iop_Abs8x16 : Iop_Abs8x8; break; case 1: op = Q ? Iop_Abs16x8 : Iop_Abs16x4; break; case 2: op = Q ? Iop_Abs32x4 : Iop_Abs32x2; break; case 3: return False; default: vassert(0); } assign(res, unop(op, mkexpr(arg_m))); } else { assign(res, unop(Q ? Iop_Abs32Fx4 : Iop_Abs32Fx2, mkexpr(arg_m))); } DIP("vabs.%c%d %c%u, %c%u\n", F ? 'f' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } case 7: { /* VNEG */ IROp op; IRExpr *zero; if (F) { switch (size) { case 0: case 1: case 3: return False; case 2: op = Q ? Iop_Neg32Fx4 : Iop_Neg32Fx2; break; default: vassert(0); } assign(res, unop(op, mkexpr(arg_m))); } else { if (Q) { zero = binop(Iop_64HLtoV128, mkU64(0), mkU64(0)); } else { zero = mkU64(0); } switch (size) { case 0: op = Q ? Iop_Sub8x16 : Iop_Sub8x8; break; case 1: op = Q ? Iop_Sub16x8 : Iop_Sub16x4; break; case 2: op = Q ? Iop_Sub32x4 : Iop_Sub32x2; break; case 3: return False; default: vassert(0); } assign(res, binop(op, zero, mkexpr(arg_m))); } DIP("vneg.%c%d %c%u, %c%u\n", F ? 'f' : 's', 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; } default: vassert(0); } if (Q) { putQReg(dreg, mkexpr(res), condT); } else { putDRegI64(dreg, mkexpr(res), condT); } return True; case 2: if ((B >> 1) == 0) { /* VSWP */ if (Q) { arg_m = newTemp(Ity_V128); assign(arg_m, getQReg(mreg)); putQReg(mreg, getQReg(dreg), condT); putQReg(dreg, mkexpr(arg_m), condT); } else { arg_m = newTemp(Ity_I64); assign(arg_m, getDRegI64(mreg)); putDRegI64(mreg, getDRegI64(dreg), condT); putDRegI64(dreg, mkexpr(arg_m), condT); } DIP("vswp %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 
'q' : 'd', mreg); return True; } else if ((B >> 1) == 1) { /* VTRN */ IROp op_odd = Iop_INVALID, op_even = Iop_INVALID; IRTemp old_m, old_d, new_d, new_m; if (Q) { old_m = newTemp(Ity_V128); old_d = newTemp(Ity_V128); new_m = newTemp(Ity_V128); new_d = newTemp(Ity_V128); assign(old_m, getQReg(mreg)); assign(old_d, getQReg(dreg)); } else { old_m = newTemp(Ity_I64); old_d = newTemp(Ity_I64); new_m = newTemp(Ity_I64); new_d = newTemp(Ity_I64); assign(old_m, getDRegI64(mreg)); assign(old_d, getDRegI64(dreg)); } if (Q) { switch (size) { case 0: op_odd = Iop_InterleaveOddLanes8x16; op_even = Iop_InterleaveEvenLanes8x16; break; case 1: op_odd = Iop_InterleaveOddLanes16x8; op_even = Iop_InterleaveEvenLanes16x8; break; case 2: op_odd = Iop_InterleaveOddLanes32x4; op_even = Iop_InterleaveEvenLanes32x4; break; case 3: return False; default: vassert(0); } } else { switch (size) { case 0: op_odd = Iop_InterleaveOddLanes8x8; op_even = Iop_InterleaveEvenLanes8x8; break; case 1: op_odd = Iop_InterleaveOddLanes16x4; op_even = Iop_InterleaveEvenLanes16x4; break; case 2: op_odd = Iop_InterleaveHI32x2; op_even = Iop_InterleaveLO32x2; break; case 3: return False; default: vassert(0); } } assign(new_d, binop(op_even, mkexpr(old_m), mkexpr(old_d))); assign(new_m, binop(op_odd, mkexpr(old_m), mkexpr(old_d))); if (Q) { putQReg(dreg, mkexpr(new_d), condT); putQReg(mreg, mkexpr(new_m), condT); } else { putDRegI64(dreg, mkexpr(new_d), condT); putDRegI64(mreg, mkexpr(new_m), condT); } DIP("vtrn.%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); return True; } else if ((B >> 1) == 2) { /* VUZP */ IROp op_even, op_odd; IRTemp old_m, old_d, new_m, new_d; if (!Q && size == 2) return False; if (Q) { old_m = newTemp(Ity_V128); old_d = newTemp(Ity_V128); new_m = newTemp(Ity_V128); new_d = newTemp(Ity_V128); assign(old_m, getQReg(mreg)); assign(old_d, getQReg(dreg)); } else { old_m = newTemp(Ity_I64); old_d = newTemp(Ity_I64); new_m = newTemp(Ity_I64); new_d = newTemp(Ity_I64); assign(old_m, getDRegI64(mreg)); assign(old_d, getDRegI64(dreg)); } switch (size) { case 0: op_odd = Q ? Iop_CatOddLanes8x16 : Iop_CatOddLanes8x8; op_even = Q ? Iop_CatEvenLanes8x16 : Iop_CatEvenLanes8x8; break; case 1: op_odd = Q ? Iop_CatOddLanes16x8 : Iop_CatOddLanes16x4; op_even = Q ? Iop_CatEvenLanes16x8 : Iop_CatEvenLanes16x4; break; case 2: op_odd = Iop_CatOddLanes32x4; op_even = Iop_CatEvenLanes32x4; break; case 3: return False; default: vassert(0); } assign(new_d, binop(op_even, mkexpr(old_m), mkexpr(old_d))); assign(new_m, binop(op_odd, mkexpr(old_m), mkexpr(old_d))); if (Q) { putQReg(dreg, mkexpr(new_d), condT); putQReg(mreg, mkexpr(new_m), condT); } else { putDRegI64(dreg, mkexpr(new_d), condT); putDRegI64(mreg, mkexpr(new_m), condT); } DIP("vuzp.%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); return True; } else if ((B >> 1) == 3) { /* VZIP */ IROp op_lo, op_hi; IRTemp old_m, old_d, new_m, new_d; if (!Q && size == 2) return False; if (Q) { old_m = newTemp(Ity_V128); old_d = newTemp(Ity_V128); new_m = newTemp(Ity_V128); new_d = newTemp(Ity_V128); assign(old_m, getQReg(mreg)); assign(old_d, getQReg(dreg)); } else { old_m = newTemp(Ity_I64); old_d = newTemp(Ity_I64); new_m = newTemp(Ity_I64); new_d = newTemp(Ity_I64); assign(old_m, getDRegI64(mreg)); assign(old_d, getDRegI64(dreg)); } switch (size) { case 0: op_hi = Q ? Iop_InterleaveHI8x16 : Iop_InterleaveHI8x8; op_lo = Q ? Iop_InterleaveLO8x16 : Iop_InterleaveLO8x8; break; case 1: op_hi = Q ? 
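/* VTRN/VUZP/VZIP write both registers, built from the interleave/cat
   primitives applied to (m, d).  The 64-bit VTRN with 32-bit lanes has only
   two lanes, so plain InterleaveHI/LO32x2 suffices; the 64-bit 32-bit-lane
   VUZP and VZIP are rejected above (!Q && size == 2). */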
Iop_InterleaveHI16x8 : Iop_InterleaveHI16x4; op_lo = Q ? Iop_InterleaveLO16x8 : Iop_InterleaveLO16x4; break; case 2: op_hi = Iop_InterleaveHI32x4; op_lo = Iop_InterleaveLO32x4; break; case 3: return False; default: vassert(0); } assign(new_d, binop(op_lo, mkexpr(old_m), mkexpr(old_d))); assign(new_m, binop(op_hi, mkexpr(old_m), mkexpr(old_d))); if (Q) { putQReg(dreg, mkexpr(new_d), condT); putQReg(mreg, mkexpr(new_m), condT); } else { putDRegI64(dreg, mkexpr(new_d), condT); putDRegI64(mreg, mkexpr(new_m), condT); } DIP("vzip.%d %c%u, %c%u\n", 8 << size, Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); return True; } else if (B == 8) { /* VMOVN */ IROp op; mreg >>= 1; switch (size) { case 0: op = Iop_NarrowUn16to8x8; break; case 1: op = Iop_NarrowUn32to16x4; break; case 2: op = Iop_NarrowUn64to32x2; break; case 3: return False; default: vassert(0); } putDRegI64(dreg, unop(op, getQReg(mreg)), condT); DIP("vmovn.i%d d%u, q%u\n", 16 << size, dreg, mreg); return True; } else if (B == 9 || (B >> 1) == 5) { /* VQMOVN, VQMOVUN */ IROp op, op2; IRTemp tmp; dreg = ((theInstr >> 18) & 0x10) | ((theInstr >> 12) & 0xF); mreg = ((theInstr >> 1) & 0x10) | (theInstr & 0xF); if (mreg & 1) return False; mreg >>= 1; switch (size) { case 0: op2 = Iop_NarrowUn16to8x8; break; case 1: op2 = Iop_NarrowUn32to16x4; break; case 2: op2 = Iop_NarrowUn64to32x2; break; case 3: return False; default: vassert(0); } switch (B & 3) { case 0: vassert(0); case 1: switch (size) { case 0: op = Iop_QNarrowUn16Sto8Ux8; break; case 1: op = Iop_QNarrowUn32Sto16Ux4; break; case 2: op = Iop_QNarrowUn64Sto32Ux2; break; case 3: return False; default: vassert(0); } DIP("vqmovun.s%d d%u, q%u\n", 16 << size, dreg, mreg); break; case 2: switch (size) { case 0: op = Iop_QNarrowUn16Sto8Sx8; break; case 1: op = Iop_QNarrowUn32Sto16Sx4; break; case 2: op = Iop_QNarrowUn64Sto32Sx2; break; case 3: return False; default: vassert(0); } DIP("vqmovn.s%d d%u, q%u\n", 16 << size, dreg, mreg); break; case 3: switch (size) { case 0: op = Iop_QNarrowUn16Uto8Ux8; break; case 1: op = Iop_QNarrowUn32Uto16Ux4; break; case 2: op = Iop_QNarrowUn64Uto32Ux2; break; case 3: return False; default: vassert(0); } DIP("vqmovn.u%d d%u, q%u\n", 16 << size, dreg, mreg); break; default: vassert(0); } res = newTemp(Ity_I64); tmp = newTemp(Ity_I64); assign(res, unop(op, getQReg(mreg))); assign(tmp, unop(op2, getQReg(mreg))); setFlag_QC(mkexpr(res), mkexpr(tmp), False, condT); putDRegI64(dreg, mkexpr(res), condT); return True; } else if (B == 12) { /* VSHLL (maximum shift) */ IROp op, cvt; UInt shift_imm; if (Q) return False; if (dreg & 1) return False; dreg >>= 1; shift_imm = 8 << size; res = newTemp(Ity_V128); switch (size) { case 0: op = Iop_ShlN16x8; cvt = Iop_Widen8Uto16x8; break; case 1: op = Iop_ShlN32x4; cvt = Iop_Widen16Uto32x4; break; case 2: op = Iop_ShlN64x2; cvt = Iop_Widen32Uto64x2; break; case 3: return False; default: vassert(0); } assign(res, binop(op, unop(cvt, getDRegI64(mreg)), mkU8(shift_imm))); putQReg(dreg, mkexpr(res), condT); DIP("vshll.i%d q%u, d%u, #%d\n", 8 << size, dreg, mreg, 8 << size); return True; } else if ((B >> 3) == 3 && (B & 3) == 0) { /* VCVT (half<->single) */ /* Half-precision extensions are needed to run this */ vassert(0); // ATC if (((theInstr >> 18) & 3) != 1) return False; if ((theInstr >> 8) & 1) { if (dreg & 1) return False; dreg >>= 1; putQReg(dreg, unop(Iop_F16toF32x4, getDRegI64(mreg)), condT); DIP("vcvt.f32.f16 q%u, d%u\n", dreg, mreg); } else { if (mreg & 1) return False; mreg >>= 1; putDRegI64(dreg, unop(Iop_F32toF16x4, 
getQReg(mreg)), condT); DIP("vcvt.f16.f32 d%u, q%u\n", dreg, mreg); } return True; } else { return False; } vassert(0); return True; case 3: if (((B >> 1) & BITS4(1,1,0,1)) == BITS4(1,0,0,0)) { /* VRECPE */ IROp op; F = (theInstr >> 8) & 1; if (size != 2) return False; if (Q) { op = F ? Iop_RecipEst32Fx4 : Iop_RecipEst32Ux4; putQReg(dreg, unop(op, getQReg(mreg)), condT); DIP("vrecpe.%c32 q%u, q%u\n", F ? 'f' : 'u', dreg, mreg); } else { op = F ? Iop_RecipEst32Fx2 : Iop_RecipEst32Ux2; putDRegI64(dreg, unop(op, getDRegI64(mreg)), condT); DIP("vrecpe.%c32 d%u, d%u\n", F ? 'f' : 'u', dreg, mreg); } return True; } else if (((B >> 1) & BITS4(1,1,0,1)) == BITS4(1,0,0,1)) { /* VRSQRTE */ IROp op; F = (B >> 2) & 1; if (size != 2) return False; if (F) { /* fp */ op = Q ? Iop_RSqrtEst32Fx4 : Iop_RSqrtEst32Fx2; } else { /* unsigned int */ op = Q ? Iop_RSqrtEst32Ux4 : Iop_RSqrtEst32Ux2; } if (Q) { putQReg(dreg, unop(op, getQReg(mreg)), condT); DIP("vrsqrte.%c32 q%u, q%u\n", F ? 'f' : 'u', dreg, mreg); } else { putDRegI64(dreg, unop(op, getDRegI64(mreg)), condT); DIP("vrsqrte.%c32 d%u, d%u\n", F ? 'f' : 'u', dreg, mreg); } return True; } else if ((B >> 3) == 3) { /* VCVT (fp<->integer) */ IROp op; if (size != 2) return False; switch ((B >> 1) & 3) { case 0: op = Q ? Iop_I32StoFx4 : Iop_I32StoFx2; DIP("vcvt.f32.s32 %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; case 1: op = Q ? Iop_I32UtoFx4 : Iop_I32UtoFx2; DIP("vcvt.f32.u32 %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; case 2: op = Q ? Iop_FtoI32Sx4_RZ : Iop_FtoI32Sx2_RZ; DIP("vcvt.s32.f32 %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; case 3: op = Q ? Iop_FtoI32Ux4_RZ : Iop_FtoI32Ux2_RZ; DIP("vcvt.u32.f32 %c%u, %c%u\n", Q ? 'q' : 'd', dreg, Q ? 'q' : 'd', mreg); break; default: vassert(0); } if (Q) { putQReg(dreg, unop(op, getQReg(mreg)), condT); } else { putDRegI64(dreg, unop(op, getDRegI64(mreg)), condT); } return True; } else { return False; } vassert(0); return True; default: vassert(0); } return False; } /* A7.4.6 One register and a modified immediate value */ static void ppNeonImm(UInt imm, UInt cmode, UInt op) { int i; switch (cmode) { case 0: case 1: case 8: case 9: vex_printf("0x%x", imm); break; case 2: case 3: case 10: case 11: vex_printf("0x%x00", imm); break; case 4: case 5: vex_printf("0x%x0000", imm); break; case 6: case 7: vex_printf("0x%x000000", imm); break; case 12: vex_printf("0x%xff", imm); break; case 13: vex_printf("0x%xffff", imm); break; case 14: if (op) { vex_printf("0x"); for (i = 7; i >= 0; i--) vex_printf("%s", (imm & (1 << i)) ? "ff" : "00"); } else { vex_printf("0x%x", imm); } break; case 15: vex_printf("0x%x", imm); break; } } static const char *ppNeonImmType(UInt cmode, UInt op) { switch (cmode) { case 0 ... 7: case 12: case 13: return "i32"; case 8 ... 11: return "i16"; case 14: if (op) return "i64"; else return "i8"; case 15: if (op) vassert(0); else return "f32"; default: vassert(0); } } static void DIPimm(UInt imm, UInt cmode, UInt op, const char *instr, UInt Q, UInt dreg) { if (vex_traceflags & VEX_TRACE_FE) { vex_printf("%s.%s %c%u, #", instr, ppNeonImmType(cmode, op), Q ? 
'q' : 'd', dreg); ppNeonImm(imm, cmode, op); vex_printf("\n"); } } static Bool dis_neon_data_1reg_and_imm ( UInt theInstr, IRTemp condT ) { UInt dreg = get_neon_d_regno(theInstr); ULong imm_raw = ((theInstr >> 17) & 0x80) | ((theInstr >> 12) & 0x70) | (theInstr & 0xf); ULong imm_raw_pp = imm_raw; UInt cmode = (theInstr >> 8) & 0xf; UInt op_bit = (theInstr >> 5) & 1; ULong imm = 0; UInt Q = (theInstr >> 6) & 1; int i, j; UInt tmp; IRExpr *imm_val; IRExpr *expr; IRTemp tmp_var; switch(cmode) { case 7: case 6: imm_raw = imm_raw << 8; /* fallthrough */ case 5: case 4: imm_raw = imm_raw << 8; /* fallthrough */ case 3: case 2: imm_raw = imm_raw << 8; /* fallthrough */ case 0: case 1: imm = (imm_raw << 32) | imm_raw; break; case 11: case 10: imm_raw = imm_raw << 8; /* fallthrough */ case 9: case 8: imm_raw = (imm_raw << 16) | imm_raw; imm = (imm_raw << 32) | imm_raw; break; case 13: imm_raw = (imm_raw << 8) | 0xff; /* fallthrough */ case 12: imm_raw = (imm_raw << 8) | 0xff; imm = (imm_raw << 32) | imm_raw; break; case 14: if (! op_bit) { for(i = 0; i < 8; i++) { imm = (imm << 8) | imm_raw; } } else { for(i = 7; i >= 0; i--) { tmp = 0; for(j = 0; j < 8; j++) { tmp = (tmp << 1) | ((imm_raw >> i) & 1); } imm = (imm << 8) | tmp; } } break; case 15: imm = (imm_raw & 0x80) << 5; imm |= ((~imm_raw & 0x40) << 5); for(i = 1; i <= 4; i++) imm |= (imm_raw & 0x40) << i; imm |= (imm_raw & 0x7f); imm = imm << 19; imm = (imm << 32) | imm; break; default: return False; } if (Q) { imm_val = binop(Iop_64HLtoV128, mkU64(imm), mkU64(imm)); } else { imm_val = mkU64(imm); } if (((op_bit == 0) && (((cmode & 9) == 0) || ((cmode & 13) == 8) || ((cmode & 12) == 12))) || ((op_bit == 1) && (cmode == 14))) { /* VMOV (immediate) */ if (Q) { putQReg(dreg, imm_val, condT); } else { putDRegI64(dreg, imm_val, condT); } DIPimm(imm_raw_pp, cmode, op_bit, "vmov", Q, dreg); return True; } if ((op_bit == 1) && (((cmode & 9) == 0) || ((cmode & 13) == 8) || ((cmode & 14) == 12))) { /* VMVN (immediate) */ if (Q) { putQReg(dreg, unop(Iop_NotV128, imm_val), condT); } else { putDRegI64(dreg, unop(Iop_Not64, imm_val), condT); } DIPimm(imm_raw_pp, cmode, op_bit, "vmvn", Q, dreg); return True; } if (Q) { tmp_var = newTemp(Ity_V128); assign(tmp_var, getQReg(dreg)); } else { tmp_var = newTemp(Ity_I64); assign(tmp_var, getDRegI64(dreg)); } if ((op_bit == 0) && (((cmode & 9) == 1) || ((cmode & 13) == 9))) { /* VORR (immediate) */ if (Q) expr = binop(Iop_OrV128, mkexpr(tmp_var), imm_val); else expr = binop(Iop_Or64, mkexpr(tmp_var), imm_val); DIPimm(imm_raw_pp, cmode, op_bit, "vorr", Q, dreg); } else if ((op_bit == 1) && (((cmode & 9) == 1) || ((cmode & 13) == 9))) { /* VBIC (immediate) */ if (Q) expr = binop(Iop_AndV128, mkexpr(tmp_var), unop(Iop_NotV128, imm_val)); else expr = binop(Iop_And64, mkexpr(tmp_var), unop(Iop_Not64, imm_val)); DIPimm(imm_raw_pp, cmode, op_bit, "vbic", Q, dreg); } else { return False; } if (Q) putQReg(dreg, expr, condT); else putDRegI64(dreg, expr, condT); return True; } /* A7.4 Advanced SIMD data-processing instructions */ static Bool dis_neon_data_processing ( UInt theInstr, IRTemp condT ) { UInt A = (theInstr >> 19) & 0x1F; UInt B = (theInstr >> 8) & 0xF; UInt C = (theInstr >> 4) & 0xF; UInt U = (theInstr >> 24) & 0x1; if (! 
(A & 0x10)) { return dis_neon_data_3same(theInstr, condT); } if (((A & 0x17) == 0x10) && ((C & 0x9) == 0x1)) { return dis_neon_data_1reg_and_imm(theInstr, condT); } if ((C & 1) == 1) { return dis_neon_data_2reg_and_shift(theInstr, condT); } if (((C & 5) == 0) && (((A & 0x14) == 0x10) || ((A & 0x16) == 0x14))) { return dis_neon_data_3diff(theInstr, condT); } if (((C & 5) == 4) && (((A & 0x14) == 0x10) || ((A & 0x16) == 0x14))) { return dis_neon_data_2reg_and_scalar(theInstr, condT); } if ((A & 0x16) == 0x16) { if ((U == 0) && ((C & 1) == 0)) { return dis_neon_vext(theInstr, condT); } if ((U != 1) || ((C & 1) == 1)) return False; if ((B & 8) == 0) { return dis_neon_data_2reg_misc(theInstr, condT); } if ((B & 12) == 8) { return dis_neon_vtb(theInstr, condT); } if ((B == 12) && ((C & 9) == 0)) { return dis_neon_vdup(theInstr, condT); } } return False; } /*------------------------------------------------------------*/ /*--- NEON loads and stores ---*/ /*------------------------------------------------------------*/ /* For NEON memory operations, we use the standard scheme to handle conditionalisation: generate a jump around the instruction if the condition is false. That's only necessary in Thumb mode, however, since in ARM mode NEON instructions are unconditional. */ /* A helper function for what follows. It assumes we already went uncond as per comments at the top of this section. */ static void mk_neon_elem_load_to_one_lane( UInt rD, UInt inc, UInt index, UInt N, UInt size, IRTemp addr ) { UInt i; switch (size) { case 0: putDRegI64(rD, triop(Iop_SetElem8x8, getDRegI64(rD), mkU8(index), loadLE(Ity_I8, mkexpr(addr))), IRTemp_INVALID); break; case 1: putDRegI64(rD, triop(Iop_SetElem16x4, getDRegI64(rD), mkU8(index), loadLE(Ity_I16, mkexpr(addr))), IRTemp_INVALID); break; case 2: putDRegI64(rD, triop(Iop_SetElem32x2, getDRegI64(rD), mkU8(index), loadLE(Ity_I32, mkexpr(addr))), IRTemp_INVALID); break; default: vassert(0); } for (i = 1; i <= N; i++) { switch (size) { case 0: putDRegI64(rD + i * inc, triop(Iop_SetElem8x8, getDRegI64(rD + i * inc), mkU8(index), loadLE(Ity_I8, binop(Iop_Add32, mkexpr(addr), mkU32(i * 1)))), IRTemp_INVALID); break; case 1: putDRegI64(rD + i * inc, triop(Iop_SetElem16x4, getDRegI64(rD + i * inc), mkU8(index), loadLE(Ity_I16, binop(Iop_Add32, mkexpr(addr), mkU32(i * 2)))), IRTemp_INVALID); break; case 2: putDRegI64(rD + i * inc, triop(Iop_SetElem32x2, getDRegI64(rD + i * inc), mkU8(index), loadLE(Ity_I32, binop(Iop_Add32, mkexpr(addr), mkU32(i * 4)))), IRTemp_INVALID); break; default: vassert(0); } } } /* A(nother) helper function for what follows. It assumes we already went uncond as per comments at the top of this section. 
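As a concrete sketch of what the generated IR amounts to (this is
only a restatement of the loop below, for one case): for
vst3.16 {d0[2],d1[2],d2[2]}, [r1], we have rD=0, inc=1, index=2,
N=2, size=1, and the code stores
   GetElem16x4(d0, 2)  at  r1+0
   GetElem16x4(d1, 2)  at  r1+2
   GetElem16x4(d2, 2)  at  r1+4
that is, lane 2 of the N+1 registers goes to consecutive
element-sized slots starting at the transfer address.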
*/ static void mk_neon_elem_store_from_one_lane( UInt rD, UInt inc, UInt index, UInt N, UInt size, IRTemp addr ) { UInt i; switch (size) { case 0: storeLE(mkexpr(addr), binop(Iop_GetElem8x8, getDRegI64(rD), mkU8(index))); break; case 1: storeLE(mkexpr(addr), binop(Iop_GetElem16x4, getDRegI64(rD), mkU8(index))); break; case 2: storeLE(mkexpr(addr), binop(Iop_GetElem32x2, getDRegI64(rD), mkU8(index))); break; default: vassert(0); } for (i = 1; i <= N; i++) { switch (size) { case 0: storeLE(binop(Iop_Add32, mkexpr(addr), mkU32(i * 1)), binop(Iop_GetElem8x8, getDRegI64(rD + i * inc), mkU8(index))); break; case 1: storeLE(binop(Iop_Add32, mkexpr(addr), mkU32(i * 2)), binop(Iop_GetElem16x4, getDRegI64(rD + i * inc), mkU8(index))); break; case 2: storeLE(binop(Iop_Add32, mkexpr(addr), mkU32(i * 4)), binop(Iop_GetElem32x2, getDRegI64(rD + i * inc), mkU8(index))); break; default: vassert(0); } } } /* Generate 2x64 -> 2x64 deinterleave code, for VLD2. Caller must make *u0 and *u1 be valid IRTemps before the call. */ static void math_DEINTERLEAVE_2 (/*OUT*/IRTemp* u0, /*OUT*/IRTemp* u1, IRTemp i0, IRTemp i1, Int laneszB) { /* The following assumes that the guest is little endian, and hence that the memory-side (interleaved) data is stored little-endianly. */ vassert(u0 && u1); /* This is pretty easy, since we have primitives directly to hand. */ if (laneszB == 4) { // memLE(128 bits) == A0 B0 A1 B1 // i0 == B0 A0, i1 == B1 A1 // u0 == A1 A0, u1 == B1 B0 assign(*u0, binop(Iop_InterleaveLO32x2, mkexpr(i1), mkexpr(i0))); assign(*u1, binop(Iop_InterleaveHI32x2, mkexpr(i1), mkexpr(i0))); } else if (laneszB == 2) { // memLE(128 bits) == A0 B0 A1 B1 A2 B2 A3 B3 // i0 == B1 A1 B0 A0, i1 == B3 A3 B2 A2 // u0 == A3 A2 A1 A0, u1 == B3 B2 B1 B0 assign(*u0, binop(Iop_CatEvenLanes16x4, mkexpr(i1), mkexpr(i0))); assign(*u1, binop(Iop_CatOddLanes16x4, mkexpr(i1), mkexpr(i0))); } else if (laneszB == 1) { // memLE(128 bits) == A0 B0 A1 B1 A2 B2 A3 B3 A4 B4 A5 B5 A6 B6 A7 B7 // i0 == B3 A3 B2 A2 B1 A1 B0 A0, i1 == B7 A7 B6 A6 B5 A5 B4 A4 // u0 == A7 A6 A5 A4 A3 A2 A1 A0, u1 == B7 B6 B5 B4 B3 B2 B1 B0 assign(*u0, binop(Iop_CatEvenLanes8x8, mkexpr(i1), mkexpr(i0))); assign(*u1, binop(Iop_CatOddLanes8x8, mkexpr(i1), mkexpr(i0))); } else { // Can never happen, since VLD2 only has valid lane widths of 32, // 16 or 8 bits. vpanic("math_DEINTERLEAVE_2"); } } /* Generate 2x64 -> 2x64 interleave code, for VST2. Caller must make *u0 and *u1 be valid IRTemps before the call. */ static void math_INTERLEAVE_2 (/*OUT*/IRTemp* i0, /*OUT*/IRTemp* i1, IRTemp u0, IRTemp u1, Int laneszB) { /* The following assumes that the guest is little endian, and hence that the memory-side (interleaved) data is stored little-endianly. */ vassert(i0 && i1); /* This is pretty easy, since we have primitives directly to hand. 
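For example, restating the laneszB == 2 case below with concrete
lanes: given u0 == A3 A2 A1 A0 and u1 == B3 B2 B1 B0,
Iop_InterleaveLO16x4(u1, u0) pairs up the low halves of its two
arguments to give B1 A1 B0 A0, which is exactly the memory-order
vector i0; Iop_InterleaveHI16x4 does the same with the high
halves, giving i1 == B3 A3 B2 A2.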
*/ if (laneszB == 4) { // memLE(128 bits) == A0 B0 A1 B1 // i0 == B0 A0, i1 == B1 A1 // u0 == A1 A0, u1 == B1 B0 assign(*i0, binop(Iop_InterleaveLO32x2, mkexpr(u1), mkexpr(u0))); assign(*i1, binop(Iop_InterleaveHI32x2, mkexpr(u1), mkexpr(u0))); } else if (laneszB == 2) { // memLE(128 bits) == A0 B0 A1 B1 A2 B2 A3 B3 // i0 == B1 A1 B0 A0, i1 == B3 A3 B2 A2 // u0 == A3 A2 A1 A0, u1 == B3 B2 B1 B0 assign(*i0, binop(Iop_InterleaveLO16x4, mkexpr(u1), mkexpr(u0))); assign(*i1, binop(Iop_InterleaveHI16x4, mkexpr(u1), mkexpr(u0))); } else if (laneszB == 1) { // memLE(128 bits) == A0 B0 A1 B1 A2 B2 A3 B3 A4 B4 A5 B5 A6 B6 A7 B7 // i0 == B3 A3 B2 A2 B1 A1 B0 A0, i1 == B7 A7 B6 A6 B5 A5 B4 A4 // u0 == A7 A6 A5 A4 A3 A2 A1 A0, u1 == B7 B6 B5 B4 B3 B2 B1 B0 assign(*i0, binop(Iop_InterleaveLO8x8, mkexpr(u1), mkexpr(u0))); assign(*i1, binop(Iop_InterleaveHI8x8, mkexpr(u1), mkexpr(u0))); } else { // Can never happen, since VST2 only has valid lane widths of 32, // 16 or 8 bits. vpanic("math_INTERLEAVE_2"); } } // Helper function for generating arbitrary slicing 'n' dicing of // 3 8x8 vectors, as needed for VLD3.8 and VST3.8. static IRExpr* math_PERM_8x8x3(const UChar* desc, IRTemp s0, IRTemp s1, IRTemp s2) { // desc is an array of 8 pairs, encoded as 16 bytes, // that describe how to assemble the result lanes, starting with // lane 7. Each pair is: first component (0..2) says which of // s0/s1/s2 to use. Second component (0..7) is the lane number // in the source to use. UInt si; for (si = 0; si < 7; si++) { vassert(desc[2 * si + 0] <= 2); vassert(desc[2 * si + 1] <= 7); } IRTemp h3 = newTemp(Ity_I64); IRTemp h2 = newTemp(Ity_I64); IRTemp h1 = newTemp(Ity_I64); IRTemp h0 = newTemp(Ity_I64); IRTemp srcs[3] = {s0, s1, s2}; # define SRC_VEC(_lane) mkexpr(srcs[desc[2 * (7-(_lane)) + 0]]) # define SRC_SHIFT(_lane) mkU8(56-8*(desc[2 * (7-(_lane)) + 1])) assign(h3, binop(Iop_InterleaveHI8x8, binop(Iop_Shl64, SRC_VEC(7), SRC_SHIFT(7)), binop(Iop_Shl64, SRC_VEC(6), SRC_SHIFT(6)))); assign(h2, binop(Iop_InterleaveHI8x8, binop(Iop_Shl64, SRC_VEC(5), SRC_SHIFT(5)), binop(Iop_Shl64, SRC_VEC(4), SRC_SHIFT(4)))); assign(h1, binop(Iop_InterleaveHI8x8, binop(Iop_Shl64, SRC_VEC(3), SRC_SHIFT(3)), binop(Iop_Shl64, SRC_VEC(2), SRC_SHIFT(2)))); assign(h0, binop(Iop_InterleaveHI8x8, binop(Iop_Shl64, SRC_VEC(1), SRC_SHIFT(1)), binop(Iop_Shl64, SRC_VEC(0), SRC_SHIFT(0)))); # undef SRC_VEC # undef SRC_SHIFT // Now h3..h0 are 64 bit vectors with useful information only // in the top 16 bits. We now concatentate those four 16-bit // groups so as to produce the final result. IRTemp w1 = newTemp(Ity_I64); IRTemp w0 = newTemp(Ity_I64); assign(w1, binop(Iop_InterleaveHI16x4, mkexpr(h3), mkexpr(h2))); assign(w0, binop(Iop_InterleaveHI16x4, mkexpr(h1), mkexpr(h0))); return binop(Iop_InterleaveHI32x2, mkexpr(w1), mkexpr(w0)); } /* Generate 3x64 -> 3x64 deinterleave code, for VLD3. Caller must make *u0, *u1 and *u2 be valid IRTemps before the call. */ static void math_DEINTERLEAVE_3 ( /*OUT*/IRTemp* u0, /*OUT*/IRTemp* u1, /*OUT*/IRTemp* u2, IRTemp i0, IRTemp i1, IRTemp i2, Int laneszB ) { # define IHI32x2(_e1, _e2) binop(Iop_InterleaveHI32x2, (_e1), (_e2)) # define IHI16x4(_e1, _e2) binop(Iop_InterleaveHI16x4, (_e1), (_e2)) # define SHL64(_tmp, _amt) binop(Iop_Shl64, mkexpr(_tmp), mkU8(_amt)) /* The following assumes that the guest is little endian, and hence that the memory-side (interleaved) data is stored little-endianly. 
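The idiom used below is worth spelling out once: SHL64 moves the
wanted lane to the top of a 64-bit value, and IHI32x2 then
concatenates the top halves of its two arguments.  So in the
laneszB == 4 case, where i0 == B0 A0 and i1 == A1 C0, SHL64(i0, 32)
has A0 on top and SHL64(i1, 0) has A1 on top, hence
IHI32x2(SHL64(i1, 0), SHL64(i0, 32)) == A1 A0 == u0.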
*/ vassert(u0 && u1 && u2); if (laneszB == 4) { // memLE(192 bits) == A0 B0 C0 A1 B1 C1 // i0 == B0 A0, i1 == A1 C0, i2 == C1 B1 // u0 == A1 A0, u1 == B1 B0, u2 == C1 C0 assign(*u0, IHI32x2(SHL64(i1, 0), SHL64(i0, 32))); assign(*u1, IHI32x2(SHL64(i2, 32), SHL64(i0, 0))); assign(*u2, IHI32x2(SHL64(i2, 0), SHL64(i1, 32))); } else if (laneszB == 2) { // memLE(192 bits) == A0 B0 C0 A1, B1 C1 A2 B2, C2 A3 B3 C3 // i0 == A1 C0 B0 A0, i1 == B2 A2 C1 B1, i2 == C3 B3 A3 C2 // u0 == A3 A2 A1 A0, u1 == B3 B2 B1 B0, u2 == C3 C2 C1 C0 # define XXX(_tmp3,_la3,_tmp2,_la2,_tmp1,_la1,_tmp0,_la0) \ IHI32x2( \ IHI16x4(SHL64((_tmp3),48-16*(_la3)), \ SHL64((_tmp2),48-16*(_la2))), \ IHI16x4(SHL64((_tmp1),48-16*(_la1)), \ SHL64((_tmp0),48-16*(_la0)))) assign(*u0, XXX(i2,1, i1,2, i0,3, i0,0)); assign(*u1, XXX(i2,2, i1,3, i1,0, i0,1)); assign(*u2, XXX(i2,3, i2,0, i1,1, i0,2)); # undef XXX } else if (laneszB == 1) { // These describe how the result vectors [7..0] are // assembled from the source vectors. Each pair is // (source vector number, lane number). static const UChar de0[16] = {2,5, 2,2, 1,7, 1,4, 1,1, 0,6, 0,3, 0,0}; static const UChar de1[16] = {2,6, 2,3, 2,0, 1,5, 1,2, 0,7, 0,4, 0,1}; static const UChar de2[16] = {2,7, 2,4, 2,1, 1,6, 1,3, 1,0, 0,5, 0,2}; assign(*u0, math_PERM_8x8x3(de0, i0, i1, i2)); assign(*u1, math_PERM_8x8x3(de1, i0, i1, i2)); assign(*u2, math_PERM_8x8x3(de2, i0, i1, i2)); } else { // Can never happen, since VLD3 only has valid lane widths of 32, // 16 or 8 bits. vpanic("math_DEINTERLEAVE_3"); } # undef SHL64 # undef IHI16x4 # undef IHI32x2 } /* Generate 3x64 -> 3x64 interleave code, for VST3. Caller must make *i0, *i1 and *i2 be valid IRTemps before the call. */ static void math_INTERLEAVE_3 ( /*OUT*/IRTemp* i0, /*OUT*/IRTemp* i1, /*OUT*/IRTemp* i2, IRTemp u0, IRTemp u1, IRTemp u2, Int laneszB ) { # define IHI32x2(_e1, _e2) binop(Iop_InterleaveHI32x2, (_e1), (_e2)) # define IHI16x4(_e1, _e2) binop(Iop_InterleaveHI16x4, (_e1), (_e2)) # define SHL64(_tmp, _amt) binop(Iop_Shl64, mkexpr(_tmp), mkU8(_amt)) /* The following assumes that the guest is little endian, and hence that the memory-side (interleaved) data is stored little-endianly. */ vassert(i0 && i1 && i2); if (laneszB == 4) { // memLE(192 bits) == A0 B0 C0 A1 B1 C1 // i0 == B0 A0, i1 == A1 C0, i2 == C1 B1 // u0 == A1 A0, u1 == B1 B0, u2 == C1 C0 assign(*i0, IHI32x2(SHL64(u1, 32), SHL64(u0, 32))); assign(*i1, IHI32x2(SHL64(u0, 0), SHL64(u2, 32))); assign(*i2, IHI32x2(SHL64(u2, 0), SHL64(u1, 0))); } else if (laneszB == 2) { // memLE(192 bits) == A0 B0 C0 A1, B1 C1 A2 B2, C2 A3 B3 C3 // i0 == A1 C0 B0 A0, i1 == B2 A2 C1 B1, i2 == C3 B3 A3 C2 // u0 == A3 A2 A1 A0, u1 == B3 B2 B1 B0, u2 == C3 C2 C1 C0 # define XXX(_tmp3,_la3,_tmp2,_la2,_tmp1,_la1,_tmp0,_la0) \ IHI32x2( \ IHI16x4(SHL64((_tmp3),48-16*(_la3)), \ SHL64((_tmp2),48-16*(_la2))), \ IHI16x4(SHL64((_tmp1),48-16*(_la1)), \ SHL64((_tmp0),48-16*(_la0)))) assign(*i0, XXX(u0,1, u2,0, u1,0, u0,0)); assign(*i1, XXX(u1,2, u0,2, u2,1, u1,1)); assign(*i2, XXX(u2,3, u1,3, u0,3, u2,2)); # undef XXX } else if (laneszB == 1) { // These describe how the result vectors [7..0] are // assembled from the source vectors. Each pair is // (source vector number, lane number). 
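// For instance, reading in0 below from lane 7 down to lane 0:
// {1,2} = u1[2] = B2, {0,2} = A2, {2,1} = C1, {1,1} = B1,
// {0,1} = A1, {2,0} = C0, {1,0} = B0, {0,0} = A0, so i0 comes out
// as B2 A2 C1 B1 A1 C0 B0 A0, the first 64 bits of the interleaved
// memory image A0 B0 C0 A1 B1 C1 A2 B2 ...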
static const UChar in0[16] = {1,2, 0,2, 2,1, 1,1, 0,1, 2,0, 1,0, 0,0}; static const UChar in1[16] = {0,5, 2,4, 1,4, 0,4, 2,3, 1,3, 0,3, 2,2}; static const UChar in2[16] = {2,7, 1,7, 0,7, 2,6, 1,6, 0,6, 2,5, 1,5}; assign(*i0, math_PERM_8x8x3(in0, u0, u1, u2)); assign(*i1, math_PERM_8x8x3(in1, u0, u1, u2)); assign(*i2, math_PERM_8x8x3(in2, u0, u1, u2)); } else { // Can never happen, since VST3 only has valid lane widths of 32, // 16 or 8 bits. vpanic("math_INTERLEAVE_3"); } # undef SHL64 # undef IHI16x4 # undef IHI32x2 } /* Generate 4x64 -> 4x64 deinterleave code, for VLD4. Caller must make *u0, *u1, *u2 and *u3 be valid IRTemps before the call. */ static void math_DEINTERLEAVE_4 ( /*OUT*/IRTemp* u0, /*OUT*/IRTemp* u1, /*OUT*/IRTemp* u2, /*OUT*/IRTemp* u3, IRTemp i0, IRTemp i1, IRTemp i2, IRTemp i3, Int laneszB ) { # define IHI32x2(_t1, _t2) \ binop(Iop_InterleaveHI32x2, mkexpr(_t1), mkexpr(_t2)) # define ILO32x2(_t1, _t2) \ binop(Iop_InterleaveLO32x2, mkexpr(_t1), mkexpr(_t2)) # define IHI16x4(_t1, _t2) \ binop(Iop_InterleaveHI16x4, mkexpr(_t1), mkexpr(_t2)) # define ILO16x4(_t1, _t2) \ binop(Iop_InterleaveLO16x4, mkexpr(_t1), mkexpr(_t2)) # define IHI8x8(_t1, _e2) \ binop(Iop_InterleaveHI8x8, mkexpr(_t1), _e2) # define SHL64(_tmp, _amt) \ binop(Iop_Shl64, mkexpr(_tmp), mkU8(_amt)) /* The following assumes that the guest is little endian, and hence that the memory-side (interleaved) data is stored little-endianly. */ vassert(u0 && u1 && u2 && u3); if (laneszB == 4) { assign(*u0, ILO32x2(i2, i0)); assign(*u1, IHI32x2(i2, i0)); assign(*u2, ILO32x2(i3, i1)); assign(*u3, IHI32x2(i3, i1)); } else if (laneszB == 2) { IRTemp b1b0a1a0 = newTemp(Ity_I64); IRTemp b3b2a3a2 = newTemp(Ity_I64); IRTemp d1d0c1c0 = newTemp(Ity_I64); IRTemp d3d2c3c2 = newTemp(Ity_I64); assign(b1b0a1a0, ILO16x4(i1, i0)); assign(b3b2a3a2, ILO16x4(i3, i2)); assign(d1d0c1c0, IHI16x4(i1, i0)); assign(d3d2c3c2, IHI16x4(i3, i2)); // And now do what we did for the 32-bit case. assign(*u0, ILO32x2(b3b2a3a2, b1b0a1a0)); assign(*u1, IHI32x2(b3b2a3a2, b1b0a1a0)); assign(*u2, ILO32x2(d3d2c3c2, d1d0c1c0)); assign(*u3, IHI32x2(d3d2c3c2, d1d0c1c0)); } else if (laneszB == 1) { // Deinterleave into 16-bit chunks, then do as the 16-bit case. IRTemp i0x = newTemp(Ity_I64); IRTemp i1x = newTemp(Ity_I64); IRTemp i2x = newTemp(Ity_I64); IRTemp i3x = newTemp(Ity_I64); assign(i0x, IHI8x8(i0, SHL64(i0, 32))); assign(i1x, IHI8x8(i1, SHL64(i1, 32))); assign(i2x, IHI8x8(i2, SHL64(i2, 32))); assign(i3x, IHI8x8(i3, SHL64(i3, 32))); // From here on is like the 16 bit case. IRTemp b1b0a1a0 = newTemp(Ity_I64); IRTemp b3b2a3a2 = newTemp(Ity_I64); IRTemp d1d0c1c0 = newTemp(Ity_I64); IRTemp d3d2c3c2 = newTemp(Ity_I64); assign(b1b0a1a0, ILO16x4(i1x, i0x)); assign(b3b2a3a2, ILO16x4(i3x, i2x)); assign(d1d0c1c0, IHI16x4(i1x, i0x)); assign(d3d2c3c2, IHI16x4(i3x, i2x)); // And now do what we did for the 32-bit case. assign(*u0, ILO32x2(b3b2a3a2, b1b0a1a0)); assign(*u1, IHI32x2(b3b2a3a2, b1b0a1a0)); assign(*u2, ILO32x2(d3d2c3c2, d1d0c1c0)); assign(*u3, IHI32x2(d3d2c3c2, d1d0c1c0)); } else { // Can never happen, since VLD4 only has valid lane widths of 32, // 16 or 8 bits. vpanic("math_DEINTERLEAVE_4"); } # undef SHL64 # undef IHI8x8 # undef ILO16x4 # undef IHI16x4 # undef ILO32x2 # undef IHI32x2 } /* Generate 4x64 -> 4x64 interleave code, for VST4. Caller must make *i0, *i1, *i2 and *i3 be valid IRTemps before the call. 
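The 32-bit-lane case needs just one interleave per output: with
u0 == A1 A0, u1 == B1 B0, u2 == C1 C0, u3 == D1 D0 and memory image
A0 B0 C0 D0 A1 B1 C1 D1, we need i0 == B0 A0, i1 == D0 C0,
i2 == B1 A1, i3 == D1 C1, which is exactly ILO32x2 and IHI32x2
applied to (u1,u0) and (u3,u2), as generated below.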
*/ static void math_INTERLEAVE_4 ( /*OUT*/IRTemp* i0, /*OUT*/IRTemp* i1, /*OUT*/IRTemp* i2, /*OUT*/IRTemp* i3, IRTemp u0, IRTemp u1, IRTemp u2, IRTemp u3, Int laneszB ) { # define IHI32x2(_t1, _t2) \ binop(Iop_InterleaveHI32x2, mkexpr(_t1), mkexpr(_t2)) # define ILO32x2(_t1, _t2) \ binop(Iop_InterleaveLO32x2, mkexpr(_t1), mkexpr(_t2)) # define CEV16x4(_t1, _t2) \ binop(Iop_CatEvenLanes16x4, mkexpr(_t1), mkexpr(_t2)) # define COD16x4(_t1, _t2) \ binop(Iop_CatOddLanes16x4, mkexpr(_t1), mkexpr(_t2)) # define COD8x8(_t1, _e2) \ binop(Iop_CatOddLanes8x8, mkexpr(_t1), _e2) # define SHL64(_tmp, _amt) \ binop(Iop_Shl64, mkexpr(_tmp), mkU8(_amt)) /* The following assumes that the guest is little endian, and hence that the memory-side (interleaved) data is stored little-endianly. */ vassert(u0 && u1 && u2 && u3); if (laneszB == 4) { assign(*i0, ILO32x2(u1, u0)); assign(*i1, ILO32x2(u3, u2)); assign(*i2, IHI32x2(u1, u0)); assign(*i3, IHI32x2(u3, u2)); } else if (laneszB == 2) { // First, interleave at the 32-bit lane size. IRTemp b1b0a1a0 = newTemp(Ity_I64); IRTemp b3b2a3a2 = newTemp(Ity_I64); IRTemp d1d0c1c0 = newTemp(Ity_I64); IRTemp d3d2c3c2 = newTemp(Ity_I64); assign(b1b0a1a0, ILO32x2(u1, u0)); assign(b3b2a3a2, IHI32x2(u1, u0)); assign(d1d0c1c0, ILO32x2(u3, u2)); assign(d3d2c3c2, IHI32x2(u3, u2)); // And interleave (cat) at the 16 bit size. assign(*i0, CEV16x4(d1d0c1c0, b1b0a1a0)); assign(*i1, COD16x4(d1d0c1c0, b1b0a1a0)); assign(*i2, CEV16x4(d3d2c3c2, b3b2a3a2)); assign(*i3, COD16x4(d3d2c3c2, b3b2a3a2)); } else if (laneszB == 1) { // First, interleave at the 32-bit lane size. IRTemp b1b0a1a0 = newTemp(Ity_I64); IRTemp b3b2a3a2 = newTemp(Ity_I64); IRTemp d1d0c1c0 = newTemp(Ity_I64); IRTemp d3d2c3c2 = newTemp(Ity_I64); assign(b1b0a1a0, ILO32x2(u1, u0)); assign(b3b2a3a2, IHI32x2(u1, u0)); assign(d1d0c1c0, ILO32x2(u3, u2)); assign(d3d2c3c2, IHI32x2(u3, u2)); // And interleave (cat) at the 16 bit size. IRTemp i0x = newTemp(Ity_I64); IRTemp i1x = newTemp(Ity_I64); IRTemp i2x = newTemp(Ity_I64); IRTemp i3x = newTemp(Ity_I64); assign(i0x, CEV16x4(d1d0c1c0, b1b0a1a0)); assign(i1x, COD16x4(d1d0c1c0, b1b0a1a0)); assign(i2x, CEV16x4(d3d2c3c2, b3b2a3a2)); assign(i3x, COD16x4(d3d2c3c2, b3b2a3a2)); // And rearrange within each word, to get the right 8 bit lanes. assign(*i0, COD8x8(i0x, SHL64(i0x, 8))); assign(*i1, COD8x8(i1x, SHL64(i1x, 8))); assign(*i2, COD8x8(i2x, SHL64(i2x, 8))); assign(*i3, COD8x8(i3x, SHL64(i3x, 8))); } else { // Can never happen, since VLD4 only has valid lane widths of 32, // 16 or 8 bits. vpanic("math_DEINTERLEAVE_4"); } # undef SHL64 # undef COD8x8 # undef COD16x4 # undef CEV16x4 # undef ILO32x2 # undef IHI32x2 } /* A7.7 Advanced SIMD element or structure load/store instructions */ static Bool dis_neon_load_or_store ( UInt theInstr, Bool isT, IRTemp condT ) { # define INSN(_bMax,_bMin) SLICE_UInt(theInstr, (_bMax), (_bMin)) UInt bA = INSN(23,23); UInt fB = INSN(11,8); UInt bL = INSN(21,21); UInt rD = (INSN(22,22) << 4) | INSN(15,12); UInt rN = INSN(19,16); UInt rM = INSN(3,0); UInt N, size, i, j; UInt inc; UInt regs = 1; if (isT) { vassert(condT != IRTemp_INVALID); } else { vassert(condT == IRTemp_INVALID); } /* So now, if condT is not IRTemp_INVALID, we know we're dealing with Thumb code. */ if (INSN(20,20) != 0) return False; IRTemp initialRn = newTemp(Ity_I32); assign(initialRn, isT ? getIRegT(rN) : getIRegA(rN)); IRTemp initialRm = newTemp(Ity_I32); assign(initialRm, isT ? 
getIRegT(rM) : getIRegA(rM)); /* There are 3 cases: (1) VSTn / VLDn (n-element structure from/to one lane) (2) VLDn (single element to all lanes) (3) VSTn / VLDn (multiple n-element structures) */ if (bA) { N = fB & 3; if ((fB >> 2) < 3) { /* ------------ Case (1) ------------ VSTn / VLDn (n-element structure from/to one lane) */ size = fB >> 2; switch (size) { case 0: i = INSN(7,5); inc = 1; break; case 1: i = INSN(7,6); inc = INSN(5,5) ? 2 : 1; break; case 2: i = INSN(7,7); inc = INSN(6,6) ? 2 : 1; break; case 3: return False; default: vassert(0); } IRTemp addr = newTemp(Ity_I32); assign(addr, mkexpr(initialRn)); // go uncond if (condT != IRTemp_INVALID) mk_skip_over_T32_if_cond_is_false(condT); // now uncond if (bL) mk_neon_elem_load_to_one_lane(rD, inc, i, N, size, addr); else mk_neon_elem_store_from_one_lane(rD, inc, i, N, size, addr); DIP("v%s%u.%d {", bL ? "ld" : "st", N + 1, 8 << size); for (j = 0; j <= N; j++) { if (j) DIP(", "); DIP("d%u[%u]", rD + j * inc, i); } DIP("}, [r%u]", rN); if (rM != 13 && rM != 15) { DIP(", r%u\n", rM); } else { DIP("%s\n", (rM != 15) ? "!" : ""); } } else { /* ------------ Case (2) ------------ VLDn (single element to all lanes) */ UInt r; if (bL == 0) return False; inc = INSN(5,5) + 1; size = INSN(7,6); /* size == 3 and size == 2 cases differ in alignment constraints */ if (size == 3 && N == 3 && INSN(4,4) == 1) size = 2; if (size == 0 && N == 0 && INSN(4,4) == 1) return False; if (N == 2 && INSN(4,4) == 1) return False; if (size == 3) return False; // go uncond if (condT != IRTemp_INVALID) mk_skip_over_T32_if_cond_is_false(condT); // now uncond IRTemp addr = newTemp(Ity_I32); assign(addr, mkexpr(initialRn)); if (N == 0 && INSN(5,5)) regs = 2; for (r = 0; r < regs; r++) { switch (size) { case 0: putDRegI64(rD + r, unop(Iop_Dup8x8, loadLE(Ity_I8, mkexpr(addr))), IRTemp_INVALID); break; case 1: putDRegI64(rD + r, unop(Iop_Dup16x4, loadLE(Ity_I16, mkexpr(addr))), IRTemp_INVALID); break; case 2: putDRegI64(rD + r, unop(Iop_Dup32x2, loadLE(Ity_I32, mkexpr(addr))), IRTemp_INVALID); break; default: vassert(0); } for (i = 1; i <= N; i++) { switch (size) { case 0: putDRegI64(rD + r + i * inc, unop(Iop_Dup8x8, loadLE(Ity_I8, binop(Iop_Add32, mkexpr(addr), mkU32(i * 1)))), IRTemp_INVALID); break; case 1: putDRegI64(rD + r + i * inc, unop(Iop_Dup16x4, loadLE(Ity_I16, binop(Iop_Add32, mkexpr(addr), mkU32(i * 2)))), IRTemp_INVALID); break; case 2: putDRegI64(rD + r + i * inc, unop(Iop_Dup32x2, loadLE(Ity_I32, binop(Iop_Add32, mkexpr(addr), mkU32(i * 4)))), IRTemp_INVALID); break; default: vassert(0); } } } DIP("vld%u.%d {", N + 1, 8 << size); for (r = 0; r < regs; r++) { for (i = 0; i <= N; i++) { if (i || r) DIP(", "); DIP("d%u[]", rD + r + i * inc); } } DIP("}, [r%u]", rN); if (rM != 13 && rM != 15) { DIP(", r%u\n", rM); } else { DIP("%s\n", (rM != 15) ? "!" : ""); } } /* Writeback. We're uncond here, so no condT-ing. 
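The rM field selects the addressing variant: rM == 15 means no
writeback at all, rM == 13 means post-increment rN by the transfer
size, which is (1 << size) * (N + 1) bytes here (e.g. 6 bytes for a
vld3.16 to one lane), and any other rM means rN is advanced by the
value of rM (register post-index).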
*/ if (rM != 15) { if (rM == 13) { IRExpr* e = binop(Iop_Add32, mkexpr(initialRn), mkU32((1 << size) * (N + 1))); if (isT) putIRegT(rN, e, IRTemp_INVALID); else putIRegA(rN, e, IRTemp_INVALID, Ijk_Boring); } else { IRExpr* e = binop(Iop_Add32, mkexpr(initialRn), mkexpr(initialRm)); if (isT) putIRegT(rN, e, IRTemp_INVALID); else putIRegA(rN, e, IRTemp_INVALID, Ijk_Boring); } } return True; } else { /* ------------ Case (3) ------------ VSTn / VLDn (multiple n-element structures) */ inc = (fB & 1) + 1; if (fB == BITS4(0,0,1,0) // Dd, Dd+1, Dd+2, Dd+3 inc = 1 regs = 4 || fB == BITS4(0,1,1,0) // Dd, Dd+1, Dd+2 inc = 1 regs = 3 || fB == BITS4(0,1,1,1) // Dd inc = 2 regs = 1 || fB == BITS4(1,0,1,0)) { // Dd, Dd+1 inc = 1 regs = 2 N = 0; // VLD1/VST1. 'inc' does not appear to have any // meaning for the VLD1/VST1 cases. 'regs' is the number of // registers involved. if (rD + regs > 32) return False; } else if (fB == BITS4(0,0,1,1) // Dd, Dd+1, Dd+2, Dd+3 inc=2 regs = 2 || fB == BITS4(1,0,0,0) // Dd, Dd+1 inc=1 regs = 1 || fB == BITS4(1,0,0,1)) { // Dd, Dd+2 inc=2 regs = 1 N = 1; // VLD2/VST2. 'regs' is the number of register-pairs involved if (regs == 1 && inc == 1 && rD + 1 >= 32) return False; if (regs == 1 && inc == 2 && rD + 2 >= 32) return False; if (regs == 2 && inc == 2 && rD + 3 >= 32) return False; } else if (fB == BITS4(0,1,0,0) || fB == BITS4(0,1,0,1)) { N = 2; // VLD3/VST3 if (inc == 1 && rD + 2 >= 32) return False; if (inc == 2 && rD + 4 >= 32) return False; } else if (fB == BITS4(0,0,0,0) || fB == BITS4(0,0,0,1)) { N = 3; // VLD4/VST4 if (inc == 1 && rD + 3 >= 32) return False; if (inc == 2 && rD + 6 >= 32) return False; } else { return False; } if (N == 1 && fB == BITS4(0,0,1,1)) { regs = 2; } else if (N == 0) { if (fB == BITS4(1,0,1,0)) { regs = 2; } else if (fB == BITS4(0,1,1,0)) { regs = 3; } else if (fB == BITS4(0,0,1,0)) { regs = 4; } } size = INSN(7,6); if (N == 0 && size == 3) size = 2; if (size == 3) return False; // go uncond if (condT != IRTemp_INVALID) mk_skip_over_T32_if_cond_is_false(condT); // now uncond IRTemp addr = newTemp(Ity_I32); assign(addr, mkexpr(initialRn)); if (N == 0 /* No interleaving -- VLD1/VST1 */) { UInt r; vassert(regs == 1 || regs == 2 || regs == 3 || regs == 4); /* inc has no relevance here */ for (r = 0; r < regs; r++) { if (bL) putDRegI64(rD+r, loadLE(Ity_I64, mkexpr(addr)), IRTemp_INVALID); else storeLE(mkexpr(addr), getDRegI64(rD+r)); IRTemp tmp = newTemp(Ity_I32); assign(tmp, binop(Iop_Add32, mkexpr(addr), mkU32(8))); addr = tmp; } } else if (N == 1 /* 2-interleaving -- VLD2/VST2 */) { vassert( (regs == 1 && (inc == 1 || inc == 2)) || (regs == 2 && inc == 2) ); // Make 'nregs' be the number of registers and 'regstep' // equal the actual register-step. The ARM encoding, using 'regs' // and 'inc', is bizarre. 
After this, we have: // Dd, Dd+1 regs = 1, inc = 1, nregs = 2, regstep = 1 // Dd, Dd+2 regs = 1, inc = 2, nregs = 2, regstep = 2 // Dd, Dd+1, Dd+2, Dd+3 regs = 2, inc = 2, nregs = 4, regstep = 1 UInt nregs = 2; UInt regstep = 1; if (regs == 1 && inc == 1) { /* nothing */ } else if (regs == 1 && inc == 2) { regstep = 2; } else if (regs == 2 && inc == 2) { nregs = 4; } else { vassert(0); } // 'a' is address, // 'di' is interleaved data, 'du' is uninterleaved data if (nregs == 2) { IRExpr* a0 = binop(Iop_Add32, mkexpr(addr), mkU32(0)); IRExpr* a1 = binop(Iop_Add32, mkexpr(addr), mkU32(8)); IRTemp di0 = newTemp(Ity_I64); IRTemp di1 = newTemp(Ity_I64); IRTemp du0 = newTemp(Ity_I64); IRTemp du1 = newTemp(Ity_I64); if (bL) { assign(di0, loadLE(Ity_I64, a0)); assign(di1, loadLE(Ity_I64, a1)); math_DEINTERLEAVE_2(&du0, &du1, di0, di1, 1 << size); putDRegI64(rD + 0 * regstep, mkexpr(du0), IRTemp_INVALID); putDRegI64(rD + 1 * regstep, mkexpr(du1), IRTemp_INVALID); } else { assign(du0, getDRegI64(rD + 0 * regstep)); assign(du1, getDRegI64(rD + 1 * regstep)); math_INTERLEAVE_2(&di0, &di1, du0, du1, 1 << size); storeLE(a0, mkexpr(di0)); storeLE(a1, mkexpr(di1)); } IRTemp tmp = newTemp(Ity_I32); assign(tmp, binop(Iop_Add32, mkexpr(addr), mkU32(16))); addr = tmp; } else { vassert(nregs == 4); vassert(regstep == 1); IRExpr* a0 = binop(Iop_Add32, mkexpr(addr), mkU32(0)); IRExpr* a1 = binop(Iop_Add32, mkexpr(addr), mkU32(8)); IRExpr* a2 = binop(Iop_Add32, mkexpr(addr), mkU32(16)); IRExpr* a3 = binop(Iop_Add32, mkexpr(addr), mkU32(24)); IRTemp di0 = newTemp(Ity_I64); IRTemp di1 = newTemp(Ity_I64); IRTemp di2 = newTemp(Ity_I64); IRTemp di3 = newTemp(Ity_I64); IRTemp du0 = newTemp(Ity_I64); IRTemp du1 = newTemp(Ity_I64); IRTemp du2 = newTemp(Ity_I64); IRTemp du3 = newTemp(Ity_I64); if (bL) { assign(di0, loadLE(Ity_I64, a0)); assign(di1, loadLE(Ity_I64, a1)); assign(di2, loadLE(Ity_I64, a2)); assign(di3, loadLE(Ity_I64, a3)); // Note spooky interleaving: du0, du2, di0, di1 etc math_DEINTERLEAVE_2(&du0, &du2, di0, di1, 1 << size); math_DEINTERLEAVE_2(&du1, &du3, di2, di3, 1 << size); putDRegI64(rD + 0 * regstep, mkexpr(du0), IRTemp_INVALID); putDRegI64(rD + 1 * regstep, mkexpr(du1), IRTemp_INVALID); putDRegI64(rD + 2 * regstep, mkexpr(du2), IRTemp_INVALID); putDRegI64(rD + 3 * regstep, mkexpr(du3), IRTemp_INVALID); } else { assign(du0, getDRegI64(rD + 0 * regstep)); assign(du1, getDRegI64(rD + 1 * regstep)); assign(du2, getDRegI64(rD + 2 * regstep)); assign(du3, getDRegI64(rD + 3 * regstep)); // Note spooky interleaving: du0, du2, di0, di1 etc math_INTERLEAVE_2(&di0, &di1, du0, du2, 1 << size); math_INTERLEAVE_2(&di2, &di3, du1, du3, 1 << size); storeLE(a0, mkexpr(di0)); storeLE(a1, mkexpr(di1)); storeLE(a2, mkexpr(di2)); storeLE(a3, mkexpr(di3)); } IRTemp tmp = newTemp(Ity_I32); assign(tmp, binop(Iop_Add32, mkexpr(addr), mkU32(32))); addr = tmp; } } else if (N == 2 /* 3-interleaving -- VLD3/VST3 */) { // Dd, Dd+1, Dd+2 regs = 1, inc = 1 // Dd, Dd+2, Dd+4 regs = 1, inc = 2 vassert(regs == 1 && (inc == 1 || inc == 2)); IRExpr* a0 = binop(Iop_Add32, mkexpr(addr), mkU32(0)); IRExpr* a1 = binop(Iop_Add32, mkexpr(addr), mkU32(8)); IRExpr* a2 = binop(Iop_Add32, mkexpr(addr), mkU32(16)); IRTemp di0 = newTemp(Ity_I64); IRTemp di1 = newTemp(Ity_I64); IRTemp di2 = newTemp(Ity_I64); IRTemp du0 = newTemp(Ity_I64); IRTemp du1 = newTemp(Ity_I64); IRTemp du2 = newTemp(Ity_I64); if (bL) { assign(di0, loadLE(Ity_I64, a0)); assign(di1, loadLE(Ity_I64, a1)); assign(di2, loadLE(Ity_I64, a2)); math_DEINTERLEAVE_3(&du0, &du1, &du2, 
di0, di1, di2, 1 << size); putDRegI64(rD + 0 * inc, mkexpr(du0), IRTemp_INVALID); putDRegI64(rD + 1 * inc, mkexpr(du1), IRTemp_INVALID); putDRegI64(rD + 2 * inc, mkexpr(du2), IRTemp_INVALID); } else { assign(du0, getDRegI64(rD + 0 * inc)); assign(du1, getDRegI64(rD + 1 * inc)); assign(du2, getDRegI64(rD + 2 * inc)); math_INTERLEAVE_3(&di0, &di1, &di2, du0, du1, du2, 1 << size); storeLE(a0, mkexpr(di0)); storeLE(a1, mkexpr(di1)); storeLE(a2, mkexpr(di2)); } IRTemp tmp = newTemp(Ity_I32); assign(tmp, binop(Iop_Add32, mkexpr(addr), mkU32(24))); addr = tmp; } else if (N == 3 /* 4-interleaving -- VLD4/VST4 */) { // Dd, Dd+1, Dd+2, Dd+3 regs = 1, inc = 1 // Dd, Dd+2, Dd+4, Dd+6 regs = 1, inc = 2 vassert(regs == 1 && (inc == 1 || inc == 2)); IRExpr* a0 = binop(Iop_Add32, mkexpr(addr), mkU32(0)); IRExpr* a1 = binop(Iop_Add32, mkexpr(addr), mkU32(8)); IRExpr* a2 = binop(Iop_Add32, mkexpr(addr), mkU32(16)); IRExpr* a3 = binop(Iop_Add32, mkexpr(addr), mkU32(24)); IRTemp di0 = newTemp(Ity_I64); IRTemp di1 = newTemp(Ity_I64); IRTemp di2 = newTemp(Ity_I64); IRTemp di3 = newTemp(Ity_I64); IRTemp du0 = newTemp(Ity_I64); IRTemp du1 = newTemp(Ity_I64); IRTemp du2 = newTemp(Ity_I64); IRTemp du3 = newTemp(Ity_I64); if (bL) { assign(di0, loadLE(Ity_I64, a0)); assign(di1, loadLE(Ity_I64, a1)); assign(di2, loadLE(Ity_I64, a2)); assign(di3, loadLE(Ity_I64, a3)); math_DEINTERLEAVE_4(&du0, &du1, &du2, &du3, di0, di1, di2, di3, 1 << size); putDRegI64(rD + 0 * inc, mkexpr(du0), IRTemp_INVALID); putDRegI64(rD + 1 * inc, mkexpr(du1), IRTemp_INVALID); putDRegI64(rD + 2 * inc, mkexpr(du2), IRTemp_INVALID); putDRegI64(rD + 3 * inc, mkexpr(du3), IRTemp_INVALID); } else { assign(du0, getDRegI64(rD + 0 * inc)); assign(du1, getDRegI64(rD + 1 * inc)); assign(du2, getDRegI64(rD + 2 * inc)); assign(du3, getDRegI64(rD + 3 * inc)); math_INTERLEAVE_4(&di0, &di1, &di2, &di3, du0, du1, du2, du3, 1 << size); storeLE(a0, mkexpr(di0)); storeLE(a1, mkexpr(di1)); storeLE(a2, mkexpr(di2)); storeLE(a3, mkexpr(di3)); } IRTemp tmp = newTemp(Ity_I32); assign(tmp, binop(Iop_Add32, mkexpr(addr), mkU32(32))); addr = tmp; } else { vassert(0); } /* Writeback */ if (rM != 15) { IRExpr* e; if (rM == 13) { e = binop(Iop_Add32, mkexpr(initialRn), mkU32(8 * (N + 1) * regs)); } else { e = binop(Iop_Add32, mkexpr(initialRn), mkexpr(initialRm)); } if (isT) putIRegT(rN, e, IRTemp_INVALID); else putIRegA(rN, e, IRTemp_INVALID, Ijk_Boring); } DIP("v%s%u.%d {", bL ? "ld" : "st", N + 1, 8 << INSN(7,6)); if ((inc == 1 && regs * (N + 1) > 1) || (inc == 2 && regs > 1 && N > 0)) { DIP("d%u-d%u", rD, rD + regs * (N + 1) - 1); } else { UInt r; for (r = 0; r < regs; r++) { for (i = 0; i <= N; i++) { if (i || r) DIP(", "); DIP("d%u", rD + r + i * inc); } } } DIP("}, [r%u]", rN); if (rM != 13 && rM != 15) { DIP(", r%u\n", rM); } else { DIP("%s\n", (rM != 15) ? "!" : ""); } return True; } # undef INSN } /*------------------------------------------------------------*/ /*--- NEON, top level control ---*/ /*------------------------------------------------------------*/ /* Both ARM and Thumb */ /* Translate a NEON instruction. If successful, returns True and *dres may or may not be updated. If failure, returns False and doesn't change *dres nor create any IR. The Thumb and ARM encodings are similar for the 24 bottom bits, but the top 8 bits are slightly different. In both cases, the caller must pass the entire 32 bits. Callers may pass any instruction; this ignores non-NEON ones. 
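Concretely, for the data-processing case: a Thumb NEON insn whose
top byte is 0xEF (U=0) or 0xFF (U=1) is rebuilt below with top byte
0xF2 or 0xF3 respectively, which is the corresponding ARM encoding,
and then handed to the same sub-decoder.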
Caller must supply an IRTemp 'condT' holding the gating condition, or IRTemp_INVALID indicating the insn is always executed. In ARM code, this must always be IRTemp_INVALID because NEON insns are unconditional for ARM. Finally, the caller must indicate whether this occurs in ARM or in Thumb code. This only handles NEON for ARMv7 and below. The NEON extensions for v8 are handled by decode_V8_instruction. */ static Bool decode_NEON_instruction_ARMv7_and_below ( /*MOD*/DisResult* dres, UInt insn32, IRTemp condT, Bool isT ) { # define INSN(_bMax,_bMin) SLICE_UInt(insn32, (_bMax), (_bMin)) /* There are two kinds of instruction to deal with: load/store and data processing. In each case, in ARM mode we merely identify the kind, and pass it on to the relevant sub-handler. In Thumb mode we identify the kind, swizzle the bits around to make it have the same encoding as in ARM, and hand it on to the sub-handler. */ /* In ARM mode, NEON instructions can't be conditional. */ if (!isT) vassert(condT == IRTemp_INVALID); /* Data processing: Thumb: 111U 1111 AAAA Axxx xxxx BBBB CCCC xxxx ARM: 1111 001U AAAA Axxx xxxx BBBB CCCC xxxx */ if (!isT && INSN(31,25) == BITS7(1,1,1,1,0,0,1)) { // ARM, DP return dis_neon_data_processing(INSN(31,0), condT); } if (isT && INSN(31,29) == BITS3(1,1,1) && INSN(27,24) == BITS4(1,1,1,1)) { // Thumb, DP UInt reformatted = INSN(23,0); reformatted |= (((UInt)INSN(28,28)) << 24); // U bit reformatted |= (((UInt)BITS7(1,1,1,1,0,0,1)) << 25); return dis_neon_data_processing(reformatted, condT); } /* Load/store: Thumb: 1111 1001 AxL0 xxxx xxxx BBBB xxxx xxxx ARM: 1111 0100 AxL0 xxxx xxxx BBBB xxxx xxxx */ if (!isT && INSN(31,24) == BITS8(1,1,1,1,0,1,0,0)) { // ARM, memory return dis_neon_load_or_store(INSN(31,0), isT, condT); } if (isT && INSN(31,24) == BITS8(1,1,1,1,1,0,0,1)) { UInt reformatted = INSN(23,0); reformatted |= (((UInt)BITS8(1,1,1,1,0,1,0,0)) << 24); return dis_neon_load_or_store(reformatted, isT, condT); } /* Doesn't match. */ return False; # undef INSN } /*------------------------------------------------------------*/ /*--- V6 MEDIA instructions ---*/ /*------------------------------------------------------------*/ /* Both ARM and Thumb */ /* Translate a V6 media instruction. If successful, returns True and *dres may or may not be updated. If failure, returns False and doesn't change *dres nor create any IR. The Thumb and ARM encodings are completely different. In Thumb mode, the caller must pass the entire 32 bits. In ARM mode it must pass the lower 28 bits. Apart from that, callers may pass any instruction; this function ignores anything it doesn't recognise. Caller must supply an IRTemp 'condT' holding the gating condition, or IRTemp_INVALID indicating the insn is always executed. Caller must also supply an ARMCondcode 'conq'. This is only used for debug printing, no other purpose. For ARM, this is simply the top 4 bits of the original instruction. For Thumb, the condition is not (really) known until run time, and so ARMCondAL should be passed, only so that printing of these instructions does not show any condition. Finally, the caller must indicate whether this occurs in ARM or in Thumb code. 
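One idiom recurs throughout the handlers that follow: a signed
16-bit half of a register is extracted as
Sar32(Shl32(reg, bit ? 0 : 16), 16), that is, shift the wanted
halfword to the top and arithmetically shift it back down, which
sign-extends it to 32 bits.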
*/ static Bool decode_V6MEDIA_instruction ( /*MOD*/DisResult* dres, UInt insnv6m, IRTemp condT, ARMCondcode conq, Bool isT ) { # define INSNA(_bMax,_bMin) SLICE_UInt(insnv6m, (_bMax), (_bMin)) # define INSNT0(_bMax,_bMin) SLICE_UInt( ((insnv6m >> 16) & 0xFFFF), \ (_bMax), (_bMin) ) # define INSNT1(_bMax,_bMin) SLICE_UInt( ((insnv6m >> 0) & 0xFFFF), \ (_bMax), (_bMin) ) HChar dis_buf[128]; dis_buf[0] = 0; if (isT) { vassert(conq == ARMCondAL); } else { vassert(INSNA(31,28) == BITS4(0,0,0,0)); // caller's obligation vassert(conq >= ARMCondEQ && conq <= ARMCondAL); } /* ----------- smulbb, smulbt, smultb, smultt ----------- */ { UInt regD = 99, regM = 99, regN = 99, bitM = 0, bitN = 0; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFB1 && INSNT1(15,12) == BITS4(1,1,1,1) && INSNT1(7,6) == BITS2(0,0)) { regD = INSNT1(11,8); regM = INSNT1(3,0); regN = INSNT0(3,0); bitM = INSNT1(4,4); bitN = INSNT1(5,5); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (BITS8(0,0,0,1,0,1,1,0) == INSNA(27,20) && BITS4(0,0,0,0) == INSNA(15,12) && BITS4(1,0,0,0) == (INSNA(7,4) & BITS4(1,0,0,1)) ) { regD = INSNA(19,16); regM = INSNA(11,8); regN = INSNA(3,0); bitM = INSNA(6,6); bitN = INSNA(5,5); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp srcN = newTemp(Ity_I32); IRTemp srcM = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); assign( srcN, binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regN) : getIRegA(regN), mkU8(bitN ? 0 : 16)), mkU8(16)) ); assign( srcM, binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regM) : getIRegA(regM), mkU8(bitM ? 0 : 16)), mkU8(16)) ); assign( res, binop(Iop_Mul32, mkexpr(srcN), mkexpr(srcM)) ); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); DIP( "smul%c%c%s r%u, r%u, r%u\n", bitN ? 't' : 'b', bitM ? 't' : 'b', nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ------------ smulwb
<c> <Rd>,<Rn>,<Rm> ------------- */ /* ------------ smulwt<c> <Rd>,<Rn>,<Rm>
------------- */ { UInt regD = 99, regN = 99, regM = 99, bitM = 0; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFB3 && INSNT1(15,12) == BITS4(1,1,1,1) && INSNT1(7,5) == BITS3(0,0,0)) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); bitM = INSNT1(4,4); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,0,1,0) && INSNA(15,12) == BITS4(0,0,0,0) && (INSNA(7,4) & BITS4(1,0,1,1)) == BITS4(1,0,1,0)) { regD = INSNA(19,16); regN = INSNA(3,0); regM = INSNA(11,8); bitM = INSNA(6,6); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_prod = newTemp(Ity_I64); assign( irt_prod, binop(Iop_MullS32, isT ? getIRegT(regN) : getIRegA(regN), binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regM) : getIRegA(regM), mkU8(bitM ? 0 : 16)), mkU8(16))) ); IRExpr* ire_result = binop(Iop_Or32, binop( Iop_Shl32, unop(Iop_64HIto32, mkexpr(irt_prod)), mkU8(16) ), binop( Iop_Shr32, unop(Iop_64to32, mkexpr(irt_prod)), mkU8(16) ) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP("smulw%c%s r%u, r%u, r%u\n", bitM ? 't' : 'b', nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------ pkhbt
<c> Rd, Rn, Rm {,LSL #imm} ------------- */ /* ------------ pkhtb<c>
Rd, Rn, Rm {,ASR #imm} ------------- */ { UInt regD = 99, regN = 99, regM = 99, imm5 = 99, shift_type = 99; Bool tbform = False; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xEAC && INSNT1(15,15) == 0 && INSNT1(4,4) == 0) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); imm5 = (INSNT1(14,12) << 2) | INSNT1(7,6); shift_type = (INSNT1(5,5) << 1) | 0; tbform = (INSNT1(5,5) == 0) ? False : True; if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,1,0,0,0) && INSNA(5,4) == BITS2(0,1) && (INSNA(6,6) == 0 || INSNA(6,6) == 1) ) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); imm5 = INSNA(11,7); shift_type = (INSNA(6,6) << 1) | 0; tbform = (INSNA(6,6) == 0) ? False : True; if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_regM_shift = newTemp(Ity_I32); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); compute_result_and_C_after_shift_by_imm5( dis_buf, &irt_regM_shift, NULL, irt_regM, shift_type, imm5, regM ); UInt mask = (tbform == True) ? 0x0000FFFF : 0xFFFF0000; IRExpr* ire_result = binop( Iop_Or32, binop(Iop_And32, mkexpr(irt_regM_shift), mkU32(mask)), binop(Iop_And32, isT ? getIRegT(regN) : getIRegA(regN), unop(Iop_Not32, mkU32(mask))) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "pkh%s%s r%u, r%u, r%u %s\n", tbform ? "tb" : "bt", nCC(conq), regD, regN, regM, dis_buf ); return True; } /* fall through */ } /* ---------- usat
<c> <Rd>,#<imm5>,<Rn>{,<shift>
} ----------- */ { UInt regD = 99, regN = 99, shift_type = 99, imm5 = 99, sat_imm = 99; Bool gate = False; if (isT) { if (INSNT0(15,6) == BITS10(1,1,1,1,0,0,1,1,1,0) && INSNT0(4,4) == 0 && INSNT1(15,15) == 0 && INSNT1(5,5) == 0) { regD = INSNT1(11,8); regN = INSNT0(3,0); shift_type = (INSNT0(5,5) << 1) | 0; imm5 = (INSNT1(14,12) << 2) | INSNT1(7,6); sat_imm = INSNT1(4,0); if (!isBadRegT(regD) && !isBadRegT(regN)) gate = True; if (shift_type == BITS2(1,0) && imm5 == 0) gate = False; } } else { if (INSNA(27,21) == BITS7(0,1,1,0,1,1,1) && INSNA(5,4) == BITS2(0,1)) { regD = INSNA(15,12); regN = INSNA(3,0); shift_type = (INSNA(6,6) << 1) | 0; imm5 = INSNA(11,7); sat_imm = INSNA(20,16); if (regD != 15 && regN != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regN_shift = newTemp(Ity_I32); IRTemp irt_sat_Q = newTemp(Ity_I32); IRTemp irt_result = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); compute_result_and_C_after_shift_by_imm5( dis_buf, &irt_regN_shift, NULL, irt_regN, shift_type, imm5, regN ); armUnsignedSatQ( &irt_result, &irt_sat_Q, irt_regN_shift, sat_imm ); or_into_QFLAG32( mkexpr(irt_sat_Q), condT ); if (isT) putIRegT( regD, mkexpr(irt_result), condT ); else putIRegA( regD, mkexpr(irt_result), condT, Ijk_Boring ); DIP("usat%s r%u, #0x%04x, %s\n", nCC(conq), regD, imm5, dis_buf); return True; } /* fall through */ } /* ----------- ssat
<c> <Rd>,#<imm5>,<Rn>{,<shift>
} ----------- */ { UInt regD = 99, regN = 99, shift_type = 99, imm5 = 99, sat_imm = 99; Bool gate = False; if (isT) { if (INSNT0(15,6) == BITS10(1,1,1,1,0,0,1,1,0,0) && INSNT0(4,4) == 0 && INSNT1(15,15) == 0 && INSNT1(5,5) == 0) { regD = INSNT1(11,8); regN = INSNT0(3,0); shift_type = (INSNT0(5,5) << 1) | 0; imm5 = (INSNT1(14,12) << 2) | INSNT1(7,6); sat_imm = INSNT1(4,0) + 1; if (!isBadRegT(regD) && !isBadRegT(regN)) gate = True; if (shift_type == BITS2(1,0) && imm5 == 0) gate = False; } } else { if (INSNA(27,21) == BITS7(0,1,1,0,1,0,1) && INSNA(5,4) == BITS2(0,1)) { regD = INSNA(15,12); regN = INSNA(3,0); shift_type = (INSNA(6,6) << 1) | 0; imm5 = INSNA(11,7); sat_imm = INSNA(20,16) + 1; if (regD != 15 && regN != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regN_shift = newTemp(Ity_I32); IRTemp irt_sat_Q = newTemp(Ity_I32); IRTemp irt_result = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); compute_result_and_C_after_shift_by_imm5( dis_buf, &irt_regN_shift, NULL, irt_regN, shift_type, imm5, regN ); armSignedSatQ( irt_regN_shift, sat_imm, &irt_result, &irt_sat_Q ); or_into_QFLAG32( mkexpr(irt_sat_Q), condT ); if (isT) putIRegT( regD, mkexpr(irt_result), condT ); else putIRegA( regD, mkexpr(irt_result), condT, Ijk_Boring ); DIP( "ssat%s r%u, #0x%04x, %s\n", nCC(conq), regD, imm5, dis_buf); return True; } /* fall through */ } /* ----------- ssat16
<c> <Rd>,#<imm>,<Rn>
----------- */ { UInt regD = 99, regN = 99, sat_imm = 99; Bool gate = False; if (isT) { if (INSNT0(15,6) == BITS10(1,1,1,1,0,0,1,1,0,0) && INSNT0(5,4) == BITS2(1,0) && INSNT1(15,12) == BITS4(0,0,0,0) && INSNT1(7,4) == BITS4(0,0,0,0)) { regD = INSNT1(11,8); regN = INSNT0(3,0); sat_imm = INSNT1(3,0) + 1; if (!isBadRegT(regD) && !isBadRegT(regN)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,1,0,1,0) && INSNA(11,4) == BITS8(1,1,1,1,0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(3,0); sat_imm = INSNA(19,16) + 1; if (regD != 15 && regN != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regN_lo = newTemp(Ity_I32); IRTemp irt_regN_hi = newTemp(Ity_I32); IRTemp irt_Q_lo = newTemp(Ity_I32); IRTemp irt_Q_hi = newTemp(Ity_I32); IRTemp irt_res_lo = newTemp(Ity_I32); IRTemp irt_res_hi = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regN_lo, binop( Iop_Sar32, binop(Iop_Shl32, mkexpr(irt_regN), mkU8(16)), mkU8(16)) ); assign( irt_regN_hi, binop(Iop_Sar32, mkexpr(irt_regN), mkU8(16)) ); armSignedSatQ( irt_regN_lo, sat_imm, &irt_res_lo, &irt_Q_lo ); or_into_QFLAG32( mkexpr(irt_Q_lo), condT ); armSignedSatQ( irt_regN_hi, sat_imm, &irt_res_hi, &irt_Q_hi ); or_into_QFLAG32( mkexpr(irt_Q_hi), condT ); IRExpr* ire_result = binop(Iop_Or32, binop(Iop_And32, mkexpr(irt_res_lo), mkU32(0xFFFF)), binop(Iop_Shl32, mkexpr(irt_res_hi), mkU8(16))); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "ssat16%s r%u, #0x%04x, r%u\n", nCC(conq), regD, sat_imm, regN ); return True; } /* fall through */ } /* -------------- usat16
<c> <Rd>,#<imm4>,<Rn>
--------------- */ { UInt regD = 99, regN = 99, sat_imm = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xF3A && (INSNT1(15,0) & 0xF0F0) == 0x0000) { regN = INSNT0(3,0); regD = INSNT1(11,8); sat_imm = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,1,1,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(3,0); sat_imm = INSNA(19,16); if (regD != 15 && regN != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regN_lo = newTemp(Ity_I32); IRTemp irt_regN_hi = newTemp(Ity_I32); IRTemp irt_Q_lo = newTemp(Ity_I32); IRTemp irt_Q_hi = newTemp(Ity_I32); IRTemp irt_res_lo = newTemp(Ity_I32); IRTemp irt_res_hi = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regN_lo, binop( Iop_Sar32, binop(Iop_Shl32, mkexpr(irt_regN), mkU8(16)), mkU8(16)) ); assign( irt_regN_hi, binop(Iop_Sar32, mkexpr(irt_regN), mkU8(16)) ); armUnsignedSatQ( &irt_res_lo, &irt_Q_lo, irt_regN_lo, sat_imm ); or_into_QFLAG32( mkexpr(irt_Q_lo), condT ); armUnsignedSatQ( &irt_res_hi, &irt_Q_hi, irt_regN_hi, sat_imm ); or_into_QFLAG32( mkexpr(irt_Q_hi), condT ); IRExpr* ire_result = binop( Iop_Or32, binop(Iop_Shl32, mkexpr(irt_res_hi), mkU8(16)), mkexpr(irt_res_lo) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "usat16%s r%u, #0x%04x, r%u\n", nCC(conq), regD, sat_imm, regN ); return True; } /* fall through */ } /* -------------- uadd16
<c> <Rd>,<Rn>,<Rm>
-------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA9 && (INSNT1(15,0) & 0xF0F0) == 0xF040) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Add16x2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, binop(Iop_HAdd16Ux2, mkexpr(rNt), mkexpr(rMt))); set_GE_32_10_from_bits_31_15(reso, condT); DIP("uadd16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* -------------- sadd16
<c> <Rd>,<Rn>,<Rm>
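   (Signed variant of the same trick: GE[n] must be 1 iff the true
   17-bit signed sum of a lane is non-negative. Iop_HAdd16Sx2
   produces that sum halved, so its lane sign bit is the sum's sign;
   the Iop_Not32 flips it, making bits 31 and 15 of 'reso' read 1
   for sum >= 0.)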
-------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA9 && (INSNT1(15,0) & 0xF0F0) == 0xF000) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Add16x2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, unop(Iop_Not32, binop(Iop_HAdd16Sx2, mkexpr(rNt), mkexpr(rMt)))); set_GE_32_10_from_bits_31_15(reso, condT); DIP("sadd16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ---------------- usub16
<c> <Rd>,<Rn>,<Rm>
---------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAD && (INSNT1(15,0) & 0xF0F0) == 0xF040) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Sub16x2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, unop(Iop_Not32, binop(Iop_HSub16Ux2, mkexpr(rNt), mkexpr(rMt)))); set_GE_32_10_from_bits_31_15(reso, condT); DIP("usub16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* -------------- ssub16
<c> <Rd>,<Rn>,<Rm>
-------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAD && (INSNT1(15,0) & 0xF0F0) == 0xF000) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Sub16x2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, unop(Iop_Not32, binop(Iop_HSub16Sx2, mkexpr(rNt), mkexpr(rMt)))); set_GE_32_10_from_bits_31_15(reso, condT); DIP("ssub16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- uadd8
<c> <Rd>,<Rn>,<Rm>
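   (Same carry-recovery scheme as uadd16, but per byte: Iop_HAdd8Ux4
   leaves each byte lane's carry-out in bits 31/23/15/7, which map
   directly onto GE[3]..GE[0].)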
---------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF040) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && (INSNA(7,4) == BITS4(1,0,0,1))) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Add8x4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, binop(Iop_HAdd8Ux4, mkexpr(rNt), mkexpr(rMt))); set_GE_3_2_1_0_from_bits_31_23_15_7(reso, condT); DIP("uadd8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------- sadd8
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF000) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && (INSNA(7,4) == BITS4(1,0,0,1))) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Add8x4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, unop(Iop_Not32, binop(Iop_HAdd8Sx4, mkexpr(rNt), mkexpr(rMt)))); set_GE_3_2_1_0_from_bits_31_23_15_7(reso, condT); DIP("sadd8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------- usub8
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAC && (INSNT1(15,0) & 0xF0F0) == 0xF040) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && (INSNA(7,4) == BITS4(1,1,1,1))) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Sub8x4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, unop(Iop_Not32, binop(Iop_HSub8Ux4, mkexpr(rNt), mkexpr(rMt)))); set_GE_3_2_1_0_from_bits_31_23_15_7(reso, condT); DIP("usub8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------- ssub8
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAC && (INSNT1(15,0) & 0xF0F0) == 0xF000) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res = newTemp(Ity_I32); IRTemp reso = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res, binop(Iop_Sub8x4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res), condT ); else putIRegA( regD, mkexpr(res), condT, Ijk_Boring ); assign(reso, unop(Iop_Not32, binop(Iop_HSub8Sx4, mkexpr(rNt), mkexpr(rMt)))); set_GE_3_2_1_0_from_bits_31_23_15_7(reso, condT); DIP("ssub8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------ qadd8
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF010) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QAdd8Sx4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("qadd8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------ qsub8
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAC && (INSNT1(15,0) & 0xF0F0) == 0xF010) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QSub8Sx4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("qsub8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------ uqadd8
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF050) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && (INSNA(7,4) == BITS4(1,0,0,1))) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QAdd8Ux4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uqadd8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------ uqsub8
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAC && (INSNT1(15,0) & 0xF0F0) == 0xF050) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && (INSNA(7,4) == BITS4(1,1,1,1))) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QSub8Ux4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uqsub8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- uhadd8
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF060) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HAdd8Ux4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uhadd8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- uhadd16
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA9 && (INSNT1(15,0) & 0xF0F0) == 0xF060) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HAdd16Ux2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uhadd16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- shadd8
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF020) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HAdd8Sx4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("shadd8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------ qadd16
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA9 && (INSNT1(15,0) & 0xF0F0) == 0xF010) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QAdd16Sx2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("qadd16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------ qsub16
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAD && (INSNT1(15,0) & 0xF0F0) == 0xF010) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QSub16Sx2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("qsub16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------- qsax
<c> <Rd>,<Rn>,<Rm>
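   (Computes Rd.hi = sat16(Rn.hi - Rm.lo) and Rd.lo = sat16(Rn.lo +
   Rm.hi), each clamped to the signed 16-bit range by armSignedSatQ
   with width 0x10. QSAX does not affect the Q flag, hence the NULL
   Q-out arguments.)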
------------------- */ /* note: the hardware seems to construct the result differently from wot the manual says. */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAE && (INSNT1(15,0) & 0xF0F0) == 0xF010) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_sum_res = newTemp(Ity_I32); IRTemp irt_diff_res = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop( Iop_Sub32, binop( Iop_Sar32, mkexpr(irt_regN), mkU8(16) ), binop( Iop_Sar32, binop(Iop_Shl32, mkexpr(irt_regM), mkU8(16)), mkU8(16) ) ) ); armSignedSatQ( irt_diff, 0x10, &irt_diff_res, NULL); assign( irt_sum, binop( Iop_Add32, binop( Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regN), mkU8(16) ), mkU8(16) ), binop( Iop_Sar32, mkexpr(irt_regM), mkU8(16) )) ); armSignedSatQ( irt_sum, 0x10, &irt_sum_res, NULL ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_diff_res), mkU8(16) ), binop( Iop_And32, mkexpr(irt_sum_res), mkU32(0xFFFF)) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "qsax%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ------------------- qasx
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAA && (INSNT1(15,0) & 0xF0F0) == 0xF010) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_res_sum = newTemp(Ity_I32); IRTemp irt_res_diff = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop( Iop_Sub32, binop( Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regN), mkU8(16) ), mkU8(16) ), binop( Iop_Sar32, mkexpr(irt_regM), mkU8(16) ) ) ); armSignedSatQ( irt_diff, 0x10, &irt_res_diff, NULL ); assign( irt_sum, binop( Iop_Add32, binop( Iop_Sar32, mkexpr(irt_regN), mkU8(16) ), binop( Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regM), mkU8(16) ), mkU8(16) ) ) ); armSignedSatQ( irt_sum, 0x10, &irt_res_sum, NULL ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_res_sum), mkU8(16) ), binop( Iop_And32, mkexpr(irt_res_diff), mkU32(0xFFFF) ) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "qasx%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ------------------- sasx
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAA && (INSNT1(15,0) & 0xF0F0) == 0xF000) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop( Iop_Sub32, binop( Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regN), mkU8(16) ), mkU8(16) ), binop( Iop_Sar32, mkexpr(irt_regM), mkU8(16) ) ) ); assign( irt_sum, binop( Iop_Add32, binop( Iop_Sar32, mkexpr(irt_regN), mkU8(16) ), binop( Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regM), mkU8(16) ), mkU8(16) ) ) ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_sum), mkU8(16) ), binop( Iop_And32, mkexpr(irt_diff), mkU32(0xFFFF) ) ); IRTemp ge10 = newTemp(Ity_I32); assign(ge10, unop(Iop_Not32, mkexpr(irt_diff))); put_GEFLAG32( 0, 31, mkexpr(ge10), condT ); put_GEFLAG32( 1, 31, mkexpr(ge10), condT ); IRTemp ge32 = newTemp(Ity_I32); assign(ge32, unop(Iop_Not32, mkexpr(irt_sum))); put_GEFLAG32( 2, 31, mkexpr(ge32), condT ); put_GEFLAG32( 3, 31, mkexpr(ge32), condT ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "sasx%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* --------------- smuad, smuadx
<c> <Rd>,<Rn>,<Rm>
--------------- */ /* --------------- smsad, smsadx
<c> <Rd>,<Rn>,<Rm>
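   (Both forms compute two signed 16x16 products, lo x lo and
   hi x hi, then add them (smuad) or subtract hi from lo (smusd).
   The X variant rotates Rm by 16 via genROR32 first, so the halves
   cross. Only the add form can overflow 32 bits, so Q is updated
   only when isAD holds.)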
--------------- */ { UInt regD = 99, regN = 99, regM = 99, bitM = 99; Bool gate = False, isAD = False; if (isT) { if ((INSNT0(15,4) == 0xFB2 || INSNT0(15,4) == 0xFB4) && (INSNT1(15,0) & 0xF0E0) == 0xF000) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); bitM = INSNT1(4,4); isAD = INSNT0(15,4) == 0xFB2; if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,1,0,0,0,0) && INSNA(15,12) == BITS4(1,1,1,1) && (INSNA(7,4) & BITS4(1,0,0,1)) == BITS4(0,0,0,1) ) { regD = INSNA(19,16); regN = INSNA(3,0); regM = INSNA(11,8); bitM = INSNA(5,5); isAD = INSNA(6,6) == 0; if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_prod_lo = newTemp(Ity_I32); IRTemp irt_prod_hi = newTemp(Ity_I32); IRTemp tmpM = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( tmpM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_regM, genROR32(tmpM, (bitM & 1) ? 16 : 0) ); assign( irt_prod_lo, binop( Iop_Mul32, binop( Iop_Sar32, binop(Iop_Shl32, mkexpr(irt_regN), mkU8(16)), mkU8(16) ), binop( Iop_Sar32, binop(Iop_Shl32, mkexpr(irt_regM), mkU8(16)), mkU8(16) ) ) ); assign( irt_prod_hi, binop(Iop_Mul32, binop(Iop_Sar32, mkexpr(irt_regN), mkU8(16)), binop(Iop_Sar32, mkexpr(irt_regM), mkU8(16))) ); IRExpr* ire_result = binop( isAD ? Iop_Add32 : Iop_Sub32, mkexpr(irt_prod_lo), mkexpr(irt_prod_hi) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); if (isAD) { or_into_QFLAG32( signed_overflow_after_Add32( ire_result, irt_prod_lo, irt_prod_hi ), condT ); } DIP("smu%cd%s%s r%u, r%u, r%u\n", isAD ? 'a' : 's', bitM ? "x" : "", nCC(conq), regD, regN, regM); return True; } /* fall through */ } /* --------------- smlad{X}
<c> <Rd>,<Rn>,<Rm>,<Ra>
-------------- */ /* --------------- smlsd{X}
<c> <Rd>,<Rn>,<Rm>,<Ra>
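   (As smuad/smusd but accumulating: result = (prod_lo +/- prod_hi)
   + Ra. For the add form the intermediate dual-product sum can
   itself overflow, so Q is or-ed in twice, once for that sum and
   once for the final accumulate; the subtract form only checks the
   accumulate.)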
-------------- */ { UInt regD = 99, regN = 99, regM = 99, regA = 99, bitM = 99; Bool gate = False, isAD = False; if (isT) { if ((INSNT0(15,4) == 0xFB2 || INSNT0(15,4) == 0xFB4) && INSNT1(7,5) == BITS3(0,0,0)) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); regA = INSNT1(15,12); bitM = INSNT1(4,4); isAD = INSNT0(15,4) == 0xFB2; if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM) && !isBadRegT(regA)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,1,0,0,0,0) && (INSNA(7,4) & BITS4(1,0,0,1)) == BITS4(0,0,0,1)) { regD = INSNA(19,16); regA = INSNA(15,12); regN = INSNA(3,0); regM = INSNA(11,8); bitM = INSNA(5,5); isAD = INSNA(6,6) == 0; if (regD != 15 && regN != 15 && regM != 15 && regA != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_regA = newTemp(Ity_I32); IRTemp irt_prod_lo = newTemp(Ity_I32); IRTemp irt_prod_hi = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp tmpM = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regA, isT ? getIRegT(regA) : getIRegA(regA) ); assign( tmpM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_regM, genROR32(tmpM, (bitM & 1) ? 16 : 0) ); assign( irt_prod_lo, binop(Iop_Mul32, binop(Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regN), mkU8(16) ), mkU8(16)), binop(Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regM), mkU8(16) ), mkU8(16))) ); assign( irt_prod_hi, binop( Iop_Mul32, binop( Iop_Sar32, mkexpr(irt_regN), mkU8(16) ), binop( Iop_Sar32, mkexpr(irt_regM), mkU8(16) ) ) ); assign( irt_sum, binop( isAD ? Iop_Add32 : Iop_Sub32, mkexpr(irt_prod_lo), mkexpr(irt_prod_hi) ) ); IRExpr* ire_result = binop(Iop_Add32, mkexpr(irt_sum), mkexpr(irt_regA)); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); if (isAD) { or_into_QFLAG32( signed_overflow_after_Add32( mkexpr(irt_sum), irt_prod_lo, irt_prod_hi ), condT ); } or_into_QFLAG32( signed_overflow_after_Add32( ire_result, irt_sum, irt_regA ), condT ); DIP("sml%cd%s%s r%u, r%u, r%u, r%u\n", isAD ? 'a' : 's', bitM ? "x" : "", nCC(conq), regD, regN, regM, regA); return True; } /* fall through */ } /* ----- smlabb, smlabt, smlatb, smlatt
<Rd>,<Rn>,<Rm>,<Ra>
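   (bitN/bitM select the top or bottom halfword of Rn/Rm: a left
   shift by 0 or 16 followed by an arithmetic right shift by 16
   sign-extends the chosen half. A signed 16x16 product always fits
   in 32 bits, so Q can only be set by the final addition of Ra.)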
----- */ { UInt regD = 99, regN = 99, regM = 99, regA = 99, bitM = 99, bitN = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFB1 && INSNT1(7,6) == BITS2(0,0)) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); regA = INSNT1(15,12); bitM = INSNT1(4,4); bitN = INSNT1(5,5); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM) && !isBadRegT(regA)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,0,0,0) && (INSNA(7,4) & BITS4(1,0,0,1)) == BITS4(1,0,0,0)) { regD = INSNA(19,16); regN = INSNA(3,0); regM = INSNA(11,8); regA = INSNA(15,12); bitM = INSNA(6,6); bitN = INSNA(5,5); if (regD != 15 && regN != 15 && regM != 15 && regA != 15) gate = True; } } if (gate) { IRTemp irt_regA = newTemp(Ity_I32); IRTemp irt_prod = newTemp(Ity_I32); assign( irt_prod, binop(Iop_Mul32, binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regN) : getIRegA(regN), mkU8(bitN ? 0 : 16)), mkU8(16)), binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regM) : getIRegA(regM), mkU8(bitM ? 0 : 16)), mkU8(16))) ); assign( irt_regA, isT ? getIRegT(regA) : getIRegA(regA) ); IRExpr* ire_result = binop(Iop_Add32, mkexpr(irt_prod), mkexpr(irt_regA)); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); or_into_QFLAG32( signed_overflow_after_Add32( ire_result, irt_prod, irt_regA ), condT ); DIP( "smla%c%c%s r%u, r%u, r%u, r%u\n", bitN ? 't' : 'b', bitM ? 't' : 'b', nCC(conq), regD, regN, regM, regA ); return True; } /* fall through */ } /* ----- smlalbb, smlalbt, smlaltb, smlaltt
<RdLo>,<RdHi>,<Rn>,<Rm>
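   (64-bit accumulating variant: the 16x16 product is widened with
   Iop_MullS32 and added to the RdHi:RdLo pair reassembled via
   Iop_32HLto64. No saturation occurs and the Q flag is untouched.)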
----- */ { UInt regDHi = 99, regN = 99, regM = 99, regDLo = 99, bitM = 99, bitN = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFBC && INSNT1(7,6) == BITS2(1,0)) { regN = INSNT0(3,0); regDHi = INSNT1(11,8); regM = INSNT1(3,0); regDLo = INSNT1(15,12); bitM = INSNT1(4,4); bitN = INSNT1(5,5); if (!isBadRegT(regDHi) && !isBadRegT(regN) && !isBadRegT(regM) && !isBadRegT(regDLo) && regDHi != regDLo) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,1,0,0) && (INSNA(7,4) & BITS4(1,0,0,1)) == BITS4(1,0,0,0)) { regDHi = INSNA(19,16); regN = INSNA(3,0); regM = INSNA(11,8); regDLo = INSNA(15,12); bitM = INSNA(6,6); bitN = INSNA(5,5); if (regDHi != 15 && regN != 15 && regM != 15 && regDLo != 15 && regDHi != regDLo) gate = True; } } if (gate) { IRTemp irt_regD = newTemp(Ity_I64); IRTemp irt_prod = newTemp(Ity_I64); IRTemp irt_res = newTemp(Ity_I64); IRTemp irt_resHi = newTemp(Ity_I32); IRTemp irt_resLo = newTemp(Ity_I32); assign( irt_prod, binop(Iop_MullS32, binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regN) : getIRegA(regN), mkU8(bitN ? 0 : 16)), mkU8(16)), binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regM) : getIRegA(regM), mkU8(bitM ? 0 : 16)), mkU8(16))) ); assign( irt_regD, binop(Iop_32HLto64, isT ? getIRegT(regDHi) : getIRegA(regDHi), isT ? getIRegT(regDLo) : getIRegA(regDLo)) ); assign( irt_res, binop(Iop_Add64, mkexpr(irt_regD), mkexpr(irt_prod)) ); assign( irt_resHi, unop(Iop_64HIto32, mkexpr(irt_res)) ); assign( irt_resLo, unop(Iop_64to32, mkexpr(irt_res)) ); if (isT) { putIRegT( regDHi, mkexpr(irt_resHi), condT ); putIRegT( regDLo, mkexpr(irt_resLo), condT ); } else { putIRegA( regDHi, mkexpr(irt_resHi), condT, Ijk_Boring ); putIRegA( regDLo, mkexpr(irt_resLo), condT, Ijk_Boring ); } DIP( "smlal%c%c%s r%u, r%u, r%u, r%u\n", bitN ? 't' : 'b', bitM ? 't' : 'b', nCC(conq), regDHi, regN, regM, regDLo ); return True; } /* fall through */ } /* ----- smlawb, smlawt
<Rd>,<Rn>,<Rm>,<Ra>
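   (The 48-bit product of the full Rn and the selected half of Rm is
   formed as a 64-bit Iop_MullS32; 'prod32' reassembles its bits
   [47:16] by or-ing the hi half shifted left 16 with the lo half
   shifted right 16. Q reflects overflow of the final accumulate
   only.)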
----- */ { UInt regD = 99, regN = 99, regM = 99, regA = 99, bitM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFB3 && INSNT1(7,5) == BITS3(0,0,0)) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); regA = INSNT1(15,12); bitM = INSNT1(4,4); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM) && !isBadRegT(regA)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,0,1,0) && (INSNA(7,4) & BITS4(1,0,1,1)) == BITS4(1,0,0,0)) { regD = INSNA(19,16); regN = INSNA(3,0); regM = INSNA(11,8); regA = INSNA(15,12); bitM = INSNA(6,6); if (regD != 15 && regN != 15 && regM != 15 && regA != 15) gate = True; } } if (gate) { IRTemp irt_regA = newTemp(Ity_I32); IRTemp irt_prod = newTemp(Ity_I64); assign( irt_prod, binop(Iop_MullS32, isT ? getIRegT(regN) : getIRegA(regN), binop(Iop_Sar32, binop(Iop_Shl32, isT ? getIRegT(regM) : getIRegA(regM), mkU8(bitM ? 0 : 16)), mkU8(16))) ); assign( irt_regA, isT ? getIRegT(regA) : getIRegA(regA) ); IRTemp prod32 = newTemp(Ity_I32); assign(prod32, binop(Iop_Or32, binop(Iop_Shl32, unop(Iop_64HIto32, mkexpr(irt_prod)), mkU8(16)), binop(Iop_Shr32, unop(Iop_64to32, mkexpr(irt_prod)), mkU8(16)) )); IRExpr* ire_result = binop(Iop_Add32, mkexpr(prod32), mkexpr(irt_regA)); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); or_into_QFLAG32( signed_overflow_after_Add32( ire_result, prod32, irt_regA ), condT ); DIP( "smlaw%c%s r%u, r%u, r%u, r%u\n", bitM ? 't' : 'b', nCC(conq), regD, regN, regM, regA ); return True; } /* fall through */ } /* ------------------- sel
<c> <Rd>,<Rn>,<Rm>
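   (SEL expands the four GE flags into a byte-granular mask: for
   each flag value g, (g | 0-g) arithmetically shifted right by 31
   is all-ones iff g is nonzero; and-ing with 0x000000ff etc.
   isolates one mask byte. Rd then takes Rn's byte where the flag is
   set, else Rm's byte.)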
-------------------- */ /* fixme: fix up the test in v6media.c so that we can pass the ge flags as part of the test. */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAA && (INSNT1(15,0) & 0xF0F0) == 0xF080) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,1,0,0,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,0,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_ge_flag0 = newTemp(Ity_I32); IRTemp irt_ge_flag1 = newTemp(Ity_I32); IRTemp irt_ge_flag2 = newTemp(Ity_I32); IRTemp irt_ge_flag3 = newTemp(Ity_I32); assign( irt_ge_flag0, get_GEFLAG32(0) ); assign( irt_ge_flag1, get_GEFLAG32(1) ); assign( irt_ge_flag2, get_GEFLAG32(2) ); assign( irt_ge_flag3, get_GEFLAG32(3) ); IRExpr* ire_ge_flag0_or = binop(Iop_Or32, mkexpr(irt_ge_flag0), binop(Iop_Sub32, mkU32(0), mkexpr(irt_ge_flag0))); IRExpr* ire_ge_flag1_or = binop(Iop_Or32, mkexpr(irt_ge_flag1), binop(Iop_Sub32, mkU32(0), mkexpr(irt_ge_flag1))); IRExpr* ire_ge_flag2_or = binop(Iop_Or32, mkexpr(irt_ge_flag2), binop(Iop_Sub32, mkU32(0), mkexpr(irt_ge_flag2))); IRExpr* ire_ge_flag3_or = binop(Iop_Or32, mkexpr(irt_ge_flag3), binop(Iop_Sub32, mkU32(0), mkexpr(irt_ge_flag3))); IRExpr* ire_ge_flags = binop( Iop_Or32, binop(Iop_Or32, binop(Iop_And32, binop(Iop_Sar32, ire_ge_flag0_or, mkU8(31)), mkU32(0x000000ff)), binop(Iop_And32, binop(Iop_Sar32, ire_ge_flag1_or, mkU8(31)), mkU32(0x0000ff00))), binop(Iop_Or32, binop(Iop_And32, binop(Iop_Sar32, ire_ge_flag2_or, mkU8(31)), mkU32(0x00ff0000)), binop(Iop_And32, binop(Iop_Sar32, ire_ge_flag3_or, mkU8(31)), mkU32(0xff000000))) ); IRExpr* ire_result = binop(Iop_Or32, binop(Iop_And32, isT ? getIRegT(regN) : getIRegA(regN), ire_ge_flags ), binop(Iop_And32, isT ? getIRegT(regM) : getIRegA(regM), unop(Iop_Not32, ire_ge_flags))); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP("sel%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ----------------- uxtab16
Rd,Rn,Rm{,rot} ------------------ */ { UInt regD = 99, regN = 99, regM = 99, rotate = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA3 && (INSNT1(15,0) & 0xF0C0) == 0xF080) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); rotate = INSNT1(5,4); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,1,1,0,0) && INSNA(9,4) == BITS6(0,0,0,1,1,1) ) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); rotate = INSNA(11,10); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); IRTemp irt_regM = newTemp(Ity_I32); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); IRTemp irt_rot = newTemp(Ity_I32); assign( irt_rot, binop(Iop_And32, genROR32(irt_regM, 8 * rotate), mkU32(0x00FF00FF)) ); IRExpr* resLo = binop(Iop_And32, binop(Iop_Add32, mkexpr(irt_regN), mkexpr(irt_rot)), mkU32(0x0000FFFF)); IRExpr* resHi = binop(Iop_Add32, binop(Iop_And32, mkexpr(irt_regN), mkU32(0xFFFF0000)), binop(Iop_And32, mkexpr(irt_rot), mkU32(0xFFFF0000))); IRExpr* ire_result = binop( Iop_Or32, resHi, resLo ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "uxtab16%s r%u, r%u, r%u, ROR #%u\n", nCC(conq), regD, regN, regM, 8 * rotate ); return True; } /* fall through */ } /* --------------- usad8 Rd,Rn,Rm ---------------- */ /* --------------- usada8 Rd,Rn,Rm,Ra ---------------- */ { UInt rD = 99, rN = 99, rM = 99, rA = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFB7 && INSNT1(7,4) == BITS4(0,0,0,0)) { rN = INSNT0(3,0); rA = INSNT1(15,12); rD = INSNT1(11,8); rM = INSNT1(3,0); if (!isBadRegT(rD) && !isBadRegT(rN) && !isBadRegT(rM) && rA != 13) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,1,1,0,0,0) && INSNA(7,4) == BITS4(0,0,0,1) ) { rD = INSNA(19,16); rA = INSNA(15,12); rM = INSNA(11,8); rN = INSNA(3,0); if (rD != 15 && rN != 15 && rM != 15 /* but rA can be 15 */) gate = True; } } /* We allow rA == 15, to denote the usad8 (no accumulator) case. */ if (gate) { IRExpr* rNe = isT ? getIRegT(rN) : getIRegA(rN); IRExpr* rMe = isT ? getIRegT(rM) : getIRegA(rM); IRExpr* rAe = rA == 15 ? mkU32(0) : (isT ? getIRegT(rA) : getIRegA(rA)); IRExpr* res = binop(Iop_Add32, binop(Iop_Sad8Ux4, rNe, rMe), rAe); if (isT) putIRegT( rD, res, condT ); else putIRegA( rD, res, condT, Ijk_Boring ); if (rA == 15) { DIP( "usad8%s r%u, r%u, r%u\n", nCC(conq), rD, rN, rM ); } else { DIP( "usada8%s r%u, r%u, r%u, r%u\n", nCC(conq), rD, rN, rM, rA ); } return True; } /* fall through */ } /* ------------------ qadd
<c> <Rd>,<Rm>,<Rn>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF080) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,0,0,0) && INSNA(11,8) == BITS4(0,0,0,0) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QAdd32S, mkexpr(rMt), mkexpr(rNt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); or_into_QFLAG32( signed_overflow_after_Add32( binop(Iop_Add32, mkexpr(rMt), mkexpr(rNt)), rMt, rNt), condT ); DIP("qadd%s r%u, r%u, r%u\n", nCC(conq),regD,regM,regN); return True; } /* fall through */ } /* ------------------ qdadd
<c> <Rd>,<Rm>,<Rn>
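   (QDADD: Rn is saturating-doubled with Iop_QAdd32S(Rn,Rn) and the
   result saturating-added to Rm. Q must be set if either step
   saturates, so each step's wrapped Iop_Add32 result is checked
   against its operands via signed_overflow_after_Add32.)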
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF090) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,1,0,0) && INSNA(11,8) == BITS4(0,0,0,0) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp rN_d = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); or_into_QFLAG32( signed_overflow_after_Add32( binop(Iop_Add32, mkexpr(rNt), mkexpr(rNt)), rNt, rNt), condT ); assign(rN_d, binop(Iop_QAdd32S, mkexpr(rNt), mkexpr(rNt))); assign(res_q, binop(Iop_QAdd32S, mkexpr(rMt), mkexpr(rN_d))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); or_into_QFLAG32( signed_overflow_after_Add32( binop(Iop_Add32, mkexpr(rMt), mkexpr(rN_d)), rMt, rN_d), condT ); DIP("qdadd%s r%u, r%u, r%u\n", nCC(conq),regD,regM,regN); return True; } /* fall through */ } /* ------------------ qsub
<c> <Rd>,<Rm>,<Rn>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF0A0) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,0,1,0) && INSNA(11,8) == BITS4(0,0,0,0) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QSub32S, mkexpr(rMt), mkexpr(rNt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); or_into_QFLAG32( signed_overflow_after_Sub32( binop(Iop_Sub32, mkexpr(rMt), mkexpr(rNt)), rMt, rNt), condT ); DIP("qsub%s r%u, r%u, r%u\n", nCC(conq),regD,regM,regN); return True; } /* fall through */ } /* ------------------ qdsub
<c> <Rd>,<Rm>,<Rn>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA8 && (INSNT1(15,0) & 0xF0F0) == 0xF0B0) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,0,0,1,0,1,1,0) && INSNA(11,8) == BITS4(0,0,0,0) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp rN_d = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); or_into_QFLAG32( signed_overflow_after_Add32( binop(Iop_Add32, mkexpr(rNt), mkexpr(rNt)), rNt, rNt), condT ); assign(rN_d, binop(Iop_QAdd32S, mkexpr(rNt), mkexpr(rNt))); assign(res_q, binop(Iop_QSub32S, mkexpr(rMt), mkexpr(rN_d))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); or_into_QFLAG32( signed_overflow_after_Sub32( binop(Iop_Sub32, mkexpr(rMt), mkexpr(rN_d)), rMt, rN_d), condT ); DIP("qdsub%s r%u, r%u, r%u\n", nCC(conq),regD,regM,regN); return True; } /* fall through */ } /* ------------------ uqsub16
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAD && (INSNT1(15,0) & 0xF0F0) == 0xF050) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QSub16Ux2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uqsub16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- shadd16
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA9 && (INSNT1(15,0) & 0xF0F0) == 0xF020) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HAdd16Sx2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("shadd16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- uhsub8
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAC && (INSNT1(15,0) & 0xF0F0) == 0xF060) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HSub8Ux4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uhsub8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- uhsub16
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAD && (INSNT1(15,0) & 0xF0F0) == 0xF060) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HSub16Ux2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uhsub16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------ uqadd16
<c> <Rd>,<Rn>,<Rm>
------------------ */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA9 && (INSNT1(15,0) & 0xF0F0) == 0xF050) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_QAdd16Ux2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uqadd16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ------------------- uqsax
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAE && (INSNT1(15,0) & 0xF0F0) == 0xF050) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_sum_res = newTemp(Ity_I32); IRTemp irt_diff_res = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop( Iop_Sub32, binop( Iop_Shr32, mkexpr(irt_regN), mkU8(16) ), binop( Iop_Shr32, binop(Iop_Shl32, mkexpr(irt_regM), mkU8(16)), mkU8(16) ) ) ); armUnsignedSatQ( &irt_diff_res, NULL, irt_diff, 0x10); assign( irt_sum, binop( Iop_Add32, binop( Iop_Shr32, binop( Iop_Shl32, mkexpr(irt_regN), mkU8(16) ), mkU8(16) ), binop( Iop_Shr32, mkexpr(irt_regM), mkU8(16) )) ); armUnsignedSatQ( &irt_sum_res, NULL, irt_sum, 0x10 ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_diff_res), mkU8(16) ), binop( Iop_And32, mkexpr(irt_sum_res), mkU32(0xFFFF)) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "uqsax%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ------------------- uqasx
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAA && (INSNT1(15,0) & 0xF0F0) == 0xF050) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,0) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_res_sum = newTemp(Ity_I32); IRTemp irt_res_diff = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop( Iop_Sub32, binop( Iop_Shr32, binop( Iop_Shl32, mkexpr(irt_regN), mkU8(16) ), mkU8(16) ), binop( Iop_Shr32, mkexpr(irt_regM), mkU8(16) ) ) ); armUnsignedSatQ( &irt_res_diff, NULL, irt_diff, 0x10 ); assign( irt_sum, binop( Iop_Add32, binop( Iop_Shr32, mkexpr(irt_regN), mkU8(16) ), binop( Iop_Shr32, binop( Iop_Shl32, mkexpr(irt_regM), mkU8(16) ), mkU8(16) ) ) ); armUnsignedSatQ( &irt_res_sum, NULL, irt_sum, 0x10 ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_res_sum), mkU8(16) ), binop( Iop_And32, mkexpr(irt_res_diff), mkU32(0xFFFF) ) ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "uqasx%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ------------------- usax
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAE && (INSNT1(15,0) & 0xF0F0) == 0xF040) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_sum, binop( Iop_Add32, unop( Iop_16Uto32, unop( Iop_32to16, mkexpr(irt_regN) ) ), binop( Iop_Shr32, mkexpr(irt_regM), mkU8(16) ) ) ); assign( irt_diff, binop( Iop_Sub32, binop( Iop_Shr32, mkexpr(irt_regN), mkU8(16) ), unop( Iop_16Uto32, unop( Iop_32to16, mkexpr(irt_regM) ) ) ) ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_diff), mkU8(16) ), binop( Iop_And32, mkexpr(irt_sum), mkU32(0xFFFF) ) ); IRTemp ge10 = newTemp(Ity_I32); assign( ge10, IRExpr_ITE( binop( Iop_CmpLE32U, mkU32(0x10000), mkexpr(irt_sum) ), mkU32(1), mkU32(0) ) ); put_GEFLAG32( 0, 0, mkexpr(ge10), condT ); put_GEFLAG32( 1, 0, mkexpr(ge10), condT ); IRTemp ge32 = newTemp(Ity_I32); assign(ge32, unop(Iop_Not32, mkexpr(irt_diff))); put_GEFLAG32( 2, 31, mkexpr(ge32), condT ); put_GEFLAG32( 3, 31, mkexpr(ge32), condT ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "usax%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ------------------- uasx
<c> <Rd>,<Rn>,<Rm>
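   (GE semantics here: the difference Rn.lo - Rm.hi drives GE[1:0],
   set iff no borrow occurred, i.e. the 32-bit result is
   non-negative, which is why bit 31 of Not32(diff) is used; the sum
   Rn.hi + Rm.lo drives GE[3:2], set iff it carried, i.e. reached
   0x10000.)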
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAA && (INSNT1(15,0) & 0xF0F0) == 0xF040) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop( Iop_Sub32, unop( Iop_16Uto32, unop( Iop_32to16, mkexpr(irt_regN) ) ), binop( Iop_Shr32, mkexpr(irt_regM), mkU8(16) ) ) ); assign( irt_sum, binop( Iop_Add32, binop( Iop_Shr32, mkexpr(irt_regN), mkU8(16) ), unop( Iop_16Uto32, unop( Iop_32to16, mkexpr(irt_regM) ) ) ) ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_sum), mkU8(16) ), binop( Iop_And32, mkexpr(irt_diff), mkU32(0xFFFF) ) ); IRTemp ge10 = newTemp(Ity_I32); assign(ge10, unop(Iop_Not32, mkexpr(irt_diff))); put_GEFLAG32( 0, 31, mkexpr(ge10), condT ); put_GEFLAG32( 1, 31, mkexpr(ge10), condT ); IRTemp ge32 = newTemp(Ity_I32); assign( ge32, IRExpr_ITE( binop( Iop_CmpLE32U, mkU32(0x10000), mkexpr(irt_sum) ), mkU32(1), mkU32(0) ) ); put_GEFLAG32( 2, 0, mkexpr(ge32), condT ); put_GEFLAG32( 3, 0, mkexpr(ge32), condT ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "uasx%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ------------------- ssax
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAE && (INSNT1(15,0) & 0xF0F0) == 0xF000) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,0,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); IRTemp irt_regM = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_sum, binop( Iop_Add32, binop( Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regN), mkU8(16) ), mkU8(16) ), binop( Iop_Sar32, mkexpr(irt_regM), mkU8(16) ) ) ); assign( irt_diff, binop( Iop_Sub32, binop( Iop_Sar32, mkexpr(irt_regN), mkU8(16) ), binop( Iop_Sar32, binop( Iop_Shl32, mkexpr(irt_regM), mkU8(16) ), mkU8(16) ) ) ); IRExpr* ire_result = binop( Iop_Or32, binop( Iop_Shl32, mkexpr(irt_diff), mkU8(16) ), binop( Iop_And32, mkexpr(irt_sum), mkU32(0xFFFF) ) ); IRTemp ge10 = newTemp(Ity_I32); assign(ge10, unop(Iop_Not32, mkexpr(irt_sum))); put_GEFLAG32( 0, 31, mkexpr(ge10), condT ); put_GEFLAG32( 1, 31, mkexpr(ge10), condT ); IRTemp ge32 = newTemp(Ity_I32); assign(ge32, unop(Iop_Not32, mkexpr(irt_diff))); put_GEFLAG32( 2, 31, mkexpr(ge32), condT ); put_GEFLAG32( 3, 31, mkexpr(ge32), condT ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "ssax%s r%u, r%u, r%u\n", nCC(conq), regD, regN, regM ); return True; } /* fall through */ } /* ----------------- shsub8
<c> <Rd>,<Rn>,<Rm>
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAC && (INSNT1(15,0) & 0xF0F0) == 0xF020) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(1,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HSub8Sx4, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("shsub8%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- sxtab16
<c> Rd,Rn,Rm{,rot} ------------------ */ { UInt regD = 99, regN = 99, regM = 99, rotate = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFA2 && (INSNT1(15,0) & 0xF0C0) == 0xF080) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); rotate = INSNT1(5,4); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,1,0,0,0) && INSNA(9,4) == BITS6(0,0,0,1,1,1) ) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); rotate = INSNA(11,10); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp irt_regN = newTemp(Ity_I32); assign( irt_regN, isT ? getIRegT(regN) : getIRegA(regN) ); IRTemp irt_regM = newTemp(Ity_I32); assign( irt_regM, isT ? getIRegT(regM) : getIRegA(regM) ); IRTemp irt_rot = newTemp(Ity_I32); assign( irt_rot, genROR32(irt_regM, 8 * rotate) ); /* FIXME Maybe we can write this arithmetic in shorter form. */ IRExpr* resLo = binop(Iop_And32, binop(Iop_Add32, mkexpr(irt_regN), unop(Iop_16Uto32, unop(Iop_8Sto16, unop(Iop_32to8, mkexpr(irt_rot))))), mkU32(0x0000FFFF)); IRExpr* resHi = binop(Iop_And32, binop(Iop_Add32, mkexpr(irt_regN), binop(Iop_Shl32, unop(Iop_16Uto32, unop(Iop_8Sto16, unop(Iop_32to8, binop(Iop_Shr32, mkexpr(irt_rot), mkU8(16))))), mkU8(16))), mkU32(0xFFFF0000)); IRExpr* ire_result = binop( Iop_Or32, resHi, resLo ); if (isT) putIRegT( regD, ire_result, condT ); else putIRegA( regD, ire_result, condT, Ijk_Boring ); DIP( "sxtab16%s r%u, r%u, r%u, ROR #%u\n", nCC(conq), regD, regN, regM, 8 * rotate ); return True; } /* fall through */ } /* ----------------- shasx
<c> <Rd>,<Rn>,<Rm>
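(shasx: result.hi = (Rn.hi + Rm.lo) >> 1 and result.lo = (Rn.lo - Rm.hi) >> 1, with signed halfwords; halving operations write no GE flags.)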
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAA && (INSNT1(15,0) & 0xF0F0) == 0xF020) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop(Iop_Sub32, unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(rNt) ) ), unop(Iop_16Sto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rMt), mkU8(16) ) ) ) ) ); assign( irt_sum, binop(Iop_Add32, unop(Iop_16Sto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rNt), mkU8(16) ) ) ), unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(rMt) ) ) ) ); assign( res_q, binop(Iop_Or32, unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(irt_diff), mkU8(1) ) ) ), binop(Iop_Shl32, binop(Iop_Shr32, mkexpr(irt_sum), mkU8(1) ), mkU8(16) ) ) ); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("shasx%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- uhasx
<c> <Rd>,<Rn>,<Rm>
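(uhasx: the unsigned counterpart of shasx; result.hi = (Rn.hi + Rm.lo) >> 1 and result.lo = (Rn.lo - Rm.hi) >> 1 with unsigned halfwords.)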
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAA && (INSNT1(15,0) & 0xF0F0) == 0xF060) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,0,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_diff, binop(Iop_Sub32, unop(Iop_16Uto32, unop(Iop_32to16, mkexpr(rNt) ) ), unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rMt), mkU8(16) ) ) ) ) ); assign( irt_sum, binop(Iop_Add32, unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rNt), mkU8(16) ) ) ), unop(Iop_16Uto32, unop(Iop_32to16, mkexpr(rMt) ) ) ) ); assign( res_q, binop(Iop_Or32, unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(irt_diff), mkU8(1) ) ) ), binop(Iop_Shl32, binop(Iop_Shr32, mkexpr(irt_sum), mkU8(1) ), mkU8(16) ) ) ); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uhasx%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- shsax
<c> <Rd>,<Rn>,<Rm>
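(shsax: result.hi = (Rn.hi - Rm.lo) >> 1 and result.lo = (Rn.lo + Rm.hi) >> 1, with signed halfwords; no GE flags are written.)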
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAE && (INSNT1(15,0) & 0xF0F0) == 0xF020) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_sum, binop(Iop_Add32, unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(rNt) ) ), unop(Iop_16Sto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rMt), mkU8(16) ) ) ) ) ); assign( irt_diff, binop(Iop_Sub32, unop(Iop_16Sto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rNt), mkU8(16) ) ) ), unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(rMt) ) ) ) ); assign( res_q, binop(Iop_Or32, unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(irt_sum), mkU8(1) ) ) ), binop(Iop_Shl32, binop(Iop_Shr32, mkexpr(irt_diff), mkU8(1) ), mkU8(16) ) ) ); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("shsax%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- uhsax
<c> <Rd>,<Rn>,<Rm>
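(uhsax: as shsax, but the halfwords are treated as unsigned.)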
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAE && (INSNT1(15,0) & 0xF0F0) == 0xF060) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,1,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,0,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp irt_diff = newTemp(Ity_I32); IRTemp irt_sum = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign( irt_sum, binop(Iop_Add32, unop(Iop_16Uto32, unop(Iop_32to16, mkexpr(rNt) ) ), unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rMt), mkU8(16) ) ) ) ) ); assign( irt_diff, binop(Iop_Sub32, unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(rNt), mkU8(16) ) ) ), unop(Iop_16Uto32, unop(Iop_32to16, mkexpr(rMt) ) ) ) ); assign( res_q, binop(Iop_Or32, unop(Iop_16Uto32, unop(Iop_32to16, binop(Iop_Shr32, mkexpr(irt_sum), mkU8(1) ) ) ), binop(Iop_Shl32, binop(Iop_Shr32, mkexpr(irt_diff), mkU8(1) ), mkU8(16) ) ) ); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("uhsax%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- shsub16
<c> <Rd>,<Rn>,<Rm>
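(shsub16: each halfword of the result is (Rn.half - Rm.half) >> 1 in signed arithmetic, mapped directly onto Iop_HSub16Sx2; no GE flags are written.)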
------------------- */ { UInt regD = 99, regN = 99, regM = 99; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFAD && (INSNT1(15,0) & 0xF0F0) == 0xF020) { regN = INSNT0(3,0); regD = INSNT1(11,8); regM = INSNT1(3,0); if (!isBadRegT(regD) && !isBadRegT(regN) && !isBadRegT(regM)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,0,0,0,1,1) && INSNA(11,8) == BITS4(1,1,1,1) && INSNA(7,4) == BITS4(0,1,1,1)) { regD = INSNA(15,12); regN = INSNA(19,16); regM = INSNA(3,0); if (regD != 15 && regN != 15 && regM != 15) gate = True; } } if (gate) { IRTemp rNt = newTemp(Ity_I32); IRTemp rMt = newTemp(Ity_I32); IRTemp res_q = newTemp(Ity_I32); assign( rNt, isT ? getIRegT(regN) : getIRegA(regN) ); assign( rMt, isT ? getIRegT(regM) : getIRegA(regM) ); assign(res_q, binop(Iop_HSub16Sx2, mkexpr(rNt), mkexpr(rMt))); if (isT) putIRegT( regD, mkexpr(res_q), condT ); else putIRegA( regD, mkexpr(res_q), condT, Ijk_Boring ); DIP("shsub16%s r%u, r%u, r%u\n", nCC(conq),regD,regN,regM); return True; } /* fall through */ } /* ----------------- smmls{r}
<c> <Rd>,<Rn>,<Rm>,<Ra>
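(smmls: Rd = the top 32 bits of (Ra:0x00000000 - Rn * Rm), computed with 64-bit signed arithmetic; the 'r' variant adds 0x80000000 before taking the top half, so the result is rounded rather than truncated.)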
------------------- */ { UInt rD = 99, rN = 99, rM = 99, rA = 99; Bool round = False; Bool gate = False; if (isT) { if (INSNT0(15,7) == BITS9(1,1,1,1,1,0,1,1,0) && INSNT0(6,4) == BITS3(1,1,0) && INSNT1(7,5) == BITS3(0,0,0)) { round = INSNT1(4,4); rA = INSNT1(15,12); rD = INSNT1(11,8); rM = INSNT1(3,0); rN = INSNT0(3,0); if (!isBadRegT(rD) && !isBadRegT(rN) && !isBadRegT(rM) && !isBadRegT(rA)) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,1,0,1,0,1) && INSNA(15,12) != BITS4(1,1,1,1) && (INSNA(7,4) & BITS4(1,1,0,1)) == BITS4(1,1,0,1)) { round = INSNA(5,5); rD = INSNA(19,16); rA = INSNA(15,12); rM = INSNA(11,8); rN = INSNA(3,0); if (rD != 15 && rM != 15 && rN != 15) gate = True; } } if (gate) { IRTemp irt_rA = newTemp(Ity_I32); IRTemp irt_rN = newTemp(Ity_I32); IRTemp irt_rM = newTemp(Ity_I32); assign( irt_rA, isT ? getIRegT(rA) : getIRegA(rA) ); assign( irt_rN, isT ? getIRegT(rN) : getIRegA(rN) ); assign( irt_rM, isT ? getIRegT(rM) : getIRegA(rM) ); IRExpr* res = unop(Iop_64HIto32, binop(Iop_Add64, binop(Iop_Sub64, binop(Iop_32HLto64, mkexpr(irt_rA), mkU32(0)), binop(Iop_MullS32, mkexpr(irt_rN), mkexpr(irt_rM))), mkU64(round ? 0x80000000ULL : 0ULL))); if (isT) putIRegT( rD, res, condT ); else putIRegA(rD, res, condT, Ijk_Boring); DIP("smmls%s%s r%u, r%u, r%u, r%u\n", round ? "r" : "", nCC(conq), rD, rN, rM, rA); return True; } /* fall through */ } /* -------------- smlald{x}
<c> <RdLo>,<RdHi>,<Rn>,<Rm>
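(smlald: both 16x16 signed products, Rn.lo * Rm.lo and Rn.hi * Rm.hi, are added to the 64-bit accumulator RdHi:RdLo; the 'x' variant rotates Rm by 16 first, swapping its halves.)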
---------------- */ { UInt rN = 99, rDlo = 99, rDhi = 99, rM = 99; Bool m_swap = False; Bool gate = False; if (isT) { if (INSNT0(15,4) == 0xFBC && (INSNT1(7,4) & BITS4(1,1,1,0)) == BITS4(1,1,0,0)) { rN = INSNT0(3,0); rDlo = INSNT1(15,12); rDhi = INSNT1(11,8); rM = INSNT1(3,0); m_swap = (INSNT1(4,4) & 1) == 1; if (!isBadRegT(rDlo) && !isBadRegT(rDhi) && !isBadRegT(rN) && !isBadRegT(rM) && rDhi != rDlo) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,1,0,1,0,0) && (INSNA(7,4) & BITS4(1,1,0,1)) == BITS4(0,0,0,1)) { rN = INSNA(3,0); rDlo = INSNA(15,12); rDhi = INSNA(19,16); rM = INSNA(11,8); m_swap = ( INSNA(5,5) & 1 ) == 1; if (rDlo != 15 && rDhi != 15 && rN != 15 && rM != 15 && rDlo != rDhi) gate = True; } } if (gate) { IRTemp irt_rM = newTemp(Ity_I32); IRTemp irt_rN = newTemp(Ity_I32); IRTemp irt_rDhi = newTemp(Ity_I32); IRTemp irt_rDlo = newTemp(Ity_I32); IRTemp op_2 = newTemp(Ity_I32); IRTemp pr_1 = newTemp(Ity_I64); IRTemp pr_2 = newTemp(Ity_I64); IRTemp result = newTemp(Ity_I64); IRTemp resHi = newTemp(Ity_I32); IRTemp resLo = newTemp(Ity_I32); assign( irt_rM, isT ? getIRegT(rM) : getIRegA(rM)); assign( irt_rN, isT ? getIRegT(rN) : getIRegA(rN)); assign( irt_rDhi, isT ? getIRegT(rDhi) : getIRegA(rDhi)); assign( irt_rDlo, isT ? getIRegT(rDlo) : getIRegA(rDlo)); assign( op_2, genROR32(irt_rM, m_swap ? 16 : 0) ); assign( pr_1, binop(Iop_MullS32, unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(irt_rN)) ), unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(op_2)) ) ) ); assign( pr_2, binop(Iop_MullS32, binop(Iop_Sar32, mkexpr(irt_rN), mkU8(16)), binop(Iop_Sar32, mkexpr(op_2), mkU8(16)) ) ); assign( result, binop(Iop_Add64, binop(Iop_Add64, mkexpr(pr_1), mkexpr(pr_2) ), binop(Iop_32HLto64, mkexpr(irt_rDhi), mkexpr(irt_rDlo) ) ) ); assign( resHi, unop(Iop_64HIto32, mkexpr(result)) ); assign( resLo, unop(Iop_64to32, mkexpr(result)) ); if (isT) { putIRegT( rDhi, mkexpr(resHi), condT ); putIRegT( rDlo, mkexpr(resLo), condT ); } else { putIRegA( rDhi, mkexpr(resHi), condT, Ijk_Boring ); putIRegA( rDlo, mkexpr(resLo), condT, Ijk_Boring ); } DIP("smlald%c%s r%u, r%u, r%u, r%u\n", m_swap ? 'x' : ' ', nCC(conq), rDlo, rDhi, rN, rM); return True; } /* fall through */ } /* -------------- smlsld{x}
<c> <RdLo>,<RdHi>,<Rn>,<Rm>
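(smlsld: as smlald, except that the difference of the two products, lo*lo - hi*hi, is added to the 64-bit accumulator.)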
---------------- */ { UInt rN = 99, rDlo = 99, rDhi = 99, rM = 99; Bool m_swap = False; Bool gate = False; if (isT) { if ((INSNT0(15,4) == 0xFBD && (INSNT1(7,4) & BITS4(1,1,1,0)) == BITS4(1,1,0,0))) { rN = INSNT0(3,0); rDlo = INSNT1(15,12); rDhi = INSNT1(11,8); rM = INSNT1(3,0); m_swap = (INSNT1(4,4) & 1) == 1; if (!isBadRegT(rDlo) && !isBadRegT(rDhi) && !isBadRegT(rN) && !isBadRegT(rM) && rDhi != rDlo) gate = True; } } else { if (INSNA(27,20) == BITS8(0,1,1,1,0,1,0,0) && (INSNA(7,4) & BITS4(1,1,0,1)) == BITS4(0,1,0,1)) { rN = INSNA(3,0); rDlo = INSNA(15,12); rDhi = INSNA(19,16); rM = INSNA(11,8); m_swap = (INSNA(5,5) & 1) == 1; if (rDlo != 15 && rDhi != 15 && rN != 15 && rM != 15 && rDlo != rDhi) gate = True; } } if (gate) { IRTemp irt_rM = newTemp(Ity_I32); IRTemp irt_rN = newTemp(Ity_I32); IRTemp irt_rDhi = newTemp(Ity_I32); IRTemp irt_rDlo = newTemp(Ity_I32); IRTemp op_2 = newTemp(Ity_I32); IRTemp pr_1 = newTemp(Ity_I64); IRTemp pr_2 = newTemp(Ity_I64); IRTemp result = newTemp(Ity_I64); IRTemp resHi = newTemp(Ity_I32); IRTemp resLo = newTemp(Ity_I32); assign( irt_rM, isT ? getIRegT(rM) : getIRegA(rM) ); assign( irt_rN, isT ? getIRegT(rN) : getIRegA(rN) ); assign( irt_rDhi, isT ? getIRegT(rDhi) : getIRegA(rDhi) ); assign( irt_rDlo, isT ? getIRegT(rDlo) : getIRegA(rDlo) ); assign( op_2, genROR32(irt_rM, m_swap ? 16 : 0) ); assign( pr_1, binop(Iop_MullS32, unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(irt_rN)) ), unop(Iop_16Sto32, unop(Iop_32to16, mkexpr(op_2)) ) ) ); assign( pr_2, binop(Iop_MullS32, binop(Iop_Sar32, mkexpr(irt_rN), mkU8(16)), binop(Iop_Sar32, mkexpr(op_2), mkU8(16)) ) ); assign( result, binop(Iop_Add64, binop(Iop_Sub64, mkexpr(pr_1), mkexpr(pr_2) ), binop(Iop_32HLto64, mkexpr(irt_rDhi), mkexpr(irt_rDlo) ) ) ); assign( resHi, unop(Iop_64HIto32, mkexpr(result)) ); assign( resLo, unop(Iop_64to32, mkexpr(result)) ); if (isT) { putIRegT( rDhi, mkexpr(resHi), condT ); putIRegT( rDlo, mkexpr(resLo), condT ); } else { putIRegA( rDhi, mkexpr(resHi), condT, Ijk_Boring ); putIRegA( rDlo, mkexpr(resLo), condT, Ijk_Boring ); } DIP("smlsld%c%s r%u, r%u, r%u, r%u\n", m_swap ? 'x' : ' ', nCC(conq), rDlo, rDhi, rN, rM); return True; } /* fall through */ } /* ---------- Doesn't match anything. ---------- */ return False; # undef INSNA # undef INSNT0 # undef INSNT1 } /*------------------------------------------------------------*/ /*--- V8 instructions ---*/ /*------------------------------------------------------------*/ /* Break a V128-bit value up into four 32-bit ints. */ static void breakupV128to32s ( IRTemp t128, /*OUTs*/ IRTemp* t3, IRTemp* t2, IRTemp* t1, IRTemp* t0 ) { IRTemp hi64 = newTemp(Ity_I64); IRTemp lo64 = newTemp(Ity_I64); assign( hi64, unop(Iop_V128HIto64, mkexpr(t128)) ); assign( lo64, unop(Iop_V128to64, mkexpr(t128)) ); vassert(t0 && *t0 == IRTemp_INVALID); vassert(t1 && *t1 == IRTemp_INVALID); vassert(t2 && *t2 == IRTemp_INVALID); vassert(t3 && *t3 == IRTemp_INVALID); *t0 = newTemp(Ity_I32); *t1 = newTemp(Ity_I32); *t2 = newTemp(Ity_I32); *t3 = newTemp(Ity_I32); assign( *t0, unop(Iop_64to32, mkexpr(lo64)) ); assign( *t1, unop(Iop_64HIto32, mkexpr(lo64)) ); assign( *t2, unop(Iop_64to32, mkexpr(hi64)) ); assign( *t3, unop(Iop_64HIto32, mkexpr(hi64)) ); } /* Both ARM and Thumb */ /* Translate a V8 instruction. If successful, returns True and *dres may or may not be updated. If unsuccessful, returns False and doesn't change *dres nor create any IR. The Thumb and ARM encodings are potentially different. 
In both ARM and Thumb mode, the caller must pass the entire 32 bits of the instruction. Callers may pass any instruction; this function ignores anything it doesn't recognise. Caller must supply an IRTemp 'condT' holding the gating condition, or IRTemp_INVALID indicating the insn is always executed. If we are decoding an ARM instruction which is in the NV space then it is expected that condT will be IRTemp_INVALID, and that is asserted for. That condition is ensured by the logic near the top of disInstr_ARM_WRK, that sets up condT. When decoding for Thumb, the caller must pass the ITState pre/post this instruction, so that we can generate a SIGILL in the cases where the instruction may not be in an IT block. When decoding for ARM, both of these must be IRTemp_INVALID. Finally, the caller must indicate whether this occurs in ARM or in Thumb code. */ static Bool decode_V8_instruction ( /*MOD*/DisResult* dres, UInt insnv8, IRTemp condT, Bool isT, IRTemp old_itstate, IRTemp new_itstate ) { # define INSN(_bMax,_bMin) SLICE_UInt(insnv8, (_bMax), (_bMin)) if (isT) { vassert(old_itstate != IRTemp_INVALID); vassert(new_itstate != IRTemp_INVALID); } else { vassert(old_itstate == IRTemp_INVALID); vassert(new_itstate == IRTemp_INVALID); } /* ARMCondcode 'conq' is only used for debug printing and for no other purpose. For ARM, this is simply the top 4 bits of the instruction. For Thumb, the condition is not (really) known until run time, and so we set it to ARMCondAL in order that printing of these instructions does not show any condition. */ ARMCondcode conq; if (isT) { conq = ARMCondAL; } else { conq = (ARMCondcode)INSN(31,28); if (conq == ARMCondNV || conq == ARMCondAL) { vassert(condT == IRTemp_INVALID); } else { vassert(condT != IRTemp_INVALID); } vassert(conq >= ARMCondEQ && conq <= ARMCondNV); } /* ----------- {AESD, AESE, AESMC, AESIMC}.8 q_q ----------- */ /* 31 27 23 21 19 17 15 11 7 3 T1: 1111 1111 1 D 11 sz 00 d 0011 00 M 0 m AESE Qd, Qm A1: 1111 0011 1 D 11 sz 00 d 0011 00 M 0 m AESE Qd, Qm T1: 1111 1111 1 D 11 sz 00 d 0011 01 M 0 m AESD Qd, Qm A1: 1111 0011 1 D 11 sz 00 d 0011 01 M 0 m AESD Qd, Qm T1: 1111 1111 1 D 11 sz 00 d 0011 10 M 0 m AESMC Qd, Qm A1: 1111 0011 1 D 11 sz 00 d 0011 10 M 0 m AESMC Qd, Qm T1: 1111 1111 1 D 11 sz 00 d 0011 11 M 0 m AESIMC Qd, Qm A1: 1111 0011 1 D 11 sz 00 d 0011 11 M 0 m AESIMC Qd, Qm sz must be 00 ARM encoding is in NV space. In Thumb mode, we must not be in an IT block. */ { UInt regD = 99, regM = 99, opc = 4/*invalid*/; Bool gate = True; UInt high9 = isT ? BITS9(1,1,1,1,1,1,1,1,1) : BITS9(1,1,1,1,0,0,1,1,1); if (INSN(31,23) == high9 && INSN(21,16) == BITS6(1,1,0,0,0,0) && INSN(11,8) == BITS4(0,0,1,1) && INSN(4,4) == 0) { UInt bitD = INSN(22,22); UInt fldD = INSN(15,12); UInt bitM = INSN(5,5); UInt fldM = INSN(3,0); opc = INSN(7,6); regD = (bitD << 4) | fldD; regM = (bitM << 4) | fldM; } if ((regD & 1) == 1 || (regM & 1) == 1) gate = False; if (gate) { if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } /* In ARM mode, this is statically unconditional. In Thumb mode, this must be dynamically unconditional, and we've SIGILLd if not. In either case we can create unconditional IR. */ IRTemp op1 = newTemp(Ity_V128); IRTemp op2 = newTemp(Ity_V128); IRTemp src = newTemp(Ity_V128); IRTemp res = newTemp(Ity_V128); assign(op1, getQReg(regD >> 1)); assign(op2, getQReg(regM >> 1)); assign(src, opc == BITS2(0,0) || opc == BITS2(0,1) ? 
binop(Iop_XorV128, mkexpr(op1), mkexpr(op2)) : mkexpr(op2)); void* helpers[4] = { &armg_dirtyhelper_AESE, &armg_dirtyhelper_AESD, &armg_dirtyhelper_AESMC, &armg_dirtyhelper_AESIMC }; const HChar* hNames[4] = { "armg_dirtyhelper_AESE", "armg_dirtyhelper_AESD", "armg_dirtyhelper_AESMC", "armg_dirtyhelper_AESIMC" }; const HChar* iNames[4] = { "aese", "aesd", "aesmc", "aesimc" }; vassert(opc >= 0 && opc <= 3); void* helper = helpers[opc]; const HChar* hname = hNames[opc]; IRTemp w32_3, w32_2, w32_1, w32_0; w32_3 = w32_2 = w32_1 = w32_0 = IRTemp_INVALID; breakupV128to32s( src, &w32_3, &w32_2, &w32_1, &w32_0 ); IRDirty* di = unsafeIRDirty_1_N( res, 0/*regparms*/, hname, helper, mkIRExprVec_5( IRExpr_VECRET(), mkexpr(w32_3), mkexpr(w32_2), mkexpr(w32_1), mkexpr(w32_0)) ); stmt(IRStmt_Dirty(di)); putQReg(regD >> 1, mkexpr(res), IRTemp_INVALID); DIP("%s.8 q%d, q%d\n", iNames[opc], regD >> 1, regM >> 1); return True; } /* fall through */ } /* ----------- SHA 3-reg insns q_q_q ----------- */ /* 31 27 23 19 15 11 7 3 T1: 1110 1111 0 D 00 n d 1100 N Q M 0 m SHA1C Qd, Qn, Qm ix=0 A1: 1111 0010 ---------------------------- T1: 1110 1111 0 D 01 n d 1100 N Q M 0 m SHA1P Qd, Qn, Qm ix=1 A1: 1111 0010 ---------------------------- T1: 1110 1111 0 D 10 n d 1100 N Q M 0 m SHA1M Qd, Qn, Qm ix=2 A1: 1111 0010 ---------------------------- T1: 1110 1111 0 D 11 n d 1100 N Q M 0 m SHA1SU0 Qd, Qn, Qm ix=3 A1: 1111 0010 ---------------------------- (that's a complete set of 4, based on insn[21,20]) T1: 1111 1111 0 D 00 n d 1100 N Q M 0 m SHA256H Qd, Qn, Qm ix=4 A1: 1111 0011 ---------------------------- T1: 1111 1111 0 D 01 n d 1100 N Q M 0 m SHA256H2 Qd, Qn, Qm ix=5 A1: 1111 0011 ---------------------------- T1: 1111 1111 0 D 10 n d 1100 N Q M 0 m SHA256SU1 Qd, Qn, Qm ix=6 A1: 1111 0011 ---------------------------- (3/4 of a complete set of 4, based on insn[21,20]) Q must be 1. Same comments about conditionalisation as for the AES group above apply. */ { UInt ix = 8; /* invalid */ Bool gate = False; UInt hi9_sha1 = isT ? BITS9(1,1,1,0,1,1,1,1,0) : BITS9(1,1,1,1,0,0,1,0,0); UInt hi9_sha256 = isT ? BITS9(1,1,1,1,1,1,1,1,0) : BITS9(1,1,1,1,0,0,1,1,0); if ((INSN(31,23) == hi9_sha1 || INSN(31,23) == hi9_sha256) && INSN(11,8) == BITS4(1,1,0,0) && INSN(6,6) == 1 && INSN(4,4) == 0) { ix = INSN(21,20); if (INSN(31,23) == hi9_sha256) ix |= 4; if (ix < 7) gate = True; } UInt regN = (INSN(7,7) << 4) | INSN(19,16); UInt regD = (INSN(22,22) << 4) | INSN(15,12); UInt regM = (INSN(5,5) << 4) | INSN(3,0); if ((regD & 1) == 1 || (regM & 1) == 1 || (regN & 1) == 1) gate = False; if (gate) { vassert(ix >= 0 && ix < 7); const HChar* inames[7] = { "sha1c", "sha1p", "sha1m", "sha1su0", "sha256h", "sha256h2", "sha256su1" }; void(*helpers[7])(V128*,UInt,UInt,UInt,UInt,UInt,UInt, UInt,UInt,UInt,UInt,UInt,UInt) = { &armg_dirtyhelper_SHA1C, &armg_dirtyhelper_SHA1P, &armg_dirtyhelper_SHA1M, &armg_dirtyhelper_SHA1SU0, &armg_dirtyhelper_SHA256H, &armg_dirtyhelper_SHA256H2, &armg_dirtyhelper_SHA256SU1 }; const HChar* hnames[7] = { "armg_dirtyhelper_SHA1C", "armg_dirtyhelper_SHA1P", "armg_dirtyhelper_SHA1M", "armg_dirtyhelper_SHA1SU0", "armg_dirtyhelper_SHA256H", "armg_dirtyhelper_SHA256H2", "armg_dirtyhelper_SHA256SU1" }; /* This is a really lame way to implement this, even worse than the arm64 version. But at least it works. 
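(The three 128-bit operands are handed to the helper as twelve 32-bit chunks produced by breakupV128to32s, and the result comes back through IRExpr_VECRET -- presumably because the dirty-call convention used here only passes word-sized arguments.)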
*/ if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } IRTemp vD = newTemp(Ity_V128); IRTemp vN = newTemp(Ity_V128); IRTemp vM = newTemp(Ity_V128); assign(vD, getQReg(regD >> 1)); assign(vN, getQReg(regN >> 1)); assign(vM, getQReg(regM >> 1)); IRTemp d32_3, d32_2, d32_1, d32_0; d32_3 = d32_2 = d32_1 = d32_0 = IRTemp_INVALID; breakupV128to32s( vD, &d32_3, &d32_2, &d32_1, &d32_0 ); IRTemp n32_3_pre, n32_2_pre, n32_1_pre, n32_0_pre; n32_3_pre = n32_2_pre = n32_1_pre = n32_0_pre = IRTemp_INVALID; breakupV128to32s( vN, &n32_3_pre, &n32_2_pre, &n32_1_pre, &n32_0_pre ); IRTemp m32_3, m32_2, m32_1, m32_0; m32_3 = m32_2 = m32_1 = m32_0 = IRTemp_INVALID; breakupV128to32s( vM, &m32_3, &m32_2, &m32_1, &m32_0 ); IRTemp n32_3 = newTemp(Ity_I32); IRTemp n32_2 = newTemp(Ity_I32); IRTemp n32_1 = newTemp(Ity_I32); IRTemp n32_0 = newTemp(Ity_I32); /* Mask off any bits of the N register operand that aren't actually needed, so that Memcheck doesn't complain unnecessarily. */ switch (ix) { case 0: case 1: case 2: assign(n32_3, mkU32(0)); assign(n32_2, mkU32(0)); assign(n32_1, mkU32(0)); assign(n32_0, mkexpr(n32_0_pre)); break; case 3: case 4: case 5: case 6: assign(n32_3, mkexpr(n32_3_pre)); assign(n32_2, mkexpr(n32_2_pre)); assign(n32_1, mkexpr(n32_1_pre)); assign(n32_0, mkexpr(n32_0_pre)); break; default: vassert(0); } IRExpr** argvec = mkIRExprVec_13( IRExpr_VECRET(), mkexpr(d32_3), mkexpr(d32_2), mkexpr(d32_1), mkexpr(d32_0), mkexpr(n32_3), mkexpr(n32_2), mkexpr(n32_1), mkexpr(n32_0), mkexpr(m32_3), mkexpr(m32_2), mkexpr(m32_1), mkexpr(m32_0) ); IRTemp res = newTemp(Ity_V128); IRDirty* di = unsafeIRDirty_1_N( res, 0/*regparms*/, hnames[ix], helpers[ix], argvec ); stmt(IRStmt_Dirty(di)); putQReg(regD >> 1, mkexpr(res), IRTemp_INVALID); DIP("%s.8 q%u, q%u, q%u\n", inames[ix], regD >> 1, regN >> 1, regM >> 1); return True; } /* fall through */ } /* ----------- SHA1SU1, SHA256SU0 ----------- */ /* 31 27 23 21 19 15 11 7 3 T1: 1111 1111 1 D 11 1010 d 0011 10 M 0 m SHA1SU1 Qd, Qm A1: 1111 0011 ---------------------------- T1: 1111 1111 1 D 11 1010 d 0011 11 M 0 m SHA256SU0 Qd, Qm A1: 1111 0011 ---------------------------- Same comments about conditionalisation as for the AES group above apply. */ { Bool gate = False; UInt hi9 = isT ? BITS9(1,1,1,1,1,1,1,1,1) : BITS9(1,1,1,1,0,0,1,1,1); if (INSN(31,23) == hi9 && INSN(21,16) == BITS6(1,1,1,0,1,0) && INSN(11,7) == BITS5(0,0,1,1,1) && INSN(4,4) == 0) { gate = True; } UInt regD = (INSN(22,22) << 4) | INSN(15,12); UInt regM = (INSN(5,5) << 4) | INSN(3,0); if ((regD & 1) == 1 || (regM & 1) == 1) gate = False; Bool is_1SU1 = INSN(6,6) == 0; if (gate) { const HChar* iname = is_1SU1 ? "sha1su1" : "sha256su0"; void (*helper)(V128*,UInt,UInt,UInt,UInt,UInt,UInt,UInt,UInt) = is_1SU1 ? &armg_dirtyhelper_SHA1SU1 : *armg_dirtyhelper_SHA256SU0; const HChar* hname = is_1SU1 ? 
"armg_dirtyhelper_SHA1SU1" : "armg_dirtyhelper_SHA256SU0"; if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } IRTemp vD = newTemp(Ity_V128); IRTemp vM = newTemp(Ity_V128); assign(vD, getQReg(regD >> 1)); assign(vM, getQReg(regM >> 1)); IRTemp d32_3, d32_2, d32_1, d32_0; d32_3 = d32_2 = d32_1 = d32_0 = IRTemp_INVALID; breakupV128to32s( vD, &d32_3, &d32_2, &d32_1, &d32_0 ); IRTemp m32_3, m32_2, m32_1, m32_0; m32_3 = m32_2 = m32_1 = m32_0 = IRTemp_INVALID; breakupV128to32s( vM, &m32_3, &m32_2, &m32_1, &m32_0 ); IRExpr** argvec = mkIRExprVec_9( IRExpr_VECRET(), mkexpr(d32_3), mkexpr(d32_2), mkexpr(d32_1), mkexpr(d32_0), mkexpr(m32_3), mkexpr(m32_2), mkexpr(m32_1), mkexpr(m32_0) ); IRTemp res = newTemp(Ity_V128); IRDirty* di = unsafeIRDirty_1_N( res, 0/*regparms*/, hname, helper, argvec ); stmt(IRStmt_Dirty(di)); putQReg(regD >> 1, mkexpr(res), IRTemp_INVALID); DIP("%s.8 q%u, q%u\n", iname, regD >> 1, regM >> 1); return True; } /* fall through */ } /* ----------- SHA1H ----------- */ /* 31 27 23 21 19 15 11 7 3 T1: 1111 1111 1 D 11 1001 d 0010 11 M 0 m SHA1H Qd, Qm A1: 1111 0011 ---------------------------- Same comments about conditionalisation as for the AES group above apply. */ { Bool gate = False; UInt hi9 = isT ? BITS9(1,1,1,1,1,1,1,1,1) : BITS9(1,1,1,1,0,0,1,1,1); if (INSN(31,23) == hi9 && INSN(21,16) == BITS6(1,1,1,0,0,1) && INSN(11,6) == BITS6(0,0,1,0,1,1) && INSN(4,4) == 0) { gate = True; } UInt regD = (INSN(22,22) << 4) | INSN(15,12); UInt regM = (INSN(5,5) << 4) | INSN(3,0); if ((regD & 1) == 1 || (regM & 1) == 1) gate = False; if (gate) { const HChar* iname = "sha1h"; void (*helper)(V128*,UInt,UInt,UInt,UInt) = &armg_dirtyhelper_SHA1H; const HChar* hname = "armg_dirtyhelper_SHA1H"; if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } IRTemp vM = newTemp(Ity_V128); assign(vM, getQReg(regM >> 1)); IRTemp m32_3, m32_2, m32_1, m32_0; m32_3 = m32_2 = m32_1 = m32_0 = IRTemp_INVALID; breakupV128to32s( vM, &m32_3, &m32_2, &m32_1, &m32_0 ); /* m32_3, m32_2, m32_1 are just abandoned. No harm; iropt will remove them. */ IRExpr* zero = mkU32(0); IRExpr** argvec = mkIRExprVec_5(IRExpr_VECRET(), zero, zero, zero, mkexpr(m32_0)); IRTemp res = newTemp(Ity_V128); IRDirty* di = unsafeIRDirty_1_N( res, 0/*regparms*/, hname, helper, argvec ); stmt(IRStmt_Dirty(di)); putQReg(regD >> 1, mkexpr(res), IRTemp_INVALID); DIP("%s.8 q%u, q%u\n", iname, regD >> 1, regM >> 1); return True; } /* fall through */ } /* ----------- VMULL.P64 ----------- */ /* 31 27 23 21 19 15 11 7 3 T2: 1110 1111 1 D 10 n d 1110 N 0 M 0 m A2: 1111 0010 ------------------------- The ARM documentation is pretty difficult to follow here. Same comments about conditionalisation as for the AES group above apply. */ { Bool gate = False; UInt hi9 = isT ? 
BITS9(1,1,1,0,1,1,1,1,1) : BITS9(1,1,1,1,0,0,1,0,1); if (INSN(31,23) == hi9 && INSN(21,20) == BITS2(1,0) && INSN(11,8) == BITS4(1,1,1,0) && INSN(6,6) == 0 && INSN(4,4) == 0) { gate = True; } UInt regN = (INSN(7,7) << 4) | INSN(19,16); UInt regD = (INSN(22,22) << 4) | INSN(15,12); UInt regM = (INSN(5,5) << 4) | INSN(3,0); if ((regD & 1) == 1) gate = False; if (gate) { const HChar* iname = "vmull"; void (*helper)(V128*,UInt,UInt,UInt,UInt) = &armg_dirtyhelper_VMULLP64; const HChar* hname = "armg_dirtyhelper_VMULLP64"; if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } IRTemp srcN = newTemp(Ity_I64); IRTemp srcM = newTemp(Ity_I64); assign(srcN, getDRegI64(regN)); assign(srcM, getDRegI64(regM)); IRExpr** argvec = mkIRExprVec_5(IRExpr_VECRET(), unop(Iop_64HIto32, mkexpr(srcN)), unop(Iop_64to32, mkexpr(srcN)), unop(Iop_64HIto32, mkexpr(srcM)), unop(Iop_64to32, mkexpr(srcM))); IRTemp res = newTemp(Ity_V128); IRDirty* di = unsafeIRDirty_1_N( res, 0/*regparms*/, hname, helper, argvec ); stmt(IRStmt_Dirty(di)); putQReg(regD >> 1, mkexpr(res), IRTemp_INVALID); DIP("%s.p64 q%u, d%u, d%u\n", iname, regD >> 1, regN, regM); return True; } /* fall through */ } /* ----------- LDA{,B,H}, STL{,B,H} ----------- */ /* 31 27 23 19 15 11 7 3 A1: cond 0001 1001 n t 1100 1001 1111 LDA Rt, [Rn] A1: cond 0001 1111 n t 1100 1001 1111 LDAH Rt, [Rn] A1: cond 0001 1101 n t 1100 1001 1111 LDAB Rt, [Rn] A1: cond 0001 1000 n 1111 1100 1001 t STL Rt, [Rn] A1: cond 0001 1110 n 1111 1100 1001 t STLH Rt, [Rn] A1: cond 0001 1100 n 1111 1100 1001 t STLB Rt, [Rn] T1: 1110 1000 1101 n t 1111 1010 1111 LDA Rt, [Rn] T1: 1110 1000 1101 n t 1111 1001 1111 LDAH Rt, [Rn] T1: 1110 1000 1101 n t 1111 1000 1111 LDAB Rt, [Rn] T1: 1110 1000 1100 n t 1111 1010 1111 STL Rt, [Rn] T1: 1110 1000 1100 n t 1111 1001 1111 STLH Rt, [Rn] T1: 1110 1000 1100 n t 1111 1000 1111 STLB Rt, [Rn] */ { UInt nn = 16; // invalid UInt tt = 16; // invalid UInt szBlg2 = 4; // invalid Bool isLoad = False; Bool gate = False; if (isT) { if (INSN(31,21) == BITS11(1,1,1,0,1,0,0,0,1,1,0) && INSN(11,6) == BITS6(1,1,1,1,1,0) && INSN(3,0) == BITS4(1,1,1,1)) { nn = INSN(19,16); tt = INSN(15,12); isLoad = INSN(20,20) == 1; szBlg2 = INSN(5,4); // 00:B 01:H 10:W 11:invalid gate = szBlg2 != BITS2(1,1) && tt != 15 && nn != 15; } } else { if (INSN(27,23) == BITS5(0,0,0,1,1) && INSN(20,20) == 1 && INSN(11,0) == BITS12(1,1,0,0,1,0,0,1,1,1,1,1)) { nn = INSN(19,16); tt = INSN(15,12); isLoad = True; szBlg2 = INSN(22,21); // 10:B 11:H 00:W 01:invalid gate = szBlg2 != BITS2(0,1) && tt != 15 && nn != 15; } else if (INSN(27,23) == BITS5(0,0,0,1,1) && INSN(20,20) == 0 && INSN(15,4) == BITS12(1,1,1,1,1,1,0,0,1,0,0,1)) { nn = INSN(19,16); tt = INSN(3,0); isLoad = False; szBlg2 = INSN(22,21); // 10:B 11:H 00:W 01:invalid gate = szBlg2 != BITS2(0,1) && tt != 15 && nn != 15; } if (gate) { // Rearrange szBlg2 bits to be the same as the Thumb case switch (szBlg2) { case 2: szBlg2 = 0; break; case 3: szBlg2 = 1; break; case 0: szBlg2 = 2; break; default: /*NOTREACHED*/vassert(0); } } } // For both encodings, the instruction is guarded by condT, which // is passed in by the caller. Note that the loads and stores // are conditional, so we don't have to truncate the IRSB at this // point, but the fence is unconditional. There's no way to // represent a conditional fence without a side exit, but it // doesn't matter from a correctness standpoint that it is // unconditional -- it just loses a bit of performance in the // case where the condition doesn't hold.
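// Illustrative sketch, not verbatim IR: for "LDA r1, [r2]" the code
// below emits a guarded load and then the fence,
//    t1 = if (condT) LDle:I32(r2) else 0 ; r1 = t1 (guarded) ; fence
// while for "STL r1, [r2]" it emits the fence first and then a guarded
// store of r1.  Acquire/release ordering is thus approximated by a
// full Imbe_Fence placed after a load and before a store.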
if (gate) { vassert(szBlg2 <= 2 && nn <= 14 && tt <= 14); IRExpr* ea = llGetIReg(nn); if (isLoad) { static IRLoadGOp cvt[3] = { ILGop_8Uto32, ILGop_16Uto32, ILGop_Ident32 }; IRTemp data = newTemp(Ity_I32); loadGuardedLE(data, cvt[szBlg2], ea, mkU32(0)/*alt*/, condT); if (isT) { putIRegT(tt, mkexpr(data), condT); } else { putIRegA(tt, mkexpr(data), condT, Ijk_INVALID); } stmt(IRStmt_MBE(Imbe_Fence)); } else { stmt(IRStmt_MBE(Imbe_Fence)); IRExpr* data = llGetIReg(tt); switch (szBlg2) { case 0: data = unop(Iop_32to8, data); break; case 1: data = unop(Iop_32to16, data); break; case 2: break; default: vassert(0); } storeGuardedLE(ea, data, condT); } const HChar* ldNames[3] = { "ldab", "ldah", "lda" }; const HChar* stNames[3] = { "stlb", "stlh", "stl" }; DIP("%s r%u, [r%u]", (isLoad ? ldNames : stNames)[szBlg2], tt, nn); return True; } /* else fall through */ } /* ----------- LDAEX{,B,H,D}, STLEX{,B,H,D} ----------- */ /* 31 27 23 19 15 11 7 3 A1: cond 0001 1101 n t 1110 1001 1111 LDAEXB Rt, [Rn] A1: cond 0001 1111 n t 1110 1001 1111 LDAEXH Rt, [Rn] A1: cond 0001 1001 n t 1110 1001 1111 LDAEX Rt, [Rn] A1: cond 0001 1011 n t 1110 1001 1111 LDAEXD Rt, Rt+1, [Rn] A1: cond 0001 1100 n d 1110 1001 t STLEXB Rd, Rt, [Rn] A1: cond 0001 1110 n d 1110 1001 t STLEXH Rd, Rt, [Rn] A1: cond 0001 1000 n d 1110 1001 t STLEX Rd, Rt, [Rn] A1: cond 0001 1010 n d 1110 1001 t STLEXD Rd, Rt, Rt+1, [Rn] 31 28 24 19 15 11 7 3 T1: 111 0100 01101 n t 1111 1100 1111 LDAEXB Rt, [Rn] T1: 111 0100 01101 n t 1111 1101 1111 LDAEXH Rt, [Rn] T1: 111 0100 01101 n t 1111 1110 1111 LDAEX Rt, [Rn] T1: 111 0100 01101 n t t2 1111 1111 LDAEXD Rt, Rt2, [Rn] T1: 111 0100 01100 n t 1111 1100 d STLEXB Rd, Rt, [Rn] T1: 111 0100 01100 n t 1111 1101 d STLEXH Rd, Rt, [Rn] T1: 111 0100 01100 n t 1111 1110 d STLEX Rd, Rt, [Rn] T1: 111 0100 01100 n t t2 1111 d STLEXD Rd, Rt, Rt2, [Rn] */ { UInt nn = 16; // invalid UInt tt = 16; // invalid UInt tt2 = 16; // invalid UInt dd = 16; // invalid UInt szBlg2 = 4; // invalid Bool isLoad = False; Bool gate = False; if (isT) { if (INSN(31,21) == BITS11(1,1,1,0,1,0,0,0,1,1,0) && INSN(7,6) == BITS2(1,1)) { isLoad = INSN(20,20) == 1; nn = INSN(19,16); tt = INSN(15,12); tt2 = INSN(11,8); szBlg2 = INSN(5,4); dd = INSN(3,0); gate = True; if (szBlg2 < BITS2(1,1) && tt2 != BITS4(1,1,1,1)) gate = False; if (isLoad && dd != BITS4(1,1,1,1)) gate = False; // re-set not-used register values to invalid if (szBlg2 < BITS2(1,1)) tt2 = 16; if (isLoad) dd = 16; } } else { /* ARM encoding. Do the load and store cases separately as the register numbers are in different places and a combined decode is too confusing. */ if (INSN(27,23) == BITS5(0,0,0,1,1) && INSN(20,20) == 1 && INSN(11,0) == BITS12(1,1,1,0,1,0,0,1,1,1,1,1)) { szBlg2 = INSN(22,21); isLoad = True; nn = INSN(19,16); tt = INSN(15,12); gate = True; } else if (INSN(27,23) == BITS5(0,0,0,1,1) && INSN(20,20) == 0 && INSN(11,4) == BITS8(1,1,1,0,1,0,0,1)) { szBlg2 = INSN(22,21); isLoad = False; nn = INSN(19,16); dd = INSN(15,12); tt = INSN(3,0); gate = True; } if (gate) { // Rearrange szBlg2 bits to be the same as the Thumb case switch (szBlg2) { case 2: szBlg2 = 0; break; case 3: szBlg2 = 1; break; case 0: szBlg2 = 2; break; case 1: szBlg2 = 3; break; default: /*NOTREACHED*/vassert(0); } } } // Perform further checks on register numbers if (gate) { /**/ if (isT && isLoad) { // Thumb load if (szBlg2 < 3) { if (! (tt != 13 && tt != 15 && nn != 15)) gate = False; } else { if (! 
(tt != 13 && tt != 15 && tt2 != 13 && tt2 != 15 && tt != tt2 && nn != 15)) gate = False; } } else if (isT && !isLoad) { // Thumb store if (szBlg2 < 3) { if (! (dd != 13 && dd != 15 && tt != 13 && tt != 15 && nn != 15 && dd != nn && dd != tt)) gate = False; } else { if (! (dd != 13 && dd != 15 && tt != 13 && tt != 15 && tt2 != 13 && tt2 != 15 && nn != 15 && dd != nn && dd != tt && dd != tt2)) gate = False; } } else if (!isT && isLoad) { // ARM Load if (szBlg2 < 3) { if (! (tt != 15 && nn != 15)) gate = False; } else { if (! ((tt & 1) == 0 && tt != 14 && nn != 15)) gate = False; vassert(tt2 == 16/*invalid*/); tt2 = tt + 1; } } else if (!isT && !isLoad) { // ARM Store if (szBlg2 < 3) { if (! (dd != 15 && tt != 15 && nn != 15 && dd != nn && dd != tt)) gate = False; } else { if (! (dd != 15 && (tt & 1) == 0 && tt != 14 && nn != 15 && dd != nn && dd != tt && dd != tt+1)) gate = False; vassert(tt2 == 16/*invalid*/); tt2 = tt + 1; } } else /*NOTREACHED*/vassert(0); } if (gate) { // Paranoia .. vassert(szBlg2 <= 3); if (szBlg2 < 3) { vassert(tt2 == 16/*invalid*/); } else { vassert(tt2 <= 14); } if (isLoad) { vassert(dd == 16/*invalid*/); } else { vassert(dd <= 14); } } // If we're still good even after all that, generate the IR. if (gate) { /* First, go unconditional. Staying in-line is too complex. */ if (isT) { vassert(condT != IRTemp_INVALID); mk_skip_over_T32_if_cond_is_false( condT ); } else { if (condT != IRTemp_INVALID) { mk_skip_over_A32_if_cond_is_false( condT ); condT = IRTemp_INVALID; } } /* Now the load or store. */ IRType ty = Ity_INVALID; /* the type of the transferred data */ const HChar* nm = NULL; switch (szBlg2) { case 0: nm = "b"; ty = Ity_I8; break; case 1: nm = "h"; ty = Ity_I16; break; case 2: nm = ""; ty = Ity_I32; break; case 3: nm = "d"; ty = Ity_I64; break; default: vassert(0); } IRExpr* ea = isT ? getIRegT(nn) : getIRegA(nn); if (isLoad) { // LOAD. Transaction, then fence. IROp widen = Iop_INVALID; switch (szBlg2) { case 0: widen = Iop_8Uto32; break; case 1: widen = Iop_16Uto32; break; case 2: case 3: break; default: vassert(0); } IRTemp res = newTemp(ty); // FIXME: assumes little-endian guest stmt( IRStmt_LLSC(Iend_LE, res, ea, NULL/*this is a load*/) ); # define PUT_IREG(_nnz, _eez) \ do { vassert((_nnz) <= 14); /* no writes to the PC */ \ if (isT) { putIRegT((_nnz), (_eez), IRTemp_INVALID); } \ else { putIRegA((_nnz), (_eez), \ IRTemp_INVALID, Ijk_Boring); } } while(0) if (ty == Ity_I64) { // FIXME: assumes little-endian guest PUT_IREG(tt, unop(Iop_64to32, mkexpr(res))); PUT_IREG(tt2, unop(Iop_64HIto32, mkexpr(res))); } else { PUT_IREG(tt, widen == Iop_INVALID ? mkexpr(res) : unop(widen, mkexpr(res))); } stmt(IRStmt_MBE(Imbe_Fence)); if (ty == Ity_I64) { DIP("ldrex%s%s r%u, r%u, [r%u]\n", nm, isT ? "" : nCC(conq), tt, tt2, nn); } else { DIP("ldrex%s%s r%u, [r%u]\n", nm, isT ? "" : nCC(conq), tt, nn); } # undef PUT_IREG } else { // STORE. Fence, then transaction. IRTemp resSC1, resSC32, data; IROp narrow = Iop_INVALID; switch (szBlg2) { case 0: narrow = Iop_32to8; break; case 1: narrow = Iop_32to16; break; case 2: case 3: break; default: vassert(0); } stmt(IRStmt_MBE(Imbe_Fence)); data = newTemp(ty); # define GET_IREG(_nnz) (isT ? getIRegT(_nnz) : getIRegA(_nnz)) assign(data, ty == Ity_I64 // FIXME: assumes little-endian guest ? binop(Iop_32HLto64, GET_IREG(tt2), GET_IREG(tt)) : narrow == Iop_INVALID ? 
GET_IREG(tt) : unop(narrow, GET_IREG(tt))); # undef GET_IREG resSC1 = newTemp(Ity_I1); // FIXME: assumes little-endian guest stmt( IRStmt_LLSC(Iend_LE, resSC1, ea, mkexpr(data)) ); /* Set rDD to 1 on failure, 0 on success. Currently we have resSC1 == 0 on failure, 1 on success. */ resSC32 = newTemp(Ity_I32); assign(resSC32, unop(Iop_1Uto32, unop(Iop_Not1, mkexpr(resSC1)))); vassert(dd <= 14); /* no writes to the PC */ if (isT) { putIRegT(dd, mkexpr(resSC32), IRTemp_INVALID); } else { putIRegA(dd, mkexpr(resSC32), IRTemp_INVALID, Ijk_Boring); } if (ty == Ity_I64) { DIP("strex%s%s r%u, r%u, r%u, [r%u]\n", nm, isT ? "" : nCC(conq), dd, tt, tt2, nn); } else { DIP("strex%s%s r%u, r%u, [r%u]\n", nm, isT ? "" : nCC(conq), dd, tt, nn); } } /* if (isLoad) */ return True; } /* if (gate) */ /* else fall through */ } /* ----------- VSEL
<cc>.F64 d_d_d, VSEL<cc>.F32 s_s_s ----------- */ /* 31 27 22 21 19 15 11 8 7 6 5 4 3 T1/A1: 1111 11100 D cc n d 101 1 N 0 M 0 m VSEL<cc>.F64 Dd, Dn, Dm T1/A1: 1111 11100 D cc n d 101 0 N 0 M 0 m VSEL<cc>
.F32 Sd, Sn, Sm ARM encoding is in NV space. In Thumb mode, we must not be in an IT block. */ if (INSN(31,23) == BITS9(1,1,1,1,1,1,1,0,0) && INSN(11,9) == BITS3(1,0,1) && INSN(6,6) == 0 && INSN(4,4) == 0) { UInt bit_D = INSN(22,22); UInt fld_cc = INSN(21,20); UInt fld_n = INSN(19,16); UInt fld_d = INSN(15,12); Bool isF64 = INSN(8,8) == 1; UInt bit_N = INSN(7,7); UInt bit_M = INSN(5,5); UInt fld_m = INSN(3,0); UInt dd = isF64 ? ((bit_D << 4) | fld_d) : ((fld_d << 1) | bit_D); UInt nn = isF64 ? ((bit_N << 4) | fld_n) : ((fld_n << 1) | bit_N); UInt mm = isF64 ? ((bit_M << 4) | fld_m) : ((fld_m << 1) | bit_M); UInt cc_1 = (fld_cc >> 1) & 1; UInt cc_0 = (fld_cc >> 0) & 1; UInt cond = (fld_cc << 2) | ((cc_1 ^ cc_0) << 1) | 0; if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } /* In ARM mode, this is statically unconditional. In Thumb mode, this must be dynamically unconditional, and we've SIGILLd if not. In either case we can create unconditional IR. */ IRTemp guard = newTemp(Ity_I32); assign(guard, mk_armg_calculate_condition(cond)); IRExpr* srcN = (isF64 ? llGetDReg : llGetFReg)(nn); IRExpr* srcM = (isF64 ? llGetDReg : llGetFReg)(mm); IRExpr* res = IRExpr_ITE(unop(Iop_32to1, mkexpr(guard)), srcN, srcM); (isF64 ? llPutDReg : llPutFReg)(dd, res); UChar rch = isF64 ? 'd' : 'f'; DIP("vsel%s.%s %c%u, %c%u, %c%u\n", nCC(cond), isF64 ? "f64" : "f32", rch, dd, rch, nn, rch, mm); return True; } /* -------- VRINT{A,N,P,M}.F64 d_d, VRINT{A,N,P,M}.F32 s_s -------- */ /* 31 22 21 17 15 11 8 7 5 4 3 T1/A1: 111111101 D 1110 rm Vd 101 1 01 M 0 Vm VRINT{A,N,P,M}.F64 Dd, Dm T1/A1: 111111101 D 1110 rm Vd 101 0 01 M 0 Vm VRINT{A,N,P,M}.F32 Sd, Sm ARM encoding is in NV space. In Thumb mode, we must not be in an IT block. */ if (INSN(31,23) == BITS9(1,1,1,1,1,1,1,0,1) && INSN(21,18) == BITS4(1,1,1,0) && INSN(11,9) == BITS3(1,0,1) && INSN(7,6) == BITS2(0,1) && INSN(4,4) == 0) { UInt bit_D = INSN(22,22); UInt fld_rm = INSN(17,16); UInt fld_d = INSN(15,12); Bool isF64 = INSN(8,8) == 1; UInt bit_M = INSN(5,5); UInt fld_m = INSN(3,0); UInt dd = isF64 ? ((bit_D << 4) | fld_d) : ((fld_d << 1) | bit_D); UInt mm = isF64 ? ((bit_M << 4) | fld_m) : ((fld_m << 1) | bit_M); if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } /* In ARM mode, this is statically unconditional. In Thumb mode, this must be dynamically unconditional, and we've SIGILLd if not. In either case we can create unconditional IR. */ UChar c = '?'; IRRoundingMode rm = Irrm_NEAREST; switch (fld_rm) { /* The use of NEAREST for both the 'a' and 'n' cases is a bit of a kludge since it doesn't take into account the nearest-even vs nearest-away semantics. */ case BITS2(0,0): c = 'a'; rm = Irrm_NEAREST; break; case BITS2(0,1): c = 'n'; rm = Irrm_NEAREST; break; case BITS2(1,0): c = 'p'; rm = Irrm_PosINF; break; case BITS2(1,1): c = 'm'; rm = Irrm_NegINF; break; default: vassert(0); } IRExpr* srcM = (isF64 ? llGetDReg : llGetFReg)(mm); IRExpr* res = binop(isF64 ? Iop_RoundF64toInt : Iop_RoundF32toInt, mkU32((UInt)rm), srcM); (isF64 ? llPutDReg : llPutFReg)(dd, res); UChar rch = isF64 ? 'd' : 'f'; DIP("vrint%c.%s.%s %c%u, %c%u\n", c, isF64 ? "f64" : "f32", isF64 ? "f64" : "f32", rch, dd, rch, mm); return True; } /* -------- VRINT{Z,R}.F64.F64 d_d, VRINT{Z,R}.F32.F32 s_s -------- */ /* 31 27 22 21 15 11 7 6 5 4 3 T1: 1110 11101 D 110110 Vd 1011 op 1 M 0 Vm VRINT
<r><c>.F64.F64 Dd, Dm A1: cond 11101 D 110110 Vd 1011 op 1 M 0 Vm T1: 1110 11101 D 110110 Vd 1010 op 1 M 0 Vm VRINT<r><c>
.F32.F32 Sd, Sm A1: cond 11101 D 110110 Vd 1010 op 1 M 0 Vm In contrast to the VRINT variants just above, this can be conditional. */ if ((isT ? (INSN(31,28) == BITS4(1,1,1,0)) : True) && INSN(27,23) == BITS5(1,1,1,0,1) && INSN(21,16) == BITS6(1,1,0,1,1,0) && INSN(11,9) == BITS3(1,0,1) && INSN(6,6) == 1 && INSN(4,4) == 0) { UInt bit_D = INSN(22,22); UInt fld_Vd = INSN(15,12); Bool isF64 = INSN(8,8) == 1; Bool rToZero = INSN(7,7) == 1; UInt bit_M = INSN(5,5); UInt fld_Vm = INSN(3,0); UInt dd = isF64 ? ((bit_D << 4) | fld_Vd) : ((fld_Vd << 1) | bit_D); UInt mm = isF64 ? ((bit_M << 4) | fld_Vm) : ((fld_Vm << 1) | bit_M); if (isT) vassert(condT != IRTemp_INVALID); IRType ty = isF64 ? Ity_F64 : Ity_F32; IRTemp src = newTemp(ty); IRTemp res = newTemp(ty); assign(src, (isF64 ? getDReg : getFReg)(mm)); IRTemp rm = newTemp(Ity_I32); assign(rm, rToZero ? mkU32(Irrm_ZERO) : mkexpr(mk_get_IR_rounding_mode())); assign(res, binop(isF64 ? Iop_RoundF64toInt : Iop_RoundF32toInt, mkexpr(rm), mkexpr(src))); (isF64 ? putDReg : putFReg)(dd, mkexpr(res), condT); UChar rch = isF64 ? 'd' : 'f'; DIP("vrint%c.%s.%s %c%u, %c%u\n", rToZero ? 'z' : 'r', isF64 ? "f64" : "f32", isF64 ? "f64" : "f32", rch, dd, rch, mm); return True; } /* ----------- VCVT{A,N,P,M}{.S32,.U32}{.F64,.F32} ----------- */ /* 31 27 22 21 17 15 11 8 7 6 5 4 3 T1/A1: 1111 11101 D 1111 rm Vd 101 sz op 1 M 0 Vm VCVT{A,N,P,M}{.S32,.U32}.F64 Sd, Dm VCVT{A,N,P,M}{.S32,.U32}.F32 Sd, Sm ARM encoding is in NV space. In Thumb mode, we must not be in an IT block. */ if (INSN(31,23) == BITS9(1,1,1,1,1,1,1,0,1) && INSN(21,18) == BITS4(1,1,1,1) && INSN(11,9) == BITS3(1,0,1) && INSN(6,6) == 1 && INSN(4,4) == 0) { UInt bit_D = INSN(22,22); UInt fld_rm = INSN(17,16); UInt fld_Vd = INSN(15,12); Bool isF64 = INSN(8,8) == 1; Bool isU = INSN(7,7) == 0; UInt bit_M = INSN(5,5); UInt fld_Vm = INSN(3,0); UInt dd = (fld_Vd << 1) | bit_D; UInt mm = isF64 ? ((bit_M << 4) | fld_Vm) : ((fld_Vm << 1) | bit_M); if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } /* In ARM mode, this is statically unconditional. In Thumb mode, this must be dynamically unconditional, and we've SIGILLd if not. In either case we can create unconditional IR. */ UChar c = '?'; IRRoundingMode rm = Irrm_NEAREST; switch (fld_rm) { /* The use of NEAREST for both the 'a' and 'n' cases is a bit of a kludge since it doesn't take into account the nearest-even vs nearest-away semantics. */ case BITS2(0,0): c = 'a'; rm = Irrm_NEAREST; break; case BITS2(0,1): c = 'n'; rm = Irrm_NEAREST; break; case BITS2(1,0): c = 'p'; rm = Irrm_PosINF; break; case BITS2(1,1): c = 'm'; rm = Irrm_NegINF; break; default: vassert(0); } IRExpr* srcM = (isF64 ? llGetDReg : llGetFReg)(mm); IRTemp res = newTemp(Ity_I32); /* The arm back end doesn't support use of Iop_F32toI32U or Iop_F32toI32S, so for those cases we widen the F32 to F64 and then follow the F64 route. */ if (!isF64) { srcM = unop(Iop_F32toF64, srcM); } assign(res, binop(isU ? Iop_F64toI32U : Iop_F64toI32S, mkU32((UInt)rm), srcM)); llPutFReg(dd, unop(Iop_ReinterpI32asF32, mkexpr(res))); UChar rch = isF64 ? 'd' : 'f'; DIP("vcvt%c.%s.%s %c%u, %c%u\n", c, isU ? "u32" : "s32", isF64 ? "f64" : "f32", 's', dd, rch, mm); return True; } /* ----------- V{MAX,MIN}NM{.F64 d_d_d, .F32 s_s_s} ----------- */ /* 31 27 22 21 19 15 11 8 7 6 5 4 3 1111 11101 D 00 Vn Vd 101 1 N op M 0 Vm V{MIN,MAX}NM.F64 Dd, Dn, Dm 1111 11101 D 00 Vn Vd 101 0 N op M 0 Vm V{MIN,MAX}NM.F32 Sd, Sn, Sm ARM encoding is in NV space. In Thumb mode, we must not be in an IT block. 
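Note that the D, N and M bits extend the 4-bit register fields at the top for F64 (Dd = D:Vd) but at the bottom for F32 (Sd = Vd:D); hence the size-dependent dd/nn/mm computations below.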
*/ if (INSN(31,23) == BITS9(1,1,1,1,1,1,1,0,1) && INSN(21,20) == BITS2(0,0) && INSN(11,9) == BITS3(1,0,1) && INSN(4,4) == 0) { UInt bit_D = INSN(22,22); UInt fld_Vn = INSN(19,16); UInt fld_Vd = INSN(15,12); Bool isF64 = INSN(8,8) == 1; UInt bit_N = INSN(7,7); Bool isMAX = INSN(6,6) == 0; UInt bit_M = INSN(5,5); UInt fld_Vm = INSN(3,0); UInt dd = isF64 ? ((bit_D << 4) | fld_Vd) : ((fld_Vd << 1) | bit_D); UInt nn = isF64 ? ((bit_N << 4) | fld_Vn) : ((fld_Vn << 1) | bit_N); UInt mm = isF64 ? ((bit_M << 4) | fld_Vm) : ((fld_Vm << 1) | bit_M); if (isT) { gen_SIGILL_T_if_in_ITBlock(old_itstate, new_itstate); } /* In ARM mode, this is statically unconditional. In Thumb mode, this must be dynamically unconditional, and we've SIGILLd if not. In either case we can create unconditional IR. */ IROp op = isF64 ? (isMAX ? Iop_MaxNumF64 : Iop_MinNumF64) : (isMAX ? Iop_MaxNumF32 : Iop_MinNumF32); IRExpr* srcN = (isF64 ? llGetDReg : llGetFReg)(nn); IRExpr* srcM = (isF64 ? llGetDReg : llGetFReg)(mm); IRExpr* res = binop(op, srcN, srcM); (isF64 ? llPutDReg : llPutFReg)(dd, res); UChar rch = isF64 ? 'd' : 'f'; DIP("v%snm.%s %c%u, %c%u, %c%u\n", isMAX ? "max" : "min", isF64 ? "f64" : "f32", rch, dd, rch, nn, rch, mm); return True; } /* ----------- VRINTX.F64.F64 d_d, VRINTX.F32.F32 s_s ----------- */ /* 31 27 22 21 15 11 8 7 5 4 3 T1: 1110 11101 D 110111 Vd 101 1 01 M 0 Vm VRINTX