Message ID | 878tg2w1fv.fsf@linaro.org |
---|---|
State | New |
Series | [14/nn] Add helpers for shift count modes |
On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> This patch adds a stub helper routine to provide the mode
> of a scalar shift amount, given the mode of the values
> being shifted.
>
> One long-standing problem has been to decide what this mode
> should be for arbitrary rtxes (as opposed to those directly
> tied to a target pattern). Is it the mode of the shifted
> elements? Is it word_mode? Or maybe QImode? Is it whatever
> the corresponding target pattern says? (In which case what
> should the mode be when the target doesn't have a pattern?)
>
> For now the patch picks word_mode, which should be safe on
> all targets but could perhaps become suboptimal if the helper
> routine is used more often than it is in this patch. As it
> stands the patch does not change the generated code.
>
> The patch also adds a helper function that constructs rtxes
> for constant shift amounts, again given the mode of the value
> being shifted. As well as helping with the SVE patches, this
> is one step towards allowing CONST_INTs to have a real mode.

I think gen_shift_amount_mode is flawed and while encapsulating
constant shift amount RTX generation into a gen_int_shift_amount
looks good to me I'd rather have that ??? in this function (and
I'd use the mode of the RTX shifted, not word_mode...).

In the end it's up to insn recognizing to convert the op to the
expected mode and for generic RTL it's us that should decide
on the mode -- on GENERIC the shift amount has to be an
integer so why not simply use a mode that is large enough to
make the constant fit?

Just throwing in some comments here, RTL isn't my primary
expertise.

Richard.

>
> 2017-10-23  Richard Sandiford  <richard.sandiford@linaro.org>
>             Alan Hayward  <alan.hayward@arm.com>
>             David Sherwood  <david.sherwood@arm.com>
>
> gcc/
> * target.h (get_shift_amount_mode): New function.
> * emit-rtl.h (gen_int_shift_amount): Declare.
> * emit-rtl.c (gen_int_shift_amount): New function.
> * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount
> instead of GEN_INT.
> * calls.c (shift_return_value): Likewise.
> * cse.c (fold_rtx): Likewise.
> * dse.c (find_shift_sequence): Likewise.
> * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1)
> (expand_shift, expand_smod_pow2): Likewise.
> * lower-subreg.c (shift_cost): Likewise.
> * simplify-rtx.c (simplify_unary_operation_1): Likewise.
> (simplify_binary_operation_1): Likewise.
> * combine.c (try_combine, find_split_point, force_int_to_mode)
> (simplify_shift_const_1, simplify_shift_const): Likewise.
> (change_zero_ext): Likewise. Use simplify_gen_binary.
> * optabs.c (expand_superword_shift, expand_doubleword_mult)
> (expand_unop): Use gen_int_shift_amount instead of GEN_INT.
> (expand_binop): Likewise. Use get_shift_amount_mode instead
> of word_mode as the mode of a CONST_INT shift amount.
> (shift_amt_for_vec_perm_mask): Add a machine_mode argument.
> Use gen_int_shift_amount instead of GEN_INT.
> (expand_vec_perm): Update caller accordingly. Use
> gen_int_shift_amount instead of GEN_INT.
>
> Index: gcc/target.h
> ===================================================================
> --- gcc/target.h 2017-10-23 11:47:06.643477568 +0100
> +++ gcc/target.h 2017-10-23 11:47:11.277288162 +0100
> @@ -209,6 +209,17 @@ #define HOOKSTRUCT(FRAGMENT) FRAGMENT
>
> extern struct gcc_target targetm;
>
> +/* Return the mode that should be used to hold a scalar shift amount
> + when shifting values of the given mode. */
> +/* ???
This could in principle be generated automatically from the .md > + shift patterns, but for now word_mode should be universally OK. */ > + > +inline scalar_int_mode > +get_shift_amount_mode (machine_mode) > +{ > + return word_mode; > +} > + > #ifdef GCC_TM_H > > #ifndef CUMULATIVE_ARGS_MAGIC > Index: gcc/emit-rtl.h > =================================================================== > --- gcc/emit-rtl.h 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/emit-rtl.h 2017-10-23 11:47:11.274393237 +0100 > @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, > extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); > extern void adjust_reg_mode (rtx, machine_mode); > extern int mem_expr_equal_p (const_tree, const_tree); > +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); > > extern bool need_atomic_barrier_p (enum memmodel, bool); > > Index: gcc/emit-rtl.c > =================================================================== > --- gcc/emit-rtl.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/emit-rtl.c 2017-10-23 11:47:11.273428262 +0100 > @@ -6478,6 +6478,15 @@ need_atomic_barrier_p (enum memmodel mod > } > } > > +/* Return a constant shift amount for shifting a value of mode MODE > + by VALUE bits. */ > + > +rtx > +gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value) > +{ > + return gen_int_mode (value, get_shift_amount_mode (mode)); > +} > + > /* Initialize fields of rtl_data related to stack alignment. */ > > void > Index: gcc/asan.c > =================================================================== > --- gcc/asan.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/asan.c 2017-10-23 11:47:11.270533336 +0100 > @@ -1388,7 +1388,7 @@ asan_emit_stack_protection (rtx base, rt > TREE_ASM_WRITTEN (id) = 1; > emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); > shadow_base = expand_binop (Pmode, lshr_optab, base, > - GEN_INT (ASAN_SHADOW_SHIFT), > + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), > NULL_RTX, 1, OPTAB_DIRECT); > shadow_base > = plus_constant (Pmode, shadow_base, > Index: gcc/calls.c > =================================================================== > --- gcc/calls.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/calls.c 2017-10-23 11:47:11.270533336 +0100 > @@ -2749,15 +2749,17 @@ shift_return_value (machine_mode mode, b > HOST_WIDE_INT shift; > > gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); > - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); > + machine_mode value_mode = GET_MODE (value); > + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); > if (shift == 0) > return false; > > /* Use ashr rather than lshr for right shifts. This is for the benefit > of the MIPS port, which requires SImode values to be sign-extended > when stored in 64-bit registers. */ > - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : ashr_optab, > - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) > + if (!force_expand_binop (value_mode, left_p ? 
ashl_optab : ashr_optab, > + value, gen_int_shift_amount (value_mode, shift), > + value, 1, OPTAB_WIDEN)) > gcc_unreachable (); > return true; > } > Index: gcc/cse.c > =================================================================== > --- gcc/cse.c 2017-10-23 11:47:03.707058235 +0100 > +++ gcc/cse.c 2017-10-23 11:47:11.273428262 +0100 > @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) > || INTVAL (const_arg1) < 0)) > { > if (SHIFT_COUNT_TRUNCATED) > - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) > - & (GET_MODE_UNIT_BITSIZE (mode) > - - 1)); > + canon_const_arg1 = gen_int_shift_amount > + (mode, (INTVAL (const_arg1) > + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); > else > break; > } > @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) > || INTVAL (inner_const) < 0)) > { > if (SHIFT_COUNT_TRUNCATED) > - inner_const = GEN_INT (INTVAL (inner_const) > - & (GET_MODE_UNIT_BITSIZE (mode) > - - 1)); > + inner_const = gen_int_shift_amount > + (mode, (INTVAL (inner_const) > + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); > else > break; > } > @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) > /* As an exception, we can turn an ASHIFTRT of this > form into a shift of the number of bits - 1. */ > if (code == ASHIFTRT) > - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); > + new_const = gen_int_shift_amount > + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); > else if (!side_effects_p (XEXP (y, 0))) > return CONST0_RTX (mode); > else > Index: gcc/dse.c > =================================================================== > --- gcc/dse.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/dse.c 2017-10-23 11:47:11.273428262 +0100 > @@ -1605,8 +1605,9 @@ find_shift_sequence (int access_size, > store_mode, byte); > if (ret && CONSTANT_P (ret)) > { > + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); > ret = simplify_const_binary_operation (LSHIFTRT, new_mode, > - ret, GEN_INT (shift)); > + ret, shift_rtx); > if (ret && CONSTANT_P (ret)) > { > byte = subreg_lowpart_offset (read_mode, new_mode); > @@ -1642,7 +1643,8 @@ find_shift_sequence (int access_size, > of one dsp where the cost of these two was not the same. But > this really is a rare case anyway. */ > target = expand_binop (new_mode, lshr_optab, new_reg, > - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); > + gen_int_shift_amount (new_mode, shift), > + new_reg, 1, OPTAB_DIRECT); > > shift_seq = get_insns (); > end_sequence (); > Index: gcc/expmed.c > =================================================================== > --- gcc/expmed.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/expmed.c 2017-10-23 11:47:11.274393237 +0100 > @@ -222,7 +222,8 @@ init_expmed_one_mode (struct init_expmed > PUT_MODE (all->zext, wider_mode); > PUT_MODE (all->wide_mult, wider_mode); > PUT_MODE (all->wide_lshr, wider_mode); > - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); > + XEXP (all->wide_lshr, 1) > + = gen_int_shift_amount (wider_mode, mode_bitsize); > > set_mul_widen_cost (speed, wider_mode, > set_src_cost (all->wide_mult, wider_mode, speed)); > @@ -908,12 +909,14 @@ store_bit_field_1 (rtx str_rtx, unsigned > to make sure that for big-endian machines the higher order > bits are used. 
*/ > if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) > - value_word = simplify_expand_binop (word_mode, lshr_optab, > - value_word, > - GEN_INT (BITS_PER_WORD > - - new_bitsize), > - NULL_RTX, true, > - OPTAB_LIB_WIDEN); > + { > + int shift = BITS_PER_WORD - new_bitsize; > + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); > + value_word = simplify_expand_binop (word_mode, lshr_optab, > + value_word, shift_rtx, > + NULL_RTX, true, > + OPTAB_LIB_WIDEN); > + } > > if (!store_bit_field_1 (op0, new_bitsize, > bitnum + bit_offset, > @@ -2366,8 +2369,9 @@ expand_shift_1 (enum tree_code code, mac > if (CONST_INT_P (op1) > && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= > (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) > - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) > - % GET_MODE_BITSIZE (scalar_mode)); > + op1 = gen_int_shift_amount (mode, > + (unsigned HOST_WIDE_INT) INTVAL (op1) > + % GET_MODE_BITSIZE (scalar_mode)); > else if (GET_CODE (op1) == SUBREG > && subreg_lowpart_p (op1) > && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) > @@ -2384,7 +2388,8 @@ expand_shift_1 (enum tree_code code, mac > && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, > GET_MODE_BITSIZE (scalar_mode) - 1)) > { > - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); > + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) > + - INTVAL (op1))); > left = !left; > code = left ? LROTATE_EXPR : RROTATE_EXPR; > } > @@ -2464,8 +2469,8 @@ expand_shift_1 (enum tree_code code, mac > if (op1 == const0_rtx) > return shifted; > else if (CONST_INT_P (op1)) > - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) > - - INTVAL (op1)); > + other_amount = gen_int_shift_amount > + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); > else > { > other_amount > @@ -2538,8 +2543,9 @@ expand_shift_1 (enum tree_code code, mac > expand_shift (enum tree_code code, machine_mode mode, rtx shifted, > int amount, rtx target, int unsignedp) > { > - return expand_shift_1 (code, mode, > - shifted, GEN_INT (amount), target, unsignedp); > + return expand_shift_1 (code, mode, shifted, > + gen_int_shift_amount (mode, amount), > + target, unsignedp); > } > > /* Likewise, but return 0 if that cannot be done. */ > @@ -3855,7 +3861,7 @@ expand_smod_pow2 (scalar_int_mode mode, > { > HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; > signmask = force_reg (mode, signmask); > - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); > + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); > > /* Use the rtx_cost of a LSHIFTRT instruction to determine > which instruction sequence to use. 
If logical right shifts > Index: gcc/lower-subreg.c > =================================================================== > --- gcc/lower-subreg.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/lower-subreg.c 2017-10-23 11:47:11.274393237 +0100 > @@ -129,7 +129,7 @@ shift_cost (bool speed_p, struct cost_rt > PUT_CODE (rtxes->shift, code); > PUT_MODE (rtxes->shift, mode); > PUT_MODE (rtxes->source, mode); > - XEXP (rtxes->shift, 1) = GEN_INT (op1); > + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); > return set_src_cost (rtxes->shift, mode, speed_p); > } > > Index: gcc/simplify-rtx.c > =================================================================== > --- gcc/simplify-rtx.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/simplify-rtx.c 2017-10-23 11:47:11.277288162 +0100 > @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod > if (STORE_FLAG_VALUE == 1) > { > temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), > - GEN_INT (isize - 1)); > + gen_int_shift_amount (inner, > + isize - 1)); > if (int_mode == inner) > return temp; > if (GET_MODE_PRECISION (int_mode) > isize) > @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod > else if (STORE_FLAG_VALUE == -1) > { > temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), > - GEN_INT (isize - 1)); > + gen_int_shift_amount (inner, > + isize - 1)); > if (int_mode == inner) > return temp; > if (GET_MODE_PRECISION (int_mode) > isize) > @@ -2679,7 +2681,8 @@ simplify_binary_operation_1 (enum rtx_co > { > val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); > if (val >= 0) > - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); > + return simplify_gen_binary (ASHIFT, mode, op0, > + gen_int_shift_amount (mode, val)); > } > > /* x*2 is x+x and x*(-1) is -x */ > @@ -3303,7 +3306,8 @@ simplify_binary_operation_1 (enum rtx_co > /* Convert divide by power of two into shift. */ > if (CONST_INT_P (trueop1) > && (val = exact_log2 (UINTVAL (trueop1))) > 0) > - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); > + return simplify_gen_binary (LSHIFTRT, mode, op0, > + gen_int_shift_amount (mode, val)); > break; > > case DIV: > @@ -3423,10 +3427,12 @@ simplify_binary_operation_1 (enum rtx_co > && IN_RANGE (INTVAL (trueop1), > GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), > GET_MODE_UNIT_PRECISION (mode) - 1)) > - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, > - mode, op0, > - GEN_INT (GET_MODE_UNIT_PRECISION (mode) > - - INTVAL (trueop1))); > + { > + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); > + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); > + return simplify_gen_binary (code == ROTATE ? 
ROTATERT : ROTATE, > + mode, op0, new_amount_rtx); > + } > #endif > /* FALLTHRU */ > case ASHIFTRT: > @@ -3466,8 +3472,8 @@ simplify_binary_operation_1 (enum rtx_co > == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode)) > && subreg_lowpart_p (op0)) > { > - rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1)) > - + INTVAL (op1)); > + rtx tmp = gen_int_shift_amount > + (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1)); > tmp = simplify_gen_binary (code, inner_mode, > XEXP (SUBREG_REG (op0), 0), > tmp); > @@ -3478,7 +3484,8 @@ simplify_binary_operation_1 (enum rtx_co > { > val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1); > if (val != INTVAL (op1)) > - return simplify_gen_binary (code, mode, op0, GEN_INT (val)); > + return simplify_gen_binary (code, mode, op0, > + gen_int_shift_amount (mode, val)); > } > break; > > Index: gcc/combine.c > =================================================================== > --- gcc/combine.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/combine.c 2017-10-23 11:47:11.272463287 +0100 > @@ -3773,8 +3773,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, > && INTVAL (XEXP (*split, 1)) > 0 > && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0) > { > + rtx i_rtx = gen_int_shift_amount (split_mode, i); > SUBST (*split, gen_rtx_ASHIFT (split_mode, > - XEXP (*split, 0), GEN_INT (i))); > + XEXP (*split, 0), i_rtx)); > /* Update split_code because we may not have a multiply > anymore. */ > split_code = GET_CODE (*split); > @@ -3788,8 +3789,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, > && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0) > { > rtx nsplit = XEXP (*split, 0); > + rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i); > SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit), > - XEXP (nsplit, 0), GEN_INT (i))); > + XEXP (nsplit, 0), > + i_rtx)); > /* Update split_code because we may not have a multiply > anymore. */ > split_code = GET_CODE (*split); > @@ -5057,12 +5060,12 @@ find_split_point (rtx *loc, rtx_insn *in > GET_MODE (XEXP (SET_SRC (x), 0)))))) > { > machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0)); > - > + rtx pos_rtx = gen_int_shift_amount (mode, pos); > SUBST (SET_SRC (x), > gen_rtx_NEG (mode, > gen_rtx_LSHIFTRT (mode, > XEXP (SET_SRC (x), 0), > - GEN_INT (pos)))); > + pos_rtx))); > > split = find_split_point (&SET_SRC (x), insn, true); > if (split && split != &SET_SRC (x)) > @@ -5120,11 +5123,11 @@ find_split_point (rtx *loc, rtx_insn *in > { > unsigned HOST_WIDE_INT mask > = (HOST_WIDE_INT_1U << len) - 1; > + rtx pos_rtx = gen_int_shift_amount (mode, pos); > SUBST (SET_SRC (x), > gen_rtx_AND (mode, > gen_rtx_LSHIFTRT > - (mode, gen_lowpart (mode, inner), > - GEN_INT (pos)), > + (mode, gen_lowpart (mode, inner), pos_rtx), > gen_int_mode (mask, mode))); > > split = find_split_point (&SET_SRC (x), insn, true); > @@ -5133,14 +5136,15 @@ find_split_point (rtx *loc, rtx_insn *in > } > else > { > + int left_bits = GET_MODE_PRECISION (mode) - len - pos; > + int right_bits = GET_MODE_PRECISION (mode) - len; > SUBST (SET_SRC (x), > gen_rtx_fmt_ee > (unsignedp ? 
LSHIFTRT : ASHIFTRT, mode, > gen_rtx_ASHIFT (mode, > gen_lowpart (mode, inner), > - GEN_INT (GET_MODE_PRECISION (mode) > - - len - pos)), > - GEN_INT (GET_MODE_PRECISION (mode) - len))); > + gen_int_shift_amount (mode, left_bits)), > + gen_int_shift_amount (mode, right_bits))); > > split = find_split_point (&SET_SRC (x), insn, true); > if (split && split != &SET_SRC (x)) > @@ -8915,10 +8919,11 @@ force_int_to_mode (rtx x, scalar_int_mod > /* Must be more sign bit copies than the mask needs. */ > && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))) > >= exact_log2 (mask + 1))) > - x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), > - GEN_INT (GET_MODE_PRECISION (xmode) > - - exact_log2 (mask + 1))); > - > + { > + int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1); > + x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), > + gen_int_shift_amount (xmode, nbits)); > + } > goto shiftrt; > > case ASHIFTRT: > @@ -10415,7 +10420,7 @@ simplify_shift_const_1 (enum rtx_code co > { > enum rtx_code orig_code = code; > rtx orig_varop = varop; > - int count; > + int count, log2; > machine_mode mode = result_mode; > machine_mode shift_mode; > scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode; > @@ -10618,13 +10623,11 @@ simplify_shift_const_1 (enum rtx_code co > is cheaper. But it is still better on those machines to > merge two shifts into one. */ > if (CONST_INT_P (XEXP (varop, 1)) > - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) > + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) > { > - varop > - = simplify_gen_binary (ASHIFT, GET_MODE (varop), > - XEXP (varop, 0), > - GEN_INT (exact_log2 ( > - UINTVAL (XEXP (varop, 1))))); > + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); > + varop = simplify_gen_binary (ASHIFT, GET_MODE (varop), > + XEXP (varop, 0), log2_rtx); > continue; > } > break; > @@ -10632,13 +10635,11 @@ simplify_shift_const_1 (enum rtx_code co > case UDIV: > /* Similar, for when divides are cheaper. */ > if (CONST_INT_P (XEXP (varop, 1)) > - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) > + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) > { > - varop > - = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), > - XEXP (varop, 0), > - GEN_INT (exact_log2 ( > - UINTVAL (XEXP (varop, 1))))); > + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); > + varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), > + XEXP (varop, 0), log2_rtx); > continue; > } > break; > @@ -10773,10 +10774,10 @@ simplify_shift_const_1 (enum rtx_code co > > mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode), > int_result_mode); > - > + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); > mask_rtx > = simplify_const_binary_operation (code, int_result_mode, > - mask_rtx, GEN_INT (count)); > + mask_rtx, count_rtx); > > /* Give up if we can't compute an outer operation to use. 
*/ > if (mask_rtx == 0 > @@ -10832,9 +10833,10 @@ simplify_shift_const_1 (enum rtx_code co > if (code == ASHIFTRT && int_mode != int_result_mode) > break; > > + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); > rtx new_rtx = simplify_const_binary_operation (code, int_mode, > XEXP (varop, 0), > - GEN_INT (count)); > + count_rtx); > varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1)); > count = 0; > continue; > @@ -10900,7 +10902,7 @@ simplify_shift_const_1 (enum rtx_code co > && (new_rtx = simplify_const_binary_operation > (code, int_result_mode, > gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), > - GEN_INT (count))) != 0 > + gen_int_shift_amount (int_result_mode, count))) != 0 > && CONST_INT_P (new_rtx) > && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop), > INTVAL (new_rtx), int_result_mode, > @@ -11043,7 +11045,7 @@ simplify_shift_const_1 (enum rtx_code co > && (new_rtx = simplify_const_binary_operation > (ASHIFT, int_result_mode, > gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), > - GEN_INT (count))) != 0 > + gen_int_shift_amount (int_result_mode, count))) != 0 > && CONST_INT_P (new_rtx) > && merge_outer_ops (&outer_op, &outer_const, PLUS, > INTVAL (new_rtx), int_result_mode, > @@ -11064,7 +11066,7 @@ simplify_shift_const_1 (enum rtx_code co > && (new_rtx = simplify_const_binary_operation > (code, int_result_mode, > gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), > - GEN_INT (count))) != 0 > + gen_int_shift_amount (int_result_mode, count))) != 0 > && CONST_INT_P (new_rtx) > && merge_outer_ops (&outer_op, &outer_const, XOR, > INTVAL (new_rtx), int_result_mode, > @@ -11119,12 +11121,12 @@ simplify_shift_const_1 (enum rtx_code co > - GET_MODE_UNIT_PRECISION (GET_MODE (varop))))) > { > rtx varop_inner = XEXP (varop, 0); > - > - varop_inner > - = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), > - XEXP (varop_inner, 0), > - GEN_INT > - (count + INTVAL (XEXP (varop_inner, 1)))); > + int new_count = count + INTVAL (XEXP (varop_inner, 1)); > + rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner), > + new_count); > + varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), > + XEXP (varop_inner, 0), > + new_count_rtx); > varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner); > count = 0; > continue; > @@ -11176,7 +11178,8 @@ simplify_shift_const_1 (enum rtx_code co > x = NULL_RTX; > > if (x == NULL_RTX) > - x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count)); > + x = simplify_gen_binary (code, shift_mode, varop, > + gen_int_shift_amount (shift_mode, count)); > > /* If we were doing an LSHIFTRT in a wider mode than it was originally, > turn off all the bits that the shift would have turned off. 
*/ > @@ -11238,7 +11241,8 @@ simplify_shift_const (rtx x, enum rtx_co > return tem; > > if (!x) > - x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count)); > + x = simplify_gen_binary (code, GET_MODE (varop), varop, > + gen_int_shift_amount (GET_MODE (varop), count)); > if (GET_MODE (x) != result_mode) > x = gen_lowpart (result_mode, x); > return x; > @@ -11429,8 +11433,9 @@ change_zero_ext (rtx pat) > if (BITS_BIG_ENDIAN) > start = GET_MODE_PRECISION (inner_mode) - size - start; > > - if (start) > - x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start)); > + if (start != 0) > + x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), > + gen_int_shift_amount (inner_mode, start)); > else > x = XEXP (x, 0); > if (mode != inner_mode) > Index: gcc/optabs.c > =================================================================== > --- gcc/optabs.c 2017-10-23 11:47:06.643477568 +0100 > +++ gcc/optabs.c 2017-10-23 11:47:11.276323187 +0100 > @@ -431,8 +431,9 @@ expand_superword_shift (optab binoptab, > if (binoptab != ashr_optab) > emit_move_insn (outof_target, CONST0_RTX (word_mode)); > else > - if (!force_expand_binop (word_mode, binoptab, > - outof_input, GEN_INT (BITS_PER_WORD - 1), > + if (!force_expand_binop (word_mode, binoptab, outof_input, > + gen_int_shift_amount (word_mode, > + BITS_PER_WORD - 1), > outof_target, unsignedp, methods)) > return false; > } > @@ -789,7 +790,8 @@ expand_doubleword_mult (machine_mode mod > { > int low = (WORDS_BIG_ENDIAN ? 1 : 0); > int high = (WORDS_BIG_ENDIAN ? 0 : 1); > - rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1); > + rtx wordm1 = (umulp ? NULL_RTX > + : gen_int_shift_amount (word_mode, BITS_PER_WORD - 1)); > rtx product, adjust, product_high, temp; > > rtx op0_high = operand_subword_force (op0, high, mode); > @@ -1185,7 +1187,7 @@ expand_binop (machine_mode mode, optab b > unsigned int bits = GET_MODE_PRECISION (int_mode); > > if (CONST_INT_P (op1)) > - newop1 = GEN_INT (bits - INTVAL (op1)); > + newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1)); > else if (targetm.shift_truncation_mask (int_mode) == bits - 1) > newop1 = negate_rtx (GET_MODE (op1), op1); > else > @@ -1399,11 +1401,11 @@ expand_binop (machine_mode mode, optab b > shift_mask = targetm.shift_truncation_mask (word_mode); > op1_mode = (GET_MODE (op1) != VOIDmode > ? as_a <scalar_int_mode> (GET_MODE (op1)) > - : word_mode); > + : get_shift_amount_mode (word_mode)); > > /* Apply the truncation to constant shifts. 
*/ > if (double_shift_mask > 0 && CONST_INT_P (op1)) > - op1 = GEN_INT (INTVAL (op1) & double_shift_mask); > + op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode); > > if (op1 == CONST0_RTX (op1_mode)) > return op0; > @@ -1513,7 +1515,7 @@ expand_binop (machine_mode mode, optab b > else > { > rtx into_temp1, into_temp2, outof_temp1, outof_temp2; > - rtx first_shift_count, second_shift_count; > + HOST_WIDE_INT first_shift_count, second_shift_count; > optab reverse_unsigned_shift, unsigned_shift; > > reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD) > @@ -1524,20 +1526,24 @@ expand_binop (machine_mode mode, optab b > > if (shift_count > BITS_PER_WORD) > { > - first_shift_count = GEN_INT (shift_count - BITS_PER_WORD); > - second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count); > + first_shift_count = shift_count - BITS_PER_WORD; > + second_shift_count = 2 * BITS_PER_WORD - shift_count; > } > else > { > - first_shift_count = GEN_INT (BITS_PER_WORD - shift_count); > - second_shift_count = GEN_INT (shift_count); > + first_shift_count = BITS_PER_WORD - shift_count; > + second_shift_count = shift_count; > } > + rtx first_shift_count_rtx > + = gen_int_shift_amount (word_mode, first_shift_count); > + rtx second_shift_count_rtx > + = gen_int_shift_amount (word_mode, second_shift_count); > > into_temp1 = expand_binop (word_mode, unsigned_shift, > - outof_input, first_shift_count, > + outof_input, first_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > into_temp2 = expand_binop (word_mode, reverse_unsigned_shift, > - into_input, second_shift_count, > + into_input, second_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > > if (into_temp1 != 0 && into_temp2 != 0) > @@ -1550,10 +1556,10 @@ expand_binop (machine_mode mode, optab b > emit_move_insn (into_target, inter); > > outof_temp1 = expand_binop (word_mode, unsigned_shift, > - into_input, first_shift_count, > + into_input, first_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift, > - outof_input, second_shift_count, > + outof_input, second_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > > if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0) > @@ -2793,25 +2799,29 @@ expand_unop (machine_mode mode, optab un > > if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing) > { > - temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target, > - unsignedp, OPTAB_DIRECT); > + temp = expand_binop (mode, rotl_optab, op0, > + gen_int_shift_amount (mode, 8), > + target, unsignedp, OPTAB_DIRECT); > if (temp) > return temp; > } > > if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing) > { > - temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target, > - unsignedp, OPTAB_DIRECT); > + temp = expand_binop (mode, rotr_optab, op0, > + gen_int_shift_amount (mode, 8), > + target, unsignedp, OPTAB_DIRECT); > if (temp) > return temp; > } > > last = get_last_insn (); > > - temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX, > + temp1 = expand_binop (mode, ashl_optab, op0, > + gen_int_shift_amount (mode, 8), NULL_RTX, > unsignedp, OPTAB_WIDEN); > - temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX, > + temp2 = expand_binop (mode, lshr_optab, op0, > + gen_int_shift_amount (mode, 8), NULL_RTX, > unsignedp, OPTAB_WIDEN); > if (temp1 && temp2) > { > @@ -5369,11 +5379,11 @@ vector_compare_rtx (machine_mode cmp_mod > } > > /* Checks if vec_perm mask SEL is a constant equivalent to a shift of the first > - 
vec_perm operand, assuming the second operand is a constant vector of zeroes. > - Return the shift distance in bits if so, or NULL_RTX if the vec_perm is not a > - shift. */ > + vec_perm operand (which has mode OP0_MODE), assuming the second > + operand is a constant vector of zeroes. Return the shift distance in > + bits if so, or NULL_RTX if the vec_perm is not a shift. */ > static rtx > -shift_amt_for_vec_perm_mask (rtx sel) > +shift_amt_for_vec_perm_mask (machine_mode op0_mode, rtx sel) > { > unsigned int i, first, nelt = GET_MODE_NUNITS (GET_MODE (sel)); > unsigned int bitsize = GET_MODE_UNIT_BITSIZE (GET_MODE (sel)); > @@ -5393,7 +5403,7 @@ shift_amt_for_vec_perm_mask (rtx sel) > return NULL_RTX; > } > > - return GEN_INT (first * bitsize); > + return gen_int_shift_amount (op0_mode, first * bitsize); > } > > /* A subroutine of expand_vec_perm for expanding one vec_perm insn. */ > @@ -5473,7 +5483,7 @@ expand_vec_perm (machine_mode mode, rtx > && (shift_code != CODE_FOR_nothing > || shift_code_qi != CODE_FOR_nothing)) > { > - shift_amt = shift_amt_for_vec_perm_mask (sel); > + shift_amt = shift_amt_for_vec_perm_mask (mode, sel); > if (shift_amt) > { > struct expand_operand ops[3]; > @@ -5563,7 +5573,8 @@ expand_vec_perm (machine_mode mode, rtx > NULL, 0, OPTAB_DIRECT); > else > sel = expand_simple_binop (selmode, ASHIFT, sel, > - GEN_INT (exact_log2 (u)), > + gen_int_shift_amount (selmode, > + exact_log2 (u)), > NULL, 0, OPTAB_DIRECT); > gcc_assert (sel != NULL); >
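
To make the point under discussion concrete, here is a minimal sketch (not part of the posted patch) of the alternative Richard Biener describes above: taking the shift-amount mode from the mode of the value being shifted rather than always using word_mode. The word_mode fallback for non-scalar-integer (e.g. vector) modes and the use of the GCC-internal is_a <scalar_int_mode> predicate are assumptions made purely for this illustration.

```c
/* Sketch only: a variant of the patch's get_shift_amount_mode that
   prefers the mode of the shifted value, as the review suggests.
   The word_mode fallback for non-scalar-integer (e.g. vector) modes
   is this sketch's own assumption, not part of the posted patch.  */

inline scalar_int_mode
get_shift_amount_mode (machine_mode mode)
{
  scalar_int_mode imode;
  /* Scalar integer shift: use the mode of the value being shifted.  */
  if (is_a <scalar_int_mode> (mode, &imode))
    return imode;
  /* Vector or other modes: keep the patch's conservative choice.  */
  return word_mode;
}

/* gen_int_shift_amount itself is unchanged from the patch; it simply
   defers the mode decision to the hook above.  */

rtx
gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value)
{
  return gen_int_mode (value, get_shift_amount_mode (mode));
}
```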
On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener <richard.guenther@gmail.com> wrote:
> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford
> <richard.sandiford@linaro.org> wrote:
>> This patch adds a stub helper routine to provide the mode
>> of a scalar shift amount, given the mode of the values
>> being shifted.
>>
>> One long-standing problem has been to decide what this mode
>> should be for arbitrary rtxes (as opposed to those directly
>> tied to a target pattern). Is it the mode of the shifted
>> elements? Is it word_mode? Or maybe QImode? Is it whatever
>> the corresponding target pattern says? (In which case what
>> should the mode be when the target doesn't have a pattern?)
>>
>> For now the patch picks word_mode, which should be safe on
>> all targets but could perhaps become suboptimal if the helper
>> routine is used more often than it is in this patch. As it
>> stands the patch does not change the generated code.
>>
>> The patch also adds a helper function that constructs rtxes
>> for constant shift amounts, again given the mode of the value
>> being shifted. As well as helping with the SVE patches, this
>> is one step towards allowing CONST_INTs to have a real mode.
>
> I think gen_shift_amount_mode is flawed and while encapsulating
> constant shift amount RTX generation into a gen_int_shift_amount
> looks good to me I'd rather have that ??? in this function (and
> I'd use the mode of the RTX shifted, not word_mode...).
>
> In the end it's up to insn recognizing to convert the op to the
> expected mode and for generic RTL it's us that should decide
> on the mode -- on GENERIC the shift amount has to be an
> integer so why not simply use a mode that is large enough to
> make the constant fit?
>
> Just throwing in some comments here, RTL isn't my primary
> expertise.

To add a little bit - shift amounts are maybe the only(?) place where
a modeless CONST_INT makes sense! So "fixing" that first sounds
backwards.

Richard.

> Richard.
On 10/26/2017 06:06 AM, Richard Biener wrote: > On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford > <richard.sandiford@linaro.org> wrote: >> This patch adds a stub helper routine to provide the mode >> of a scalar shift amount, given the mode of the values >> being shifted. >> >> One long-standing problem has been to decide what this mode >> should be for arbitrary rtxes (as opposed to those directly >> tied to a target pattern). Is it the mode of the shifted >> elements? Is it word_mode? Or maybe QImode? Is it whatever >> the corresponding target pattern says? (In which case what >> should the mode be when the target doesn't have a pattern?) >> >> For now the patch picks word_mode, which should be safe on >> all targets but could perhaps become suboptimal if the helper >> routine is used more often than it is in this patch. As it >> stands the patch does not change the generated code. >> >> The patch also adds a helper function that constructs rtxes >> for constant shift amounts, again given the mode of the value >> being shifted. As well as helping with the SVE patches, this >> is one step towards allowing CONST_INTs to have a real mode. > > I think gen_shift_amount_mode is flawed and while encapsulating > constant shift amount RTX generation into a gen_int_shift_amount > looks good to me I'd rather have that ??? in this function (and > I'd use the mode of the RTX shifted, not word_mode...). > > In the end it's up to insn recognizing to convert the op to the > expected mode and for generic RTL it's us that should decide > on the mode -- on GENERIC the shift amount has to be an > integer so why not simply use a mode that is large enough to > make the constant fit? > > Just throwing in some comments here, RTL isn't my primary > expertise. I wonder if encapsulation + a target hook to specify the mode would be better? We'd then have to argue over word_mode, vs QImode vs something else for the default, but at least we'd have a way for the target to specify the mode is generally best when working on shift counts. In the end I doubt there's a single definition that is overall better. Largely because I suspect there are times when the narrowest mode is best, or the mode of the operand being shifted. So thoughts on doing the encapsulation with a target hook to specify the desired mode? Does that get us what we need for SVE and does it provide us a path forward on this issue if we were to try to move towards CONST_INTs with modes? jeff
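Jeff's suggestion -- encapsulation plus an overridable target hook with some agreed default -- can be sketched in miniature without touching GCC at all. Everything in the following sketch is a hypothetical stand-in rather than real GCC or patch API: the mode enum, the targetm_sketch struct and both hook bodies are illustrative only, with the word-sized default standing in for word_mode.

/* Hypothetical sketch only -- not GCC code.  It models the idea of
   routing every shift-amount-mode decision through one overridable
   hook, so the default (word-sized here, standing in for word_mode)
   can change per target without touching any call site.  */
#include <stdio.h>

enum mode { QI_MODE = 8, HI_MODE = 16, SI_MODE = 32, DI_MODE = 64 };

struct target_hooks_sketch
{
  /* NULL means "use the generic default".  */
  enum mode (*shift_amount_mode) (enum mode shifted_mode);
};

/* Generic default: always use the word-sized mode.  */
static enum mode
default_shift_amount_mode (enum mode shifted_mode)
{
  (void) shifted_mode;
  return DI_MODE;
}

/* One override a target might install instead: use the mode of the
   value being shifted.  */
static enum mode
narrow_shift_amount_mode (enum mode shifted_mode)
{
  return shifted_mode;
}

static struct target_hooks_sketch targetm_sketch = { narrow_shift_amount_mode };

/* The single encapsulation point the rest of the compiler would call.  */
static enum mode
shift_amount_mode (enum mode shifted_mode)
{
  if (targetm_sketch.shift_amount_mode)
    return targetm_sketch.shift_amount_mode (shifted_mode);
  return default_shift_amount_mode (shifted_mode);
}

int
main (void)
{
  printf ("shift amount mode for a %d-bit shift: %d bits\n",
          SI_MODE, shift_amount_mode (SI_MODE));
  return 0;
}

With that shape in place, the only question left to argue about is what the generic default should return -- word_mode, QImode, or the mode of the value being shifted -- which is the sticking point the thread keeps circling.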
Richard Biener <richard.guenther@gmail.com> writes: > On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener > <richard.guenther@gmail.com> wrote: >> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >> <richard.sandiford@linaro.org> wrote: >>> This patch adds a stub helper routine to provide the mode >>> of a scalar shift amount, given the mode of the values >>> being shifted. >>> >>> One long-standing problem has been to decide what this mode >>> should be for arbitrary rtxes (as opposed to those directly >>> tied to a target pattern). Is it the mode of the shifted >>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>> the corresponding target pattern says? (In which case what >>> should the mode be when the target doesn't have a pattern?) >>> >>> For now the patch picks word_mode, which should be safe on >>> all targets but could perhaps become suboptimal if the helper >>> routine is used more often than it is in this patch. As it >>> stands the patch does not change the generated code. >>> >>> The patch also adds a helper function that constructs rtxes >>> for constant shift amounts, again given the mode of the value >>> being shifted. As well as helping with the SVE patches, this >>> is one step towards allowing CONST_INTs to have a real mode. >> >> I think gen_shift_amount_mode is flawed and while encapsulating >> constant shift amount RTX generation into a gen_int_shift_amount >> looks good to me I'd rather have that ??? in this function (and >> I'd use the mode of the RTX shifted, not word_mode...). OK. I'd gone for word_mode because that's what expand_binop uses for CONST_INTs: op1_mode = (GET_MODE (op1) != VOIDmode ? as_a <scalar_int_mode> (GET_MODE (op1)) : word_mode); But using the inner mode should be fine too. The patch below does that. >> In the end it's up to insn recognizing to convert the op to the >> expected mode and for generic RTL it's us that should decide >> on the mode -- on GENERIC the shift amount has to be an >> integer so why not simply use a mode that is large enough to >> make the constant fit? ...but I can do that instead if you think it's better. >> Just throwing in some comments here, RTL isn't my primary >> expertise. > > To add a little bit - shift amounts is maybe the only(?) place > where a modeless CONST_INT makes sense! So "fixing" > that first sounds backwards. But even here they have a mode conceptually, since out-of-range shift amounts are target-defined rather than undefined. E.g. if the target interprets the shift amount as unsigned, then for a shift amount (const_int -1) it matters whether the mode is QImode (and so we're shifting by 255) or HImode (and so we're shifting by 65535. OK, so shifts by 65535 make no sense in practice, but *conceptually*... :-) Jeff Law <law@redhat.com> writes: > On 10/26/2017 06:06 AM, Richard Biener wrote: >> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >> <richard.sandiford@linaro.org> wrote: >>> This patch adds a stub helper routine to provide the mode >>> of a scalar shift amount, given the mode of the values >>> being shifted. >>> >>> One long-standing problem has been to decide what this mode >>> should be for arbitrary rtxes (as opposed to those directly >>> tied to a target pattern). Is it the mode of the shifted >>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>> the corresponding target pattern says? (In which case what >>> should the mode be when the target doesn't have a pattern?) 
>>> >>> For now the patch picks word_mode, which should be safe on >>> all targets but could perhaps become suboptimal if the helper >>> routine is used more often than it is in this patch. As it >>> stands the patch does not change the generated code. >>> >>> The patch also adds a helper function that constructs rtxes >>> for constant shift amounts, again given the mode of the value >>> being shifted. As well as helping with the SVE patches, this >>> is one step towards allowing CONST_INTs to have a real mode. >> >> I think gen_shift_amount_mode is flawed and while encapsulating >> constant shift amount RTX generation into a gen_int_shift_amount >> looks good to me I'd rather have that ??? in this function (and >> I'd use the mode of the RTX shifted, not word_mode...). >> >> In the end it's up to insn recognizing to convert the op to the >> expected mode and for generic RTL it's us that should decide >> on the mode -- on GENERIC the shift amount has to be an >> integer so why not simply use a mode that is large enough to >> make the constant fit? >> >> Just throwing in some comments here, RTL isn't my primary >> expertise. > I wonder if encapsulation + a target hook to specify the mode would be > better? We'd then have to argue over word_mode, vs QImode vs something > else for the default, but at least we'd have a way for the target to > specify the mode is generally best when working on shift counts. > > In the end I doubt there's a single definition that is overall better. > Largely because I suspect there are times when the narrowest mode is > best, or the mode of the operand being shifted. > > So thoughts on doing the encapsulation with a target hook to specify the > desired mode? Does that get us what we need for SVE and does it provide > us a path forward on this issue if we were to try to move towards > CONST_INTs with modes? I think it'd better to do that only if we have a use case, since it's hard to predict what the best way of handling it is until then. E.g. I'd still like to hold out the possibility of doing this automatically from the .md file instead, if some kind of override ends up being necessary. Like you say, we have to argue over the default either way, and I think that's been the sticking point. Thanks, Richard 2017-11-20 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * emit-rtl.h (gen_int_shift_amount): Declare. * emit-rtl.c (gen_int_shift_amount): New function. * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount instead of GEN_INT. * calls.c (shift_return_value): Likewise. * cse.c (fold_rtx): Likewise. * dse.c (find_shift_sequence): Likewise. * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) (expand_shift, expand_smod_pow2): Likewise. * lower-subreg.c (shift_cost): Likewise. * simplify-rtx.c (simplify_unary_operation_1): Likewise. (simplify_binary_operation_1): Likewise. * combine.c (try_combine, find_split_point, force_int_to_mode) (simplify_shift_const_1, simplify_shift_const): Likewise. (change_zero_ext): Likewise. Use simplify_gen_binary. * optabs.c (expand_superword_shift, expand_doubleword_mult) (expand_unop, expand_binop): Use gen_int_shift_amount instead of GEN_INT. (shift_amt_for_vec_perm_mask): Add a machine_mode argument. Use gen_int_shift_amount instead of GEN_INT. (expand_vec_perm): Update caller accordingly. Use gen_int_shift_amount instead of GEN_INT. 
Index: gcc/emit-rtl.h =================================================================== --- gcc/emit-rtl.h 2017-11-20 20:37:41.918226976 +0000 +++ gcc/emit-rtl.h 2017-11-20 20:37:51.661320782 +0000 @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); extern void adjust_reg_mode (rtx, machine_mode); extern int mem_expr_equal_p (const_tree, const_tree); +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); extern bool need_atomic_barrier_p (enum memmodel, bool); Index: gcc/emit-rtl.c =================================================================== --- gcc/emit-rtl.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/emit-rtl.c 2017-11-20 20:37:51.660320782 +0000 @@ -6507,6 +6507,24 @@ need_atomic_barrier_p (enum memmodel mod } } +/* Return a constant shift amount for shifting a value of mode MODE + by VALUE bits. */ + +rtx +gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value) +{ + /* ??? Using the inner mode should be wide enough for all useful + cases (e.g. QImode usually has 8 shiftable bits, while a QImode + shift amount has a range of [-128, 127]). But in principle + a target could require target-dependent behaviour for a + shift whose shift amount is wider than the shifted value. + Perhaps this should be automatically derived from the .md + files instead, or perhaps have a target hook. */ + scalar_int_mode shift_mode + = int_mode_for_mode (GET_MODE_INNER (mode)).require (); + return gen_int_mode (value, shift_mode); +} + /* Initialize fields of rtl_data related to stack alignment. */ void Index: gcc/asan.c =================================================================== --- gcc/asan.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/asan.c 2017-11-20 20:37:51.657320781 +0000 @@ -1386,7 +1386,7 @@ asan_emit_stack_protection (rtx base, rt TREE_ASM_WRITTEN (id) = 1; emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); shadow_base = expand_binop (Pmode, lshr_optab, base, - GEN_INT (ASAN_SHADOW_SHIFT), + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), NULL_RTX, 1, OPTAB_DIRECT); shadow_base = plus_constant (Pmode, shadow_base, Index: gcc/calls.c =================================================================== --- gcc/calls.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/calls.c 2017-11-20 20:37:51.657320781 +0000 @@ -2742,15 +2742,17 @@ shift_return_value (machine_mode mode, b HOST_WIDE_INT shift; gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); + machine_mode value_mode = GET_MODE (value); + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); if (shift == 0) return false; /* Use ashr rather than lshr for right shifts. This is for the benefit of the MIPS port, which requires SImode values to be sign-extended when stored in 64-bit registers. */ - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : ashr_optab, - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) + if (!force_expand_binop (value_mode, left_p ? 
ashl_optab : ashr_optab, + value, gen_int_shift_amount (value_mode, shift), + value, 1, OPTAB_WIDEN)) gcc_unreachable (); return true; } Index: gcc/cse.c =================================================================== --- gcc/cse.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/cse.c 2017-11-20 20:37:51.660320782 +0000 @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (const_arg1) < 0)) { if (SHIFT_COUNT_TRUNCATED) - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + canon_const_arg1 = gen_int_shift_amount + (mode, (INTVAL (const_arg1) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (inner_const) < 0)) { if (SHIFT_COUNT_TRUNCATED) - inner_const = GEN_INT (INTVAL (inner_const) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + inner_const = gen_int_shift_amount + (mode, (INTVAL (inner_const) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) /* As an exception, we can turn an ASHIFTRT of this form into a shift of the number of bits - 1. */ if (code == ASHIFTRT) - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); + new_const = gen_int_shift_amount + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); else if (!side_effects_p (XEXP (y, 0))) return CONST0_RTX (mode); else Index: gcc/dse.c =================================================================== --- gcc/dse.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/dse.c 2017-11-20 20:37:51.660320782 +0000 @@ -1605,8 +1605,9 @@ find_shift_sequence (int access_size, store_mode, byte); if (ret && CONSTANT_P (ret)) { + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); ret = simplify_const_binary_operation (LSHIFTRT, new_mode, - ret, GEN_INT (shift)); + ret, shift_rtx); if (ret && CONSTANT_P (ret)) { byte = subreg_lowpart_offset (read_mode, new_mode); @@ -1642,7 +1643,8 @@ find_shift_sequence (int access_size, of one dsp where the cost of these two was not the same. But this really is a rare case anyway. */ target = expand_binop (new_mode, lshr_optab, new_reg, - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); + gen_int_shift_amount (new_mode, shift), + new_reg, 1, OPTAB_DIRECT); shift_seq = get_insns (); end_sequence (); Index: gcc/expmed.c =================================================================== --- gcc/expmed.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/expmed.c 2017-11-20 20:37:51.661320782 +0000 @@ -222,7 +222,8 @@ init_expmed_one_mode (struct init_expmed PUT_MODE (all->zext, wider_mode); PUT_MODE (all->wide_mult, wider_mode); PUT_MODE (all->wide_lshr, wider_mode); - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); + XEXP (all->wide_lshr, 1) + = gen_int_shift_amount (wider_mode, mode_bitsize); set_mul_widen_cost (speed, wider_mode, set_src_cost (all->wide_mult, wider_mode, speed)); @@ -909,12 +910,14 @@ store_bit_field_1 (rtx str_rtx, unsigned to make sure that for big-endian machines the higher order bits are used. 
*/ if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) - value_word = simplify_expand_binop (word_mode, lshr_optab, - value_word, - GEN_INT (BITS_PER_WORD - - new_bitsize), - NULL_RTX, true, - OPTAB_LIB_WIDEN); + { + int shift = BITS_PER_WORD - new_bitsize; + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); + value_word = simplify_expand_binop (word_mode, lshr_optab, + value_word, shift_rtx, + NULL_RTX, true, + OPTAB_LIB_WIDEN); + } if (!store_bit_field_1 (op0, new_bitsize, bitnum + bit_offset, @@ -2365,8 +2368,9 @@ expand_shift_1 (enum tree_code code, mac if (CONST_INT_P (op1) && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) - % GET_MODE_BITSIZE (scalar_mode)); + op1 = gen_int_shift_amount (mode, + (unsigned HOST_WIDE_INT) INTVAL (op1) + % GET_MODE_BITSIZE (scalar_mode)); else if (GET_CODE (op1) == SUBREG && subreg_lowpart_p (op1) && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) @@ -2383,7 +2387,8 @@ expand_shift_1 (enum tree_code code, mac && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, GET_MODE_BITSIZE (scalar_mode) - 1)) { - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) + - INTVAL (op1))); left = !left; code = left ? LROTATE_EXPR : RROTATE_EXPR; } @@ -2463,8 +2468,8 @@ expand_shift_1 (enum tree_code code, mac if (op1 == const0_rtx) return shifted; else if (CONST_INT_P (op1)) - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - - INTVAL (op1)); + other_amount = gen_int_shift_amount + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); else { other_amount @@ -2537,8 +2542,9 @@ expand_shift_1 (enum tree_code code, mac expand_shift (enum tree_code code, machine_mode mode, rtx shifted, int amount, rtx target, int unsignedp) { - return expand_shift_1 (code, mode, - shifted, GEN_INT (amount), target, unsignedp); + return expand_shift_1 (code, mode, shifted, + gen_int_shift_amount (mode, amount), + target, unsignedp); } /* Likewise, but return 0 if that cannot be done. */ @@ -3856,7 +3862,7 @@ expand_smod_pow2 (scalar_int_mode mode, { HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; signmask = force_reg (mode, signmask); - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); /* Use the rtx_cost of a LSHIFTRT instruction to determine which instruction sequence to use. 
If logical right shifts Index: gcc/lower-subreg.c =================================================================== --- gcc/lower-subreg.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/lower-subreg.c 2017-11-20 20:37:51.661320782 +0000 @@ -141,7 +141,7 @@ shift_cost (bool speed_p, struct cost_rt PUT_CODE (rtxes->shift, code); PUT_MODE (rtxes->shift, mode); PUT_MODE (rtxes->source, mode); - XEXP (rtxes->shift, 1) = GEN_INT (op1); + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); return set_src_cost (rtxes->shift, mode, speed_p); } Index: gcc/simplify-rtx.c =================================================================== --- gcc/simplify-rtx.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/simplify-rtx.c 2017-11-20 20:37:51.663320783 +0000 @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod if (STORE_FLAG_VALUE == 1) { temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod else if (STORE_FLAG_VALUE == -1) { temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -2672,7 +2674,8 @@ simplify_binary_operation_1 (enum rtx_co { val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); if (val >= 0) - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (ASHIFT, mode, op0, + gen_int_shift_amount (mode, val)); } /* x*2 is x+x and x*(-1) is -x */ @@ -3296,7 +3299,8 @@ simplify_binary_operation_1 (enum rtx_co /* Convert divide by power of two into shift. */ if (CONST_INT_P (trueop1) && (val = exact_log2 (UINTVAL (trueop1))) > 0) - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (LSHIFTRT, mode, op0, + gen_int_shift_amount (mode, val)); break; case DIV: @@ -3416,10 +3420,12 @@ simplify_binary_operation_1 (enum rtx_co && IN_RANGE (INTVAL (trueop1), GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), GET_MODE_UNIT_PRECISION (mode) - 1)) - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, - mode, op0, - GEN_INT (GET_MODE_UNIT_PRECISION (mode) - - INTVAL (trueop1))); + { + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); + return simplify_gen_binary (code == ROTATE ? 
ROTATERT : ROTATE, + mode, op0, new_amount_rtx); + } #endif /* FALLTHRU */ case ASHIFTRT: @@ -3460,8 +3466,8 @@ simplify_binary_operation_1 (enum rtx_co == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode)) && subreg_lowpart_p (op0)) { - rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1)) - + INTVAL (op1)); + rtx tmp = gen_int_shift_amount + (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1)); tmp = simplify_gen_binary (code, inner_mode, XEXP (SUBREG_REG (op0), 0), tmp); @@ -3472,7 +3478,8 @@ simplify_binary_operation_1 (enum rtx_co { val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1); if (val != INTVAL (op1)) - return simplify_gen_binary (code, mode, op0, GEN_INT (val)); + return simplify_gen_binary (code, mode, op0, + gen_int_shift_amount (mode, val)); } break; Index: gcc/combine.c =================================================================== --- gcc/combine.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/combine.c 2017-11-20 20:37:51.659320782 +0000 @@ -3792,8 +3792,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, && INTVAL (XEXP (*split, 1)) > 0 && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0) { + rtx i_rtx = gen_int_shift_amount (split_mode, i); SUBST (*split, gen_rtx_ASHIFT (split_mode, - XEXP (*split, 0), GEN_INT (i))); + XEXP (*split, 0), i_rtx)); /* Update split_code because we may not have a multiply anymore. */ split_code = GET_CODE (*split); @@ -3807,8 +3808,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0) { rtx nsplit = XEXP (*split, 0); + rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i); SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit), - XEXP (nsplit, 0), GEN_INT (i))); + XEXP (nsplit, 0), + i_rtx)); /* Update split_code because we may not have a multiply anymore. */ split_code = GET_CODE (*split); @@ -5077,12 +5080,12 @@ find_split_point (rtx *loc, rtx_insn *in GET_MODE (XEXP (SET_SRC (x), 0)))))) { machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0)); - + rtx pos_rtx = gen_int_shift_amount (mode, pos); SUBST (SET_SRC (x), gen_rtx_NEG (mode, gen_rtx_LSHIFTRT (mode, XEXP (SET_SRC (x), 0), - GEN_INT (pos)))); + pos_rtx))); split = find_split_point (&SET_SRC (x), insn, true); if (split && split != &SET_SRC (x)) @@ -5140,11 +5143,11 @@ find_split_point (rtx *loc, rtx_insn *in { unsigned HOST_WIDE_INT mask = (HOST_WIDE_INT_1U << len) - 1; + rtx pos_rtx = gen_int_shift_amount (mode, pos); SUBST (SET_SRC (x), gen_rtx_AND (mode, gen_rtx_LSHIFTRT - (mode, gen_lowpart (mode, inner), - GEN_INT (pos)), + (mode, gen_lowpart (mode, inner), pos_rtx), gen_int_mode (mask, mode))); split = find_split_point (&SET_SRC (x), insn, true); @@ -5153,14 +5156,15 @@ find_split_point (rtx *loc, rtx_insn *in } else { + int left_bits = GET_MODE_PRECISION (mode) - len - pos; + int right_bits = GET_MODE_PRECISION (mode) - len; SUBST (SET_SRC (x), gen_rtx_fmt_ee (unsignedp ? LSHIFTRT : ASHIFTRT, mode, gen_rtx_ASHIFT (mode, gen_lowpart (mode, inner), - GEN_INT (GET_MODE_PRECISION (mode) - - len - pos)), - GEN_INT (GET_MODE_PRECISION (mode) - len))); + gen_int_shift_amount (mode, left_bits)), + gen_int_shift_amount (mode, right_bits))); split = find_split_point (&SET_SRC (x), insn, true); if (split && split != &SET_SRC (x)) @@ -8935,10 +8939,11 @@ force_int_to_mode (rtx x, scalar_int_mod /* Must be more sign bit copies than the mask needs. 
*/ && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))) >= exact_log2 (mask + 1))) - x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), - GEN_INT (GET_MODE_PRECISION (xmode) - - exact_log2 (mask + 1))); - + { + int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1); + x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), + gen_int_shift_amount (xmode, nbits)); + } goto shiftrt; case ASHIFTRT: @@ -10431,7 +10436,7 @@ simplify_shift_const_1 (enum rtx_code co { enum rtx_code orig_code = code; rtx orig_varop = varop; - int count; + int count, log2; machine_mode mode = result_mode; machine_mode shift_mode; scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode; @@ -10634,13 +10639,11 @@ simplify_shift_const_1 (enum rtx_code co is cheaper. But it is still better on those machines to merge two shifts into one. */ if (CONST_INT_P (XEXP (varop, 1)) - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) { - varop - = simplify_gen_binary (ASHIFT, GET_MODE (varop), - XEXP (varop, 0), - GEN_INT (exact_log2 ( - UINTVAL (XEXP (varop, 1))))); + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); + varop = simplify_gen_binary (ASHIFT, GET_MODE (varop), + XEXP (varop, 0), log2_rtx); continue; } break; @@ -10648,13 +10651,11 @@ simplify_shift_const_1 (enum rtx_code co case UDIV: /* Similar, for when divides are cheaper. */ if (CONST_INT_P (XEXP (varop, 1)) - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) { - varop - = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), - XEXP (varop, 0), - GEN_INT (exact_log2 ( - UINTVAL (XEXP (varop, 1))))); + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); + varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), + XEXP (varop, 0), log2_rtx); continue; } break; @@ -10789,10 +10790,10 @@ simplify_shift_const_1 (enum rtx_code co mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode), int_result_mode); - + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); mask_rtx = simplify_const_binary_operation (code, int_result_mode, - mask_rtx, GEN_INT (count)); + mask_rtx, count_rtx); /* Give up if we can't compute an outer operation to use. 
*/ if (mask_rtx == 0 @@ -10848,9 +10849,10 @@ simplify_shift_const_1 (enum rtx_code co if (code == ASHIFTRT && int_mode != int_result_mode) break; + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); rtx new_rtx = simplify_const_binary_operation (code, int_mode, XEXP (varop, 0), - GEN_INT (count)); + count_rtx); varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1)); count = 0; continue; @@ -10916,7 +10918,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (code, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop), INTVAL (new_rtx), int_result_mode, @@ -11059,7 +11061,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (ASHIFT, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, PLUS, INTVAL (new_rtx), int_result_mode, @@ -11080,7 +11082,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (code, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, XOR, INTVAL (new_rtx), int_result_mode, @@ -11135,12 +11137,12 @@ simplify_shift_const_1 (enum rtx_code co - GET_MODE_UNIT_PRECISION (GET_MODE (varop))))) { rtx varop_inner = XEXP (varop, 0); - - varop_inner - = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), - XEXP (varop_inner, 0), - GEN_INT - (count + INTVAL (XEXP (varop_inner, 1)))); + int new_count = count + INTVAL (XEXP (varop_inner, 1)); + rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner), + new_count); + varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), + XEXP (varop_inner, 0), + new_count_rtx); varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner); count = 0; continue; @@ -11192,7 +11194,8 @@ simplify_shift_const_1 (enum rtx_code co x = NULL_RTX; if (x == NULL_RTX) - x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count)); + x = simplify_gen_binary (code, shift_mode, varop, + gen_int_shift_amount (shift_mode, count)); /* If we were doing an LSHIFTRT in a wider mode than it was originally, turn off all the bits that the shift would have turned off. 
*/ @@ -11254,7 +11257,8 @@ simplify_shift_const (rtx x, enum rtx_co return tem; if (!x) - x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count)); + x = simplify_gen_binary (code, GET_MODE (varop), varop, + gen_int_shift_amount (GET_MODE (varop), count)); if (GET_MODE (x) != result_mode) x = gen_lowpart (result_mode, x); return x; @@ -11445,8 +11449,9 @@ change_zero_ext (rtx pat) if (BITS_BIG_ENDIAN) start = GET_MODE_PRECISION (inner_mode) - size - start; - if (start) - x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start)); + if (start != 0) + x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), + gen_int_shift_amount (inner_mode, start)); else x = XEXP (x, 0); if (mode != inner_mode) Index: gcc/optabs.c =================================================================== --- gcc/optabs.c 2017-11-20 20:37:41.918226976 +0000 +++ gcc/optabs.c 2017-11-20 20:37:51.662320782 +0000 @@ -431,8 +431,9 @@ expand_superword_shift (optab binoptab, if (binoptab != ashr_optab) emit_move_insn (outof_target, CONST0_RTX (word_mode)); else - if (!force_expand_binop (word_mode, binoptab, - outof_input, GEN_INT (BITS_PER_WORD - 1), + if (!force_expand_binop (word_mode, binoptab, outof_input, + gen_int_shift_amount (word_mode, + BITS_PER_WORD - 1), outof_target, unsignedp, methods)) return false; } @@ -789,7 +790,8 @@ expand_doubleword_mult (machine_mode mod { int low = (WORDS_BIG_ENDIAN ? 1 : 0); int high = (WORDS_BIG_ENDIAN ? 0 : 1); - rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1); + rtx wordm1 = (umulp ? NULL_RTX + : gen_int_shift_amount (word_mode, BITS_PER_WORD - 1)); rtx product, adjust, product_high, temp; rtx op0_high = operand_subword_force (op0, high, mode); @@ -1185,7 +1187,7 @@ expand_binop (machine_mode mode, optab b unsigned int bits = GET_MODE_PRECISION (int_mode); if (CONST_INT_P (op1)) - newop1 = GEN_INT (bits - INTVAL (op1)); + newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1)); else if (targetm.shift_truncation_mask (int_mode) == bits - 1) newop1 = negate_rtx (GET_MODE (op1), op1); else @@ -1403,7 +1405,7 @@ expand_binop (machine_mode mode, optab b /* Apply the truncation to constant shifts. 
*/ if (double_shift_mask > 0 && CONST_INT_P (op1)) - op1 = GEN_INT (INTVAL (op1) & double_shift_mask); + op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode); if (op1 == CONST0_RTX (op1_mode)) return op0; @@ -1513,7 +1515,7 @@ expand_binop (machine_mode mode, optab b else { rtx into_temp1, into_temp2, outof_temp1, outof_temp2; - rtx first_shift_count, second_shift_count; + HOST_WIDE_INT first_shift_count, second_shift_count; optab reverse_unsigned_shift, unsigned_shift; reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD) @@ -1524,20 +1526,24 @@ expand_binop (machine_mode mode, optab b if (shift_count > BITS_PER_WORD) { - first_shift_count = GEN_INT (shift_count - BITS_PER_WORD); - second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count); + first_shift_count = shift_count - BITS_PER_WORD; + second_shift_count = 2 * BITS_PER_WORD - shift_count; } else { - first_shift_count = GEN_INT (BITS_PER_WORD - shift_count); - second_shift_count = GEN_INT (shift_count); + first_shift_count = BITS_PER_WORD - shift_count; + second_shift_count = shift_count; } + rtx first_shift_count_rtx + = gen_int_shift_amount (word_mode, first_shift_count); + rtx second_shift_count_rtx + = gen_int_shift_amount (word_mode, second_shift_count); into_temp1 = expand_binop (word_mode, unsigned_shift, - outof_input, first_shift_count, + outof_input, first_shift_count_rtx, NULL_RTX, unsignedp, next_methods); into_temp2 = expand_binop (word_mode, reverse_unsigned_shift, - into_input, second_shift_count, + into_input, second_shift_count_rtx, NULL_RTX, unsignedp, next_methods); if (into_temp1 != 0 && into_temp2 != 0) @@ -1550,10 +1556,10 @@ expand_binop (machine_mode mode, optab b emit_move_insn (into_target, inter); outof_temp1 = expand_binop (word_mode, unsigned_shift, - into_input, first_shift_count, + into_input, first_shift_count_rtx, NULL_RTX, unsignedp, next_methods); outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift, - outof_input, second_shift_count, + outof_input, second_shift_count_rtx, NULL_RTX, unsignedp, next_methods); if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0) @@ -2793,25 +2799,29 @@ expand_unop (machine_mode mode, optab un if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing) { - temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target, - unsignedp, OPTAB_DIRECT); + temp = expand_binop (mode, rotl_optab, op0, + gen_int_shift_amount (mode, 8), + target, unsignedp, OPTAB_DIRECT); if (temp) return temp; } if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing) { - temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target, - unsignedp, OPTAB_DIRECT); + temp = expand_binop (mode, rotr_optab, op0, + gen_int_shift_amount (mode, 8), + target, unsignedp, OPTAB_DIRECT); if (temp) return temp; } last = get_last_insn (); - temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX, + temp1 = expand_binop (mode, ashl_optab, op0, + gen_int_shift_amount (mode, 8), NULL_RTX, unsignedp, OPTAB_WIDEN); - temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX, + temp2 = expand_binop (mode, lshr_optab, op0, + gen_int_shift_amount (mode, 8), NULL_RTX, unsignedp, OPTAB_WIDEN); if (temp1 && temp2) { @@ -5369,11 +5379,11 @@ vector_compare_rtx (machine_mode cmp_mod } /* Checks if vec_perm mask SEL is a constant equivalent to a shift of the first - vec_perm operand, assuming the second operand is a constant vector of zeroes. - Return the shift distance in bits if so, or NULL_RTX if the vec_perm is not a - shift. 
*/ + vec_perm operand (which has mode OP0_MODE), assuming the second + operand is a constant vector of zeroes. Return the shift distance in + bits if so, or NULL_RTX if the vec_perm is not a shift. */ static rtx -shift_amt_for_vec_perm_mask (rtx sel) +shift_amt_for_vec_perm_mask (machine_mode op0_mode, rtx sel) { unsigned int i, first, nelt = GET_MODE_NUNITS (GET_MODE (sel)); unsigned int bitsize = GET_MODE_UNIT_BITSIZE (GET_MODE (sel)); @@ -5393,7 +5403,7 @@ shift_amt_for_vec_perm_mask (rtx sel) return NULL_RTX; } - return GEN_INT (first * bitsize); + return gen_int_shift_amount (op0_mode, first * bitsize); } /* A subroutine of expand_vec_perm for expanding one vec_perm insn. */ @@ -5473,7 +5483,7 @@ expand_vec_perm (machine_mode mode, rtx && (shift_code != CODE_FOR_nothing || shift_code_qi != CODE_FOR_nothing)) { - shift_amt = shift_amt_for_vec_perm_mask (sel); + shift_amt = shift_amt_for_vec_perm_mask (mode, sel); if (shift_amt) { struct expand_operand ops[3]; @@ -5563,7 +5573,8 @@ expand_vec_perm (machine_mode mode, rtx NULL, 0, OPTAB_DIRECT); else sel = expand_simple_binop (selmode, ASHIFT, sel, - GEN_INT (exact_log2 (u)), + gen_int_shift_amount (selmode, + exact_log2 (u)), NULL, 0, OPTAB_DIRECT); gcc_assert (sel != NULL);
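The QImode-versus-HImode point made in the message above -- a (const_int -1) shift amount meaning a shift by 255 or by 65535 depending on the width it is read at -- is easy to reproduce outside the compiler. The snippet below only illustrates that arithmetic and is not GCC code; the stdint types are stand-ins for the modes.

/* Standalone illustration, not GCC code: the same value -1 denotes a
   shift count of 255 when read as an 8-bit ("QImode") unsigned
   quantity but 65535 when read as a 16-bit ("HImode") one, so the
   choice of shift-amount mode is observable in principle.  */
#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int64_t amount = -1;                /* the CONST_INT's value */
  uint8_t  as_qi = (uint8_t) amount;  /* QImode view: 255 */
  uint16_t as_hi = (uint16_t) amount; /* HImode view: 65535 */
  printf ("QImode view: %u, HImode view: %u\n",
          (unsigned) as_qi, (unsigned) as_hi);
  return 0;
}

This is the sense in which a shift amount conceptually has a mode even while CONST_INT itself stays modeless.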
On Mon, Nov 20, 2017 at 10:02 PM, Richard Sandiford <richard.sandiford@linaro.org> wrote: > Richard Biener <richard.guenther@gmail.com> writes: >> On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener >> <richard.guenther@gmail.com> wrote: >>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>> <richard.sandiford@linaro.org> wrote: >>>> This patch adds a stub helper routine to provide the mode >>>> of a scalar shift amount, given the mode of the values >>>> being shifted. >>>> >>>> One long-standing problem has been to decide what this mode >>>> should be for arbitrary rtxes (as opposed to those directly >>>> tied to a target pattern). Is it the mode of the shifted >>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>> the corresponding target pattern says? (In which case what >>>> should the mode be when the target doesn't have a pattern?) >>>> >>>> For now the patch picks word_mode, which should be safe on >>>> all targets but could perhaps become suboptimal if the helper >>>> routine is used more often than it is in this patch. As it >>>> stands the patch does not change the generated code. >>>> >>>> The patch also adds a helper function that constructs rtxes >>>> for constant shift amounts, again given the mode of the value >>>> being shifted. As well as helping with the SVE patches, this >>>> is one step towards allowing CONST_INTs to have a real mode. >>> >>> I think gen_shift_amount_mode is flawed and while encapsulating >>> constant shift amount RTX generation into a gen_int_shift_amount >>> looks good to me I'd rather have that ??? in this function (and >>> I'd use the mode of the RTX shifted, not word_mode...). > > OK. I'd gone for word_mode because that's what expand_binop uses > for CONST_INTs: > > op1_mode = (GET_MODE (op1) != VOIDmode > ? as_a <scalar_int_mode> (GET_MODE (op1)) > : word_mode); > > But using the inner mode should be fine too. The patch below does that. > >>> In the end it's up to insn recognizing to convert the op to the >>> expected mode and for generic RTL it's us that should decide >>> on the mode -- on GENERIC the shift amount has to be an >>> integer so why not simply use a mode that is large enough to >>> make the constant fit? > > ...but I can do that instead if you think it's better. > >>> Just throwing in some comments here, RTL isn't my primary >>> expertise. >> >> To add a little bit - shift amounts is maybe the only(?) place >> where a modeless CONST_INT makes sense! So "fixing" >> that first sounds backwards. > > But even here they have a mode conceptually, since out-of-range shift > amounts are target-defined rather than undefined. E.g. if the target > interprets the shift amount as unsigned, then for a shift amount > (const_int -1) it matters whether the mode is QImode (and so we're > shifting by 255) or HImode (and so we're shifting by 65535. I think RTL is well-defined (at least I hope so ...) and machine constraints need to be modeled explicitely (like embedding an implicit bit_and in shift patterns). > OK, so shifts by 65535 make no sense in practice, but *conceptually*... :-) > > Jeff Law <law@redhat.com> writes: >> On 10/26/2017 06:06 AM, Richard Biener wrote: >>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>> <richard.sandiford@linaro.org> wrote: >>>> This patch adds a stub helper routine to provide the mode >>>> of a scalar shift amount, given the mode of the values >>>> being shifted. 
>>>> >>>> One long-standing problem has been to decide what this mode >>>> should be for arbitrary rtxes (as opposed to those directly >>>> tied to a target pattern). Is it the mode of the shifted >>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>> the corresponding target pattern says? (In which case what >>>> should the mode be when the target doesn't have a pattern?) >>>> >>>> For now the patch picks word_mode, which should be safe on >>>> all targets but could perhaps become suboptimal if the helper >>>> routine is used more often than it is in this patch. As it >>>> stands the patch does not change the generated code. >>>> >>>> The patch also adds a helper function that constructs rtxes >>>> for constant shift amounts, again given the mode of the value >>>> being shifted. As well as helping with the SVE patches, this >>>> is one step towards allowing CONST_INTs to have a real mode. >>> >>> I think gen_shift_amount_mode is flawed and while encapsulating >>> constant shift amount RTX generation into a gen_int_shift_amount >>> looks good to me I'd rather have that ??? in this function (and >>> I'd use the mode of the RTX shifted, not word_mode...). >>> >>> In the end it's up to insn recognizing to convert the op to the >>> expected mode and for generic RTL it's us that should decide >>> on the mode -- on GENERIC the shift amount has to be an >>> integer so why not simply use a mode that is large enough to >>> make the constant fit? >>> >>> Just throwing in some comments here, RTL isn't my primary >>> expertise. >> I wonder if encapsulation + a target hook to specify the mode would be >> better? We'd then have to argue over word_mode, vs QImode vs something >> else for the default, but at least we'd have a way for the target to >> specify the mode is generally best when working on shift counts. >> >> In the end I doubt there's a single definition that is overall better. >> Largely because I suspect there are times when the narrowest mode is >> best, or the mode of the operand being shifted. >> >> So thoughts on doing the encapsulation with a target hook to specify the >> desired mode? Does that get us what we need for SVE and does it provide >> us a path forward on this issue if we were to try to move towards >> CONST_INTs with modes? > > I think it'd better to do that only if we have a use case, since > it's hard to predict what the best way of handling it is until then. > E.g. I'd still like to hold out the possibility of doing this automatically > from the .md file instead, if some kind of override ends up being necessary. > > Like you say, we have to argue over the default either way, and I think > that's been the sticking point. > > Thanks, > Richard > > > 2017-11-20 Richard Sandiford <richard.sandiford@linaro.org> > Alan Hayward <alan.hayward@arm.com> > David Sherwood <david.sherwood@arm.com> > > gcc/ > * emit-rtl.h (gen_int_shift_amount): Declare. > * emit-rtl.c (gen_int_shift_amount): New function. > * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount > instead of GEN_INT. > * calls.c (shift_return_value): Likewise. > * cse.c (fold_rtx): Likewise. > * dse.c (find_shift_sequence): Likewise. > * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) > (expand_shift, expand_smod_pow2): Likewise. > * lower-subreg.c (shift_cost): Likewise. > * simplify-rtx.c (simplify_unary_operation_1): Likewise. > (simplify_binary_operation_1): Likewise. 
> * combine.c (try_combine, find_split_point, force_int_to_mode) > (simplify_shift_const_1, simplify_shift_const): Likewise. > (change_zero_ext): Likewise. Use simplify_gen_binary. > * optabs.c (expand_superword_shift, expand_doubleword_mult) > (expand_unop, expand_binop): Use gen_int_shift_amount instead > of GEN_INT. > (shift_amt_for_vec_perm_mask): Add a machine_mode argument. > Use gen_int_shift_amount instead of GEN_INT. > (expand_vec_perm): Update caller accordingly. Use > gen_int_shift_amount instead of GEN_INT. > > Index: gcc/emit-rtl.h > =================================================================== > --- gcc/emit-rtl.h 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/emit-rtl.h 2017-11-20 20:37:51.661320782 +0000 > @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, > extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); > extern void adjust_reg_mode (rtx, machine_mode); > extern int mem_expr_equal_p (const_tree, const_tree); > +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); > > extern bool need_atomic_barrier_p (enum memmodel, bool); > > Index: gcc/emit-rtl.c > =================================================================== > --- gcc/emit-rtl.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/emit-rtl.c 2017-11-20 20:37:51.660320782 +0000 > @@ -6507,6 +6507,24 @@ need_atomic_barrier_p (enum memmodel mod > } > } > > +/* Return a constant shift amount for shifting a value of mode MODE > + by VALUE bits. */ > + > +rtx > +gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value) > +{ > + /* ??? Using the inner mode should be wide enough for all useful > + cases (e.g. QImode usually has 8 shiftable bits, while a QImode > + shift amount has a range of [-128, 127]). But in principle > + a target could require target-dependent behaviour for a > + shift whose shift amount is wider than the shifted value. > + Perhaps this should be automatically derived from the .md > + files instead, or perhaps have a target hook. */ > + scalar_int_mode shift_mode > + = int_mode_for_mode (GET_MODE_INNER (mode)).require (); > + return gen_int_mode (value, shift_mode); > +} > + > /* Initialize fields of rtl_data related to stack alignment. */ > > void > Index: gcc/asan.c > =================================================================== > --- gcc/asan.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/asan.c 2017-11-20 20:37:51.657320781 +0000 > @@ -1386,7 +1386,7 @@ asan_emit_stack_protection (rtx base, rt > TREE_ASM_WRITTEN (id) = 1; > emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); > shadow_base = expand_binop (Pmode, lshr_optab, base, > - GEN_INT (ASAN_SHADOW_SHIFT), > + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), > NULL_RTX, 1, OPTAB_DIRECT); > shadow_base > = plus_constant (Pmode, shadow_base, > Index: gcc/calls.c > =================================================================== > --- gcc/calls.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/calls.c 2017-11-20 20:37:51.657320781 +0000 > @@ -2742,15 +2742,17 @@ shift_return_value (machine_mode mode, b > HOST_WIDE_INT shift; > > gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); > - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); > + machine_mode value_mode = GET_MODE (value); > + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); > if (shift == 0) > return false; > > /* Use ashr rather than lshr for right shifts. This is for the benefit > of the MIPS port, which requires SImode values to be sign-extended > when stored in 64-bit registers. 
*/ > - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : ashr_optab, > - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) > + if (!force_expand_binop (value_mode, left_p ? ashl_optab : ashr_optab, > + value, gen_int_shift_amount (value_mode, shift), > + value, 1, OPTAB_WIDEN)) > gcc_unreachable (); > return true; > } > Index: gcc/cse.c > =================================================================== > --- gcc/cse.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/cse.c 2017-11-20 20:37:51.660320782 +0000 > @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) > || INTVAL (const_arg1) < 0)) > { > if (SHIFT_COUNT_TRUNCATED) > - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) > - & (GET_MODE_UNIT_BITSIZE (mode) > - - 1)); > + canon_const_arg1 = gen_int_shift_amount > + (mode, (INTVAL (const_arg1) > + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); > else > break; > } > @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) > || INTVAL (inner_const) < 0)) > { > if (SHIFT_COUNT_TRUNCATED) > - inner_const = GEN_INT (INTVAL (inner_const) > - & (GET_MODE_UNIT_BITSIZE (mode) > - - 1)); > + inner_const = gen_int_shift_amount > + (mode, (INTVAL (inner_const) > + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); > else > break; > } > @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) > /* As an exception, we can turn an ASHIFTRT of this > form into a shift of the number of bits - 1. */ > if (code == ASHIFTRT) > - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); > + new_const = gen_int_shift_amount > + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); > else if (!side_effects_p (XEXP (y, 0))) > return CONST0_RTX (mode); > else > Index: gcc/dse.c > =================================================================== > --- gcc/dse.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/dse.c 2017-11-20 20:37:51.660320782 +0000 > @@ -1605,8 +1605,9 @@ find_shift_sequence (int access_size, > store_mode, byte); > if (ret && CONSTANT_P (ret)) > { > + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); > ret = simplify_const_binary_operation (LSHIFTRT, new_mode, > - ret, GEN_INT (shift)); > + ret, shift_rtx); > if (ret && CONSTANT_P (ret)) > { > byte = subreg_lowpart_offset (read_mode, new_mode); > @@ -1642,7 +1643,8 @@ find_shift_sequence (int access_size, > of one dsp where the cost of these two was not the same. But > this really is a rare case anyway. */ > target = expand_binop (new_mode, lshr_optab, new_reg, > - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); > + gen_int_shift_amount (new_mode, shift), > + new_reg, 1, OPTAB_DIRECT); > > shift_seq = get_insns (); > end_sequence (); > Index: gcc/expmed.c > =================================================================== > --- gcc/expmed.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/expmed.c 2017-11-20 20:37:51.661320782 +0000 > @@ -222,7 +222,8 @@ init_expmed_one_mode (struct init_expmed > PUT_MODE (all->zext, wider_mode); > PUT_MODE (all->wide_mult, wider_mode); > PUT_MODE (all->wide_lshr, wider_mode); > - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); > + XEXP (all->wide_lshr, 1) > + = gen_int_shift_amount (wider_mode, mode_bitsize); > > set_mul_widen_cost (speed, wider_mode, > set_src_cost (all->wide_mult, wider_mode, speed)); > @@ -909,12 +910,14 @@ store_bit_field_1 (rtx str_rtx, unsigned > to make sure that for big-endian machines the higher order > bits are used. 
*/ > if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) > - value_word = simplify_expand_binop (word_mode, lshr_optab, > - value_word, > - GEN_INT (BITS_PER_WORD > - - new_bitsize), > - NULL_RTX, true, > - OPTAB_LIB_WIDEN); > + { > + int shift = BITS_PER_WORD - new_bitsize; > + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); > + value_word = simplify_expand_binop (word_mode, lshr_optab, > + value_word, shift_rtx, > + NULL_RTX, true, > + OPTAB_LIB_WIDEN); > + } > > if (!store_bit_field_1 (op0, new_bitsize, > bitnum + bit_offset, > @@ -2365,8 +2368,9 @@ expand_shift_1 (enum tree_code code, mac > if (CONST_INT_P (op1) > && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= > (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) > - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) > - % GET_MODE_BITSIZE (scalar_mode)); > + op1 = gen_int_shift_amount (mode, > + (unsigned HOST_WIDE_INT) INTVAL (op1) > + % GET_MODE_BITSIZE (scalar_mode)); > else if (GET_CODE (op1) == SUBREG > && subreg_lowpart_p (op1) > && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) > @@ -2383,7 +2387,8 @@ expand_shift_1 (enum tree_code code, mac > && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, > GET_MODE_BITSIZE (scalar_mode) - 1)) > { > - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); > + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) > + - INTVAL (op1))); > left = !left; > code = left ? LROTATE_EXPR : RROTATE_EXPR; > } > @@ -2463,8 +2468,8 @@ expand_shift_1 (enum tree_code code, mac > if (op1 == const0_rtx) > return shifted; > else if (CONST_INT_P (op1)) > - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) > - - INTVAL (op1)); > + other_amount = gen_int_shift_amount > + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); > else > { > other_amount > @@ -2537,8 +2542,9 @@ expand_shift_1 (enum tree_code code, mac > expand_shift (enum tree_code code, machine_mode mode, rtx shifted, > int amount, rtx target, int unsignedp) > { > - return expand_shift_1 (code, mode, > - shifted, GEN_INT (amount), target, unsignedp); > + return expand_shift_1 (code, mode, shifted, > + gen_int_shift_amount (mode, amount), > + target, unsignedp); > } > > /* Likewise, but return 0 if that cannot be done. */ > @@ -3856,7 +3862,7 @@ expand_smod_pow2 (scalar_int_mode mode, > { > HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; > signmask = force_reg (mode, signmask); > - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); > + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); > > /* Use the rtx_cost of a LSHIFTRT instruction to determine > which instruction sequence to use. 
If logical right shifts > Index: gcc/lower-subreg.c > =================================================================== > --- gcc/lower-subreg.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/lower-subreg.c 2017-11-20 20:37:51.661320782 +0000 > @@ -141,7 +141,7 @@ shift_cost (bool speed_p, struct cost_rt > PUT_CODE (rtxes->shift, code); > PUT_MODE (rtxes->shift, mode); > PUT_MODE (rtxes->source, mode); > - XEXP (rtxes->shift, 1) = GEN_INT (op1); > + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); > return set_src_cost (rtxes->shift, mode, speed_p); > } > > Index: gcc/simplify-rtx.c > =================================================================== > --- gcc/simplify-rtx.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/simplify-rtx.c 2017-11-20 20:37:51.663320783 +0000 > @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod > if (STORE_FLAG_VALUE == 1) > { > temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), > - GEN_INT (isize - 1)); > + gen_int_shift_amount (inner, > + isize - 1)); > if (int_mode == inner) > return temp; > if (GET_MODE_PRECISION (int_mode) > isize) > @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod > else if (STORE_FLAG_VALUE == -1) > { > temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), > - GEN_INT (isize - 1)); > + gen_int_shift_amount (inner, > + isize - 1)); > if (int_mode == inner) > return temp; > if (GET_MODE_PRECISION (int_mode) > isize) > @@ -2672,7 +2674,8 @@ simplify_binary_operation_1 (enum rtx_co > { > val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); > if (val >= 0) > - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); > + return simplify_gen_binary (ASHIFT, mode, op0, > + gen_int_shift_amount (mode, val)); > } > > /* x*2 is x+x and x*(-1) is -x */ > @@ -3296,7 +3299,8 @@ simplify_binary_operation_1 (enum rtx_co > /* Convert divide by power of two into shift. */ > if (CONST_INT_P (trueop1) > && (val = exact_log2 (UINTVAL (trueop1))) > 0) > - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); > + return simplify_gen_binary (LSHIFTRT, mode, op0, > + gen_int_shift_amount (mode, val)); > break; > > case DIV: > @@ -3416,10 +3420,12 @@ simplify_binary_operation_1 (enum rtx_co > && IN_RANGE (INTVAL (trueop1), > GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), > GET_MODE_UNIT_PRECISION (mode) - 1)) > - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, > - mode, op0, > - GEN_INT (GET_MODE_UNIT_PRECISION (mode) > - - INTVAL (trueop1))); > + { > + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); > + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); > + return simplify_gen_binary (code == ROTATE ? 
ROTATERT : ROTATE, > + mode, op0, new_amount_rtx); > + } > #endif > /* FALLTHRU */ > case ASHIFTRT: > @@ -3460,8 +3466,8 @@ simplify_binary_operation_1 (enum rtx_co > == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode)) > && subreg_lowpart_p (op0)) > { > - rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1)) > - + INTVAL (op1)); > + rtx tmp = gen_int_shift_amount > + (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1)); > tmp = simplify_gen_binary (code, inner_mode, > XEXP (SUBREG_REG (op0), 0), > tmp); > @@ -3472,7 +3478,8 @@ simplify_binary_operation_1 (enum rtx_co > { > val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1); > if (val != INTVAL (op1)) > - return simplify_gen_binary (code, mode, op0, GEN_INT (val)); > + return simplify_gen_binary (code, mode, op0, > + gen_int_shift_amount (mode, val)); > } > break; > > Index: gcc/combine.c > =================================================================== > --- gcc/combine.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/combine.c 2017-11-20 20:37:51.659320782 +0000 > @@ -3792,8 +3792,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, > && INTVAL (XEXP (*split, 1)) > 0 > && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0) > { > + rtx i_rtx = gen_int_shift_amount (split_mode, i); > SUBST (*split, gen_rtx_ASHIFT (split_mode, > - XEXP (*split, 0), GEN_INT (i))); > + XEXP (*split, 0), i_rtx)); > /* Update split_code because we may not have a multiply > anymore. */ > split_code = GET_CODE (*split); > @@ -3807,8 +3808,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, > && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0) > { > rtx nsplit = XEXP (*split, 0); > + rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i); > SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit), > - XEXP (nsplit, 0), GEN_INT (i))); > + XEXP (nsplit, 0), > + i_rtx)); > /* Update split_code because we may not have a multiply > anymore. */ > split_code = GET_CODE (*split); > @@ -5077,12 +5080,12 @@ find_split_point (rtx *loc, rtx_insn *in > GET_MODE (XEXP (SET_SRC (x), 0)))))) > { > machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0)); > - > + rtx pos_rtx = gen_int_shift_amount (mode, pos); > SUBST (SET_SRC (x), > gen_rtx_NEG (mode, > gen_rtx_LSHIFTRT (mode, > XEXP (SET_SRC (x), 0), > - GEN_INT (pos)))); > + pos_rtx))); > > split = find_split_point (&SET_SRC (x), insn, true); > if (split && split != &SET_SRC (x)) > @@ -5140,11 +5143,11 @@ find_split_point (rtx *loc, rtx_insn *in > { > unsigned HOST_WIDE_INT mask > = (HOST_WIDE_INT_1U << len) - 1; > + rtx pos_rtx = gen_int_shift_amount (mode, pos); > SUBST (SET_SRC (x), > gen_rtx_AND (mode, > gen_rtx_LSHIFTRT > - (mode, gen_lowpart (mode, inner), > - GEN_INT (pos)), > + (mode, gen_lowpart (mode, inner), pos_rtx), > gen_int_mode (mask, mode))); > > split = find_split_point (&SET_SRC (x), insn, true); > @@ -5153,14 +5156,15 @@ find_split_point (rtx *loc, rtx_insn *in > } > else > { > + int left_bits = GET_MODE_PRECISION (mode) - len - pos; > + int right_bits = GET_MODE_PRECISION (mode) - len; > SUBST (SET_SRC (x), > gen_rtx_fmt_ee > (unsignedp ? 
LSHIFTRT : ASHIFTRT, mode, > gen_rtx_ASHIFT (mode, > gen_lowpart (mode, inner), > - GEN_INT (GET_MODE_PRECISION (mode) > - - len - pos)), > - GEN_INT (GET_MODE_PRECISION (mode) - len))); > + gen_int_shift_amount (mode, left_bits)), > + gen_int_shift_amount (mode, right_bits))); > > split = find_split_point (&SET_SRC (x), insn, true); > if (split && split != &SET_SRC (x)) > @@ -8935,10 +8939,11 @@ force_int_to_mode (rtx x, scalar_int_mod > /* Must be more sign bit copies than the mask needs. */ > && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))) > >= exact_log2 (mask + 1))) > - x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), > - GEN_INT (GET_MODE_PRECISION (xmode) > - - exact_log2 (mask + 1))); > - > + { > + int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1); > + x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), > + gen_int_shift_amount (xmode, nbits)); > + } > goto shiftrt; > > case ASHIFTRT: > @@ -10431,7 +10436,7 @@ simplify_shift_const_1 (enum rtx_code co > { > enum rtx_code orig_code = code; > rtx orig_varop = varop; > - int count; > + int count, log2; > machine_mode mode = result_mode; > machine_mode shift_mode; > scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode; > @@ -10634,13 +10639,11 @@ simplify_shift_const_1 (enum rtx_code co > is cheaper. But it is still better on those machines to > merge two shifts into one. */ > if (CONST_INT_P (XEXP (varop, 1)) > - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) > + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) > { > - varop > - = simplify_gen_binary (ASHIFT, GET_MODE (varop), > - XEXP (varop, 0), > - GEN_INT (exact_log2 ( > - UINTVAL (XEXP (varop, 1))))); > + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); > + varop = simplify_gen_binary (ASHIFT, GET_MODE (varop), > + XEXP (varop, 0), log2_rtx); > continue; > } > break; > @@ -10648,13 +10651,11 @@ simplify_shift_const_1 (enum rtx_code co > case UDIV: > /* Similar, for when divides are cheaper. */ > if (CONST_INT_P (XEXP (varop, 1)) > - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) > + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) > { > - varop > - = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), > - XEXP (varop, 0), > - GEN_INT (exact_log2 ( > - UINTVAL (XEXP (varop, 1))))); > + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); > + varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), > + XEXP (varop, 0), log2_rtx); > continue; > } > break; > @@ -10789,10 +10790,10 @@ simplify_shift_const_1 (enum rtx_code co > > mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode), > int_result_mode); > - > + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); > mask_rtx > = simplify_const_binary_operation (code, int_result_mode, > - mask_rtx, GEN_INT (count)); > + mask_rtx, count_rtx); > > /* Give up if we can't compute an outer operation to use. 
*/ > if (mask_rtx == 0 > @@ -10848,9 +10849,10 @@ simplify_shift_const_1 (enum rtx_code co > if (code == ASHIFTRT && int_mode != int_result_mode) > break; > > + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); > rtx new_rtx = simplify_const_binary_operation (code, int_mode, > XEXP (varop, 0), > - GEN_INT (count)); > + count_rtx); > varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1)); > count = 0; > continue; > @@ -10916,7 +10918,7 @@ simplify_shift_const_1 (enum rtx_code co > && (new_rtx = simplify_const_binary_operation > (code, int_result_mode, > gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), > - GEN_INT (count))) != 0 > + gen_int_shift_amount (int_result_mode, count))) != 0 > && CONST_INT_P (new_rtx) > && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop), > INTVAL (new_rtx), int_result_mode, > @@ -11059,7 +11061,7 @@ simplify_shift_const_1 (enum rtx_code co > && (new_rtx = simplify_const_binary_operation > (ASHIFT, int_result_mode, > gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), > - GEN_INT (count))) != 0 > + gen_int_shift_amount (int_result_mode, count))) != 0 > && CONST_INT_P (new_rtx) > && merge_outer_ops (&outer_op, &outer_const, PLUS, > INTVAL (new_rtx), int_result_mode, > @@ -11080,7 +11082,7 @@ simplify_shift_const_1 (enum rtx_code co > && (new_rtx = simplify_const_binary_operation > (code, int_result_mode, > gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), > - GEN_INT (count))) != 0 > + gen_int_shift_amount (int_result_mode, count))) != 0 > && CONST_INT_P (new_rtx) > && merge_outer_ops (&outer_op, &outer_const, XOR, > INTVAL (new_rtx), int_result_mode, > @@ -11135,12 +11137,12 @@ simplify_shift_const_1 (enum rtx_code co > - GET_MODE_UNIT_PRECISION (GET_MODE (varop))))) > { > rtx varop_inner = XEXP (varop, 0); > - > - varop_inner > - = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), > - XEXP (varop_inner, 0), > - GEN_INT > - (count + INTVAL (XEXP (varop_inner, 1)))); > + int new_count = count + INTVAL (XEXP (varop_inner, 1)); > + rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner), > + new_count); > + varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), > + XEXP (varop_inner, 0), > + new_count_rtx); > varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner); > count = 0; > continue; > @@ -11192,7 +11194,8 @@ simplify_shift_const_1 (enum rtx_code co > x = NULL_RTX; > > if (x == NULL_RTX) > - x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count)); > + x = simplify_gen_binary (code, shift_mode, varop, > + gen_int_shift_amount (shift_mode, count)); > > /* If we were doing an LSHIFTRT in a wider mode than it was originally, > turn off all the bits that the shift would have turned off. 
*/ > @@ -11254,7 +11257,8 @@ simplify_shift_const (rtx x, enum rtx_co > return tem; > > if (!x) > - x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count)); > + x = simplify_gen_binary (code, GET_MODE (varop), varop, > + gen_int_shift_amount (GET_MODE (varop), count)); > if (GET_MODE (x) != result_mode) > x = gen_lowpart (result_mode, x); > return x; > @@ -11445,8 +11449,9 @@ change_zero_ext (rtx pat) > if (BITS_BIG_ENDIAN) > start = GET_MODE_PRECISION (inner_mode) - size - start; > > - if (start) > - x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start)); > + if (start != 0) > + x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), > + gen_int_shift_amount (inner_mode, start)); > else > x = XEXP (x, 0); > if (mode != inner_mode) > Index: gcc/optabs.c > =================================================================== > --- gcc/optabs.c 2017-11-20 20:37:41.918226976 +0000 > +++ gcc/optabs.c 2017-11-20 20:37:51.662320782 +0000 > @@ -431,8 +431,9 @@ expand_superword_shift (optab binoptab, > if (binoptab != ashr_optab) > emit_move_insn (outof_target, CONST0_RTX (word_mode)); > else > - if (!force_expand_binop (word_mode, binoptab, > - outof_input, GEN_INT (BITS_PER_WORD - 1), > + if (!force_expand_binop (word_mode, binoptab, outof_input, > + gen_int_shift_amount (word_mode, > + BITS_PER_WORD - 1), > outof_target, unsignedp, methods)) > return false; > } > @@ -789,7 +790,8 @@ expand_doubleword_mult (machine_mode mod > { > int low = (WORDS_BIG_ENDIAN ? 1 : 0); > int high = (WORDS_BIG_ENDIAN ? 0 : 1); > - rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1); > + rtx wordm1 = (umulp ? NULL_RTX > + : gen_int_shift_amount (word_mode, BITS_PER_WORD - 1)); > rtx product, adjust, product_high, temp; > > rtx op0_high = operand_subword_force (op0, high, mode); > @@ -1185,7 +1187,7 @@ expand_binop (machine_mode mode, optab b > unsigned int bits = GET_MODE_PRECISION (int_mode); > > if (CONST_INT_P (op1)) > - newop1 = GEN_INT (bits - INTVAL (op1)); > + newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1)); > else if (targetm.shift_truncation_mask (int_mode) == bits - 1) > newop1 = negate_rtx (GET_MODE (op1), op1); > else > @@ -1403,7 +1405,7 @@ expand_binop (machine_mode mode, optab b > > /* Apply the truncation to constant shifts. 
*/ > if (double_shift_mask > 0 && CONST_INT_P (op1)) > - op1 = GEN_INT (INTVAL (op1) & double_shift_mask); > + op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode); > > if (op1 == CONST0_RTX (op1_mode)) > return op0; > @@ -1513,7 +1515,7 @@ expand_binop (machine_mode mode, optab b > else > { > rtx into_temp1, into_temp2, outof_temp1, outof_temp2; > - rtx first_shift_count, second_shift_count; > + HOST_WIDE_INT first_shift_count, second_shift_count; > optab reverse_unsigned_shift, unsigned_shift; > > reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD) > @@ -1524,20 +1526,24 @@ expand_binop (machine_mode mode, optab b > > if (shift_count > BITS_PER_WORD) > { > - first_shift_count = GEN_INT (shift_count - BITS_PER_WORD); > - second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count); > + first_shift_count = shift_count - BITS_PER_WORD; > + second_shift_count = 2 * BITS_PER_WORD - shift_count; > } > else > { > - first_shift_count = GEN_INT (BITS_PER_WORD - shift_count); > - second_shift_count = GEN_INT (shift_count); > + first_shift_count = BITS_PER_WORD - shift_count; > + second_shift_count = shift_count; > } > + rtx first_shift_count_rtx > + = gen_int_shift_amount (word_mode, first_shift_count); > + rtx second_shift_count_rtx > + = gen_int_shift_amount (word_mode, second_shift_count); > > into_temp1 = expand_binop (word_mode, unsigned_shift, > - outof_input, first_shift_count, > + outof_input, first_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > into_temp2 = expand_binop (word_mode, reverse_unsigned_shift, > - into_input, second_shift_count, > + into_input, second_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > > if (into_temp1 != 0 && into_temp2 != 0) > @@ -1550,10 +1556,10 @@ expand_binop (machine_mode mode, optab b > emit_move_insn (into_target, inter); > > outof_temp1 = expand_binop (word_mode, unsigned_shift, > - into_input, first_shift_count, > + into_input, first_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift, > - outof_input, second_shift_count, > + outof_input, second_shift_count_rtx, > NULL_RTX, unsignedp, next_methods); > > if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0) > @@ -2793,25 +2799,29 @@ expand_unop (machine_mode mode, optab un > > if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing) > { > - temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target, > - unsignedp, OPTAB_DIRECT); > + temp = expand_binop (mode, rotl_optab, op0, > + gen_int_shift_amount (mode, 8), > + target, unsignedp, OPTAB_DIRECT); > if (temp) > return temp; > } > > if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing) > { > - temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target, > - unsignedp, OPTAB_DIRECT); > + temp = expand_binop (mode, rotr_optab, op0, > + gen_int_shift_amount (mode, 8), > + target, unsignedp, OPTAB_DIRECT); > if (temp) > return temp; > } > > last = get_last_insn (); > > - temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX, > + temp1 = expand_binop (mode, ashl_optab, op0, > + gen_int_shift_amount (mode, 8), NULL_RTX, > unsignedp, OPTAB_WIDEN); > - temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX, > + temp2 = expand_binop (mode, lshr_optab, op0, > + gen_int_shift_amount (mode, 8), NULL_RTX, > unsignedp, OPTAB_WIDEN); > if (temp1 && temp2) > { > @@ -5369,11 +5379,11 @@ vector_compare_rtx (machine_mode cmp_mod > } > > /* Checks if vec_perm mask SEL is a constant equivalent to a shift of the first > - 
vec_perm operand, assuming the second operand is a constant vector of zeroes. > - Return the shift distance in bits if so, or NULL_RTX if the vec_perm is not a > - shift. */ > + vec_perm operand (which has mode OP0_MODE), assuming the second > + operand is a constant vector of zeroes. Return the shift distance in > + bits if so, or NULL_RTX if the vec_perm is not a shift. */ > static rtx > -shift_amt_for_vec_perm_mask (rtx sel) > +shift_amt_for_vec_perm_mask (machine_mode op0_mode, rtx sel) > { > unsigned int i, first, nelt = GET_MODE_NUNITS (GET_MODE (sel)); > unsigned int bitsize = GET_MODE_UNIT_BITSIZE (GET_MODE (sel)); > @@ -5393,7 +5403,7 @@ shift_amt_for_vec_perm_mask (rtx sel) > return NULL_RTX; > } > > - return GEN_INT (first * bitsize); > + return gen_int_shift_amount (op0_mode, first * bitsize); > } > > /* A subroutine of expand_vec_perm for expanding one vec_perm insn. */ > @@ -5473,7 +5483,7 @@ expand_vec_perm (machine_mode mode, rtx > && (shift_code != CODE_FOR_nothing > || shift_code_qi != CODE_FOR_nothing)) > { > - shift_amt = shift_amt_for_vec_perm_mask (sel); > + shift_amt = shift_amt_for_vec_perm_mask (mode, sel); > if (shift_amt) > { > struct expand_operand ops[3]; > @@ -5563,7 +5573,8 @@ expand_vec_perm (machine_mode mode, rtx > NULL, 0, OPTAB_DIRECT); > else > sel = expand_simple_binop (selmode, ASHIFT, sel, > - GEN_INT (exact_log2 (u)), > + gen_int_shift_amount (selmode, > + exact_log2 (u)), > NULL, 0, OPTAB_DIRECT); > gcc_assert (sel != NULL); >
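The mechanical pattern in every hunk above is the same: a constant shift amount that used to be wrapped in a bare GEN_INT is now built with the new helper, which also receives the mode of the value being shifted. As a rough before/after sketch of a caller (illustrative only, using the gen_int_shift_amount and expand_binop signatures that appear in the patch; "mode", "shift", "op0" and "target" are placeholder names, and this is not compilable outside the GCC tree):

    /* Before: the shift amount is a CONST_INT with no mode information.  */
    rtx amount = GEN_INT (shift);
    rtx res = expand_binop (mode, ashl_optab, op0, amount,
                            target, 1, OPTAB_LIB_WIDEN);

    /* After: the helper derives a shift-amount mode from MODE
       (word_mode in this version of the patch; the revision discussed
       below uses the integer equivalent of the inner mode instead).  */
    rtx amount = gen_int_shift_amount (mode, shift);
    rtx res = expand_binop (mode, ashl_optab, op0, amount,
                            target, 1, OPTAB_LIB_WIDEN);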
Richard Biener <richard.guenther@gmail.com> writes: > On Mon, Nov 20, 2017 at 10:02 PM, Richard Sandiford > <richard.sandiford@linaro.org> wrote: >> Richard Biener <richard.guenther@gmail.com> writes: >>> On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener >>> <richard.guenther@gmail.com> wrote: >>>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>>> <richard.sandiford@linaro.org> wrote: >>>>> This patch adds a stub helper routine to provide the mode >>>>> of a scalar shift amount, given the mode of the values >>>>> being shifted. >>>>> >>>>> One long-standing problem has been to decide what this mode >>>>> should be for arbitrary rtxes (as opposed to those directly >>>>> tied to a target pattern). Is it the mode of the shifted >>>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>>> the corresponding target pattern says? (In which case what >>>>> should the mode be when the target doesn't have a pattern?) >>>>> >>>>> For now the patch picks word_mode, which should be safe on >>>>> all targets but could perhaps become suboptimal if the helper >>>>> routine is used more often than it is in this patch. As it >>>>> stands the patch does not change the generated code. >>>>> >>>>> The patch also adds a helper function that constructs rtxes >>>>> for constant shift amounts, again given the mode of the value >>>>> being shifted. As well as helping with the SVE patches, this >>>>> is one step towards allowing CONST_INTs to have a real mode. >>>> >>>> I think gen_shift_amount_mode is flawed and while encapsulating >>>> constant shift amount RTX generation into a gen_int_shift_amount >>>> looks good to me I'd rather have that ??? in this function (and >>>> I'd use the mode of the RTX shifted, not word_mode...). >> >> OK. I'd gone for word_mode because that's what expand_binop uses >> for CONST_INTs: >> >> op1_mode = (GET_MODE (op1) != VOIDmode >> ? as_a <scalar_int_mode> (GET_MODE (op1)) >> : word_mode); >> >> But using the inner mode should be fine too. The patch below does that. >> >>>> In the end it's up to insn recognizing to convert the op to the >>>> expected mode and for generic RTL it's us that should decide >>>> on the mode -- on GENERIC the shift amount has to be an >>>> integer so why not simply use a mode that is large enough to >>>> make the constant fit? >> >> ...but I can do that instead if you think it's better. >> >>>> Just throwing in some comments here, RTL isn't my primary >>>> expertise. >>> >>> To add a little bit - shift amounts is maybe the only(?) place >>> where a modeless CONST_INT makes sense! So "fixing" >>> that first sounds backwards. >> >> But even here they have a mode conceptually, since out-of-range shift >> amounts are target-defined rather than undefined. E.g. if the target >> interprets the shift amount as unsigned, then for a shift amount >> (const_int -1) it matters whether the mode is QImode (and so we're >> shifting by 255) or HImode (and so we're shifting by 65535. > > I think RTL is well-defined (at least I hope so ...) and machine constraints > need to be modeled explicitely (like embedding an implicit bit_and in > shift patterns). Well, RTL is well-defined in the sense that if you have (ashift X (foo:HI ...)) then the shift amount must be interpreted as HImode rather than some other mode. The problem here is to define a default choice of mode for const_ints, in cases where the shift is being created out of the blue. 
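To put a concrete number on the (const_int -1) example above: the same all-ones constant, read as an unsigned shift amount, means two very different counts depending on the width it is read in, which is exactly the information a mode would pin down. A stand-alone C analogy (ordinary host C, not GCC internals):

    #include <stdint.h>
    #include <stdio.h>

    int main (void)
    {
      /* The same "constant -1", viewed as an unsigned shift amount
         of two different widths.  */
      uint8_t  qi_view = (uint8_t) -1;    /* QImode-sized view: 255 */
      uint16_t hi_view = (uint16_t) -1;   /* HImode-sized view: 65535 */
      printf ("8-bit view:  shift by %u\n", (unsigned) qi_view);
      printf ("16-bit view: shift by %u\n", (unsigned) hi_view);
      return 0;
    }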
Whether the shift amount is effectively signed or unsigned isn't defined by RTL without SHIFT_COUNT_TRUNCATED, since the choice only matters for out-of-range values, and the behaviour for out-of-range RTL shifts is specifically treated as target-defined without SHIFT_COUNT_TRUNCATED. I think the revised patch does implement your suggestion of using the integer equivalent of the inner mode as the default, but we need to decide whether to go with it, go with the original word_mode approach (taken from existing expand_binop code) or something else. Something else could include the widest supported integer mode, so that we never change the value. Thanks, Richard >> OK, so shifts by 65535 make no sense in practice, but *conceptually*... :-) >> >> Jeff Law <law@redhat.com> writes: >>> On 10/26/2017 06:06 AM, Richard Biener wrote: >>>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>>> <richard.sandiford@linaro.org> wrote: >>>>> This patch adds a stub helper routine to provide the mode >>>>> of a scalar shift amount, given the mode of the values >>>>> being shifted. >>>>> >>>>> One long-standing problem has been to decide what this mode >>>>> should be for arbitrary rtxes (as opposed to those directly >>>>> tied to a target pattern). Is it the mode of the shifted >>>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>>> the corresponding target pattern says? (In which case what >>>>> should the mode be when the target doesn't have a pattern?) >>>>> >>>>> For now the patch picks word_mode, which should be safe on >>>>> all targets but could perhaps become suboptimal if the helper >>>>> routine is used more often than it is in this patch. As it >>>>> stands the patch does not change the generated code. >>>>> >>>>> The patch also adds a helper function that constructs rtxes >>>>> for constant shift amounts, again given the mode of the value >>>>> being shifted. As well as helping with the SVE patches, this >>>>> is one step towards allowing CONST_INTs to have a real mode. >>>> >>>> I think gen_shift_amount_mode is flawed and while encapsulating >>>> constant shift amount RTX generation into a gen_int_shift_amount >>>> looks good to me I'd rather have that ??? in this function (and >>>> I'd use the mode of the RTX shifted, not word_mode...). >>>> >>>> In the end it's up to insn recognizing to convert the op to the >>>> expected mode and for generic RTL it's us that should decide >>>> on the mode -- on GENERIC the shift amount has to be an >>>> integer so why not simply use a mode that is large enough to >>>> make the constant fit? >>>> >>>> Just throwing in some comments here, RTL isn't my primary >>>> expertise. >>> I wonder if encapsulation + a target hook to specify the mode would be >>> better? We'd then have to argue over word_mode, vs QImode vs something >>> else for the default, but at least we'd have a way for the target to >>> specify the mode is generally best when working on shift counts. >>> >>> In the end I doubt there's a single definition that is overall better. >>> Largely because I suspect there are times when the narrowest mode is >>> best, or the mode of the operand being shifted. >>> >>> So thoughts on doing the encapsulation with a target hook to specify the >>> desired mode? Does that get us what we need for SVE and does it provide >>> us a path forward on this issue if we were to try to move towards >>> CONST_INTs with modes? 
>> >> I think it'd better to do that only if we have a use case, since >> it's hard to predict what the best way of handling it is until then. >> E.g. I'd still like to hold out the possibility of doing this automatically >> from the .md file instead, if some kind of override ends up being necessary. >> >> Like you say, we have to argue over the default either way, and I think >> that's been the sticking point. >> >> Thanks, >> Richard >> >> >> 2017-11-20 Richard Sandiford <richard.sandiford@linaro.org> >> Alan Hayward <alan.hayward@arm.com> >> David Sherwood <david.sherwood@arm.com> >> >> gcc/ >> * emit-rtl.h (gen_int_shift_amount): Declare. >> * emit-rtl.c (gen_int_shift_amount): New function. >> * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount >> instead of GEN_INT. >> * calls.c (shift_return_value): Likewise. >> * cse.c (fold_rtx): Likewise. >> * dse.c (find_shift_sequence): Likewise. >> * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) >> (expand_shift, expand_smod_pow2): Likewise. >> * lower-subreg.c (shift_cost): Likewise. >> * simplify-rtx.c (simplify_unary_operation_1): Likewise. >> (simplify_binary_operation_1): Likewise. >> * combine.c (try_combine, find_split_point, force_int_to_mode) >> (simplify_shift_const_1, simplify_shift_const): Likewise. >> (change_zero_ext): Likewise. Use simplify_gen_binary. >> * optabs.c (expand_superword_shift, expand_doubleword_mult) >> (expand_unop, expand_binop): Use gen_int_shift_amount instead >> of GEN_INT. >> (shift_amt_for_vec_perm_mask): Add a machine_mode argument. >> Use gen_int_shift_amount instead of GEN_INT. >> (expand_vec_perm): Update caller accordingly. Use >> gen_int_shift_amount instead of GEN_INT. >> >> Index: gcc/emit-rtl.h >> =================================================================== >> --- gcc/emit-rtl.h 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/emit-rtl.h 2017-11-20 20:37:51.661320782 +0000 >> @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, >> extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); >> extern void adjust_reg_mode (rtx, machine_mode); >> extern int mem_expr_equal_p (const_tree, const_tree); >> +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); >> >> extern bool need_atomic_barrier_p (enum memmodel, bool); >> >> Index: gcc/emit-rtl.c >> =================================================================== >> --- gcc/emit-rtl.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/emit-rtl.c 2017-11-20 20:37:51.660320782 +0000 >> @@ -6507,6 +6507,24 @@ need_atomic_barrier_p (enum memmodel mod >> } >> } >> >> +/* Return a constant shift amount for shifting a value of mode MODE >> + by VALUE bits. */ >> + >> +rtx >> +gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value) >> +{ >> + /* ??? Using the inner mode should be wide enough for all useful >> + cases (e.g. QImode usually has 8 shiftable bits, while a QImode >> + shift amount has a range of [-128, 127]). But in principle >> + a target could require target-dependent behaviour for a >> + shift whose shift amount is wider than the shifted value. >> + Perhaps this should be automatically derived from the .md >> + files instead, or perhaps have a target hook. */ >> + scalar_int_mode shift_mode >> + = int_mode_for_mode (GET_MODE_INNER (mode)).require (); >> + return gen_int_mode (value, shift_mode); >> +} >> + >> /* Initialize fields of rtl_data related to stack alignment. 
*/ >> >> void >> Index: gcc/asan.c >> =================================================================== >> --- gcc/asan.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/asan.c 2017-11-20 20:37:51.657320781 +0000 >> @@ -1386,7 +1386,7 @@ asan_emit_stack_protection (rtx base, rt >> TREE_ASM_WRITTEN (id) = 1; >> emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); >> shadow_base = expand_binop (Pmode, lshr_optab, base, >> - GEN_INT (ASAN_SHADOW_SHIFT), >> + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), >> NULL_RTX, 1, OPTAB_DIRECT); >> shadow_base >> = plus_constant (Pmode, shadow_base, >> Index: gcc/calls.c >> =================================================================== >> --- gcc/calls.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/calls.c 2017-11-20 20:37:51.657320781 +0000 >> @@ -2742,15 +2742,17 @@ shift_return_value (machine_mode mode, b >> HOST_WIDE_INT shift; >> >> gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); >> - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); >> + machine_mode value_mode = GET_MODE (value); >> + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); >> if (shift == 0) >> return false; >> >> /* Use ashr rather than lshr for right shifts. This is for the benefit >> of the MIPS port, which requires SImode values to be sign-extended >> when stored in 64-bit registers. */ >> - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : > ashr_optab, >> - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) >> + if (!force_expand_binop (value_mode, left_p ? ashl_optab : ashr_optab, >> + value, gen_int_shift_amount (value_mode, shift), >> + value, 1, OPTAB_WIDEN)) >> gcc_unreachable (); >> return true; >> } >> Index: gcc/cse.c >> =================================================================== >> --- gcc/cse.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/cse.c 2017-11-20 20:37:51.660320782 +0000 >> @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) >> || INTVAL (const_arg1) < 0)) >> { >> if (SHIFT_COUNT_TRUNCATED) >> - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) >> - & (GET_MODE_UNIT_BITSIZE (mode) >> - - 1)); >> + canon_const_arg1 = gen_int_shift_amount >> + (mode, (INTVAL (const_arg1) >> + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); >> else >> break; >> } >> @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) >> || INTVAL (inner_const) < 0)) >> { >> if (SHIFT_COUNT_TRUNCATED) >> - inner_const = GEN_INT (INTVAL (inner_const) >> - & (GET_MODE_UNIT_BITSIZE (mode) >> - - 1)); >> + inner_const = gen_int_shift_amount >> + (mode, (INTVAL (inner_const) >> + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); >> else >> break; >> } >> @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) >> /* As an exception, we can turn an ASHIFTRT of this >> form into a shift of the number of bits - 1. 
*/ >> if (code == ASHIFTRT) >> - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); >> + new_const = gen_int_shift_amount >> + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); >> else if (!side_effects_p (XEXP (y, 0))) >> return CONST0_RTX (mode); >> else >> Index: gcc/dse.c >> =================================================================== >> --- gcc/dse.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/dse.c 2017-11-20 20:37:51.660320782 +0000 >> @@ -1605,8 +1605,9 @@ find_shift_sequence (int access_size, >> store_mode, byte); >> if (ret && CONSTANT_P (ret)) >> { >> + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); >> ret = simplify_const_binary_operation (LSHIFTRT, new_mode, >> - ret, GEN_INT (shift)); >> + ret, shift_rtx); >> if (ret && CONSTANT_P (ret)) >> { >> byte = subreg_lowpart_offset (read_mode, new_mode); >> @@ -1642,7 +1643,8 @@ find_shift_sequence (int access_size, >> of one dsp where the cost of these two was not the same. But >> this really is a rare case anyway. */ >> target = expand_binop (new_mode, lshr_optab, new_reg, >> - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); >> + gen_int_shift_amount (new_mode, shift), >> + new_reg, 1, OPTAB_DIRECT); >> >> shift_seq = get_insns (); >> end_sequence (); >> Index: gcc/expmed.c >> =================================================================== >> --- gcc/expmed.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/expmed.c 2017-11-20 20:37:51.661320782 +0000 >> @@ -222,7 +222,8 @@ init_expmed_one_mode (struct init_expmed >> PUT_MODE (all->zext, wider_mode); >> PUT_MODE (all->wide_mult, wider_mode); >> PUT_MODE (all->wide_lshr, wider_mode); >> - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); >> + XEXP (all->wide_lshr, 1) >> + = gen_int_shift_amount (wider_mode, mode_bitsize); >> >> set_mul_widen_cost (speed, wider_mode, >> set_src_cost (all->wide_mult, wider_mode, speed)); >> @@ -909,12 +910,14 @@ store_bit_field_1 (rtx str_rtx, unsigned >> to make sure that for big-endian machines the higher order >> bits are used. */ >> if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) >> - value_word = simplify_expand_binop (word_mode, lshr_optab, >> - value_word, >> - GEN_INT (BITS_PER_WORD >> - - new_bitsize), >> - NULL_RTX, true, >> - OPTAB_LIB_WIDEN); >> + { >> + int shift = BITS_PER_WORD - new_bitsize; >> + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); >> + value_word = simplify_expand_binop (word_mode, lshr_optab, >> + value_word, shift_rtx, >> + NULL_RTX, true, >> + OPTAB_LIB_WIDEN); >> + } >> >> if (!store_bit_field_1 (op0, new_bitsize, >> bitnum + bit_offset, >> @@ -2365,8 +2368,9 @@ expand_shift_1 (enum tree_code code, mac >> if (CONST_INT_P (op1) >> && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= >> (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) >> - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) >> - % GET_MODE_BITSIZE (scalar_mode)); >> + op1 = gen_int_shift_amount (mode, >> + (unsigned HOST_WIDE_INT) INTVAL (op1) >> + % GET_MODE_BITSIZE (scalar_mode)); >> else if (GET_CODE (op1) == SUBREG >> && subreg_lowpart_p (op1) >> && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) >> @@ -2383,7 +2387,8 @@ expand_shift_1 (enum tree_code code, mac >> && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, >> GET_MODE_BITSIZE (scalar_mode) - 1)) >> { >> - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); >> + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) >> + - INTVAL (op1))); >> left = !left; >> code = left ? 
LROTATE_EXPR : RROTATE_EXPR; >> } >> @@ -2463,8 +2468,8 @@ expand_shift_1 (enum tree_code code, mac >> if (op1 == const0_rtx) >> return shifted; >> else if (CONST_INT_P (op1)) >> - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) >> - - INTVAL (op1)); >> + other_amount = gen_int_shift_amount >> + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); >> else >> { >> other_amount >> @@ -2537,8 +2542,9 @@ expand_shift_1 (enum tree_code code, mac >> expand_shift (enum tree_code code, machine_mode mode, rtx shifted, >> int amount, rtx target, int unsignedp) >> { >> - return expand_shift_1 (code, mode, >> - shifted, GEN_INT (amount), target, unsignedp); >> + return expand_shift_1 (code, mode, shifted, >> + gen_int_shift_amount (mode, amount), >> + target, unsignedp); >> } >> >> /* Likewise, but return 0 if that cannot be done. */ >> @@ -3856,7 +3862,7 @@ expand_smod_pow2 (scalar_int_mode mode, >> { >> HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; >> signmask = force_reg (mode, signmask); >> - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); >> + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); >> >> /* Use the rtx_cost of a LSHIFTRT instruction to determine >> which instruction sequence to use. If logical right shifts >> Index: gcc/lower-subreg.c >> =================================================================== >> --- gcc/lower-subreg.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/lower-subreg.c 2017-11-20 20:37:51.661320782 +0000 >> @@ -141,7 +141,7 @@ shift_cost (bool speed_p, struct cost_rt >> PUT_CODE (rtxes->shift, code); >> PUT_MODE (rtxes->shift, mode); >> PUT_MODE (rtxes->source, mode); >> - XEXP (rtxes->shift, 1) = GEN_INT (op1); >> + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); >> return set_src_cost (rtxes->shift, mode, speed_p); >> } >> >> Index: gcc/simplify-rtx.c >> =================================================================== >> --- gcc/simplify-rtx.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/simplify-rtx.c 2017-11-20 20:37:51.663320783 +0000 >> @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod >> if (STORE_FLAG_VALUE == 1) >> { >> temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), >> - GEN_INT (isize - 1)); >> + gen_int_shift_amount (inner, >> + isize - 1)); >> if (int_mode == inner) >> return temp; >> if (GET_MODE_PRECISION (int_mode) > isize) >> @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod >> else if (STORE_FLAG_VALUE == -1) >> { >> temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), >> - GEN_INT (isize - 1)); >> + gen_int_shift_amount (inner, >> + isize - 1)); >> if (int_mode == inner) >> return temp; >> if (GET_MODE_PRECISION (int_mode) > isize) >> @@ -2672,7 +2674,8 @@ simplify_binary_operation_1 (enum rtx_co >> { >> val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); >> if (val >= 0) >> - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); >> + return simplify_gen_binary (ASHIFT, mode, op0, >> + gen_int_shift_amount (mode, val)); >> } >> >> /* x*2 is x+x and x*(-1) is -x */ >> @@ -3296,7 +3299,8 @@ simplify_binary_operation_1 (enum rtx_co >> /* Convert divide by power of two into shift. 
*/ >> if (CONST_INT_P (trueop1) >> && (val = exact_log2 (UINTVAL (trueop1))) > 0) >> - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); >> + return simplify_gen_binary (LSHIFTRT, mode, op0, >> + gen_int_shift_amount (mode, val)); >> break; >> >> case DIV: >> @@ -3416,10 +3420,12 @@ simplify_binary_operation_1 (enum rtx_co >> && IN_RANGE (INTVAL (trueop1), >> GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), >> GET_MODE_UNIT_PRECISION (mode) - 1)) >> - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, >> - mode, op0, >> - GEN_INT (GET_MODE_UNIT_PRECISION (mode) >> - - INTVAL (trueop1))); >> + { >> + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); >> + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); >> + return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, >> + mode, op0, new_amount_rtx); >> + } >> #endif >> /* FALLTHRU */ >> case ASHIFTRT: >> @@ -3460,8 +3466,8 @@ simplify_binary_operation_1 (enum rtx_co >> == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode)) >> && subreg_lowpart_p (op0)) >> { >> - rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1)) >> - + INTVAL (op1)); >> + rtx tmp = gen_int_shift_amount >> + (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1)); >> tmp = simplify_gen_binary (code, inner_mode, >> XEXP (SUBREG_REG (op0), 0), >> tmp); >> @@ -3472,7 +3478,8 @@ simplify_binary_operation_1 (enum rtx_co >> { >> val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1); >> if (val != INTVAL (op1)) >> - return simplify_gen_binary (code, mode, op0, GEN_INT (val)); >> + return simplify_gen_binary (code, mode, op0, >> + gen_int_shift_amount (mode, val)); >> } >> break; >> >> Index: gcc/combine.c >> =================================================================== >> --- gcc/combine.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/combine.c 2017-11-20 20:37:51.659320782 +0000 >> @@ -3792,8 +3792,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, >> && INTVAL (XEXP (*split, 1)) > 0 >> && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0) >> { >> + rtx i_rtx = gen_int_shift_amount (split_mode, i); >> SUBST (*split, gen_rtx_ASHIFT (split_mode, >> - XEXP (*split, 0), GEN_INT (i))); >> + XEXP (*split, 0), i_rtx)); >> /* Update split_code because we may not have a multiply >> anymore. */ >> split_code = GET_CODE (*split); >> @@ -3807,8 +3808,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, >> && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0) >> { >> rtx nsplit = XEXP (*split, 0); >> + rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i); >> SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit), >> - XEXP (nsplit, 0), GEN_INT (i))); >> + XEXP (nsplit, 0), >> + i_rtx)); >> /* Update split_code because we may not have a multiply >> anymore. 
*/ >> split_code = GET_CODE (*split); >> @@ -5077,12 +5080,12 @@ find_split_point (rtx *loc, rtx_insn *in >> GET_MODE (XEXP (SET_SRC (x), 0)))))) >> { >> machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0)); >> - >> + rtx pos_rtx = gen_int_shift_amount (mode, pos); >> SUBST (SET_SRC (x), >> gen_rtx_NEG (mode, >> gen_rtx_LSHIFTRT (mode, >> XEXP (SET_SRC (x), 0), >> - GEN_INT (pos)))); >> + pos_rtx))); >> >> split = find_split_point (&SET_SRC (x), insn, true); >> if (split && split != &SET_SRC (x)) >> @@ -5140,11 +5143,11 @@ find_split_point (rtx *loc, rtx_insn *in >> { >> unsigned HOST_WIDE_INT mask >> = (HOST_WIDE_INT_1U << len) - 1; >> + rtx pos_rtx = gen_int_shift_amount (mode, pos); >> SUBST (SET_SRC (x), >> gen_rtx_AND (mode, >> gen_rtx_LSHIFTRT >> - (mode, gen_lowpart (mode, inner), >> - GEN_INT (pos)), >> + (mode, gen_lowpart (mode, inner), pos_rtx), >> gen_int_mode (mask, mode))); >> >> split = find_split_point (&SET_SRC (x), insn, true); >> @@ -5153,14 +5156,15 @@ find_split_point (rtx *loc, rtx_insn *in >> } >> else >> { >> + int left_bits = GET_MODE_PRECISION (mode) - len - pos; >> + int right_bits = GET_MODE_PRECISION (mode) - len; >> SUBST (SET_SRC (x), >> gen_rtx_fmt_ee >> (unsignedp ? LSHIFTRT : ASHIFTRT, mode, >> gen_rtx_ASHIFT (mode, >> gen_lowpart (mode, inner), >> - GEN_INT (GET_MODE_PRECISION (mode) >> - - len - pos)), >> - GEN_INT (GET_MODE_PRECISION (mode) - len))); >> + gen_int_shift_amount (mode, left_bits)), >> + gen_int_shift_amount (mode, right_bits))); >> >> split = find_split_point (&SET_SRC (x), insn, true); >> if (split && split != &SET_SRC (x)) >> @@ -8935,10 +8939,11 @@ force_int_to_mode (rtx x, scalar_int_mod >> /* Must be more sign bit copies than the mask needs. */ >> && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))) >> >= exact_log2 (mask + 1))) >> - x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), >> - GEN_INT (GET_MODE_PRECISION (xmode) >> - - exact_log2 (mask + 1))); >> - >> + { >> + int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1); >> + x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), >> + gen_int_shift_amount (xmode, nbits)); >> + } >> goto shiftrt; >> >> case ASHIFTRT: >> @@ -10431,7 +10436,7 @@ simplify_shift_const_1 (enum rtx_code co >> { >> enum rtx_code orig_code = code; >> rtx orig_varop = varop; >> - int count; >> + int count, log2; >> machine_mode mode = result_mode; >> machine_mode shift_mode; >> scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, > int_result_mode; >> @@ -10634,13 +10639,11 @@ simplify_shift_const_1 (enum rtx_code co >> is cheaper. But it is still better on those machines to >> merge two shifts into one. */ >> if (CONST_INT_P (XEXP (varop, 1)) >> - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) >> + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) >> { >> - varop >> - = simplify_gen_binary (ASHIFT, GET_MODE (varop), >> - XEXP (varop, 0), >> - GEN_INT (exact_log2 ( >> - UINTVAL (XEXP (varop, 1))))); >> + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); >> + varop = simplify_gen_binary (ASHIFT, GET_MODE (varop), >> + XEXP (varop, 0), log2_rtx); >> continue; >> } >> break; >> @@ -10648,13 +10651,11 @@ simplify_shift_const_1 (enum rtx_code co >> case UDIV: >> /* Similar, for when divides are cheaper. 
*/ >> if (CONST_INT_P (XEXP (varop, 1)) >> - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) >> + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) >> { >> - varop >> - = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), >> - XEXP (varop, 0), >> - GEN_INT (exact_log2 ( >> - UINTVAL (XEXP (varop, 1))))); >> + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); >> + varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), >> + XEXP (varop, 0), log2_rtx); >> continue; >> } >> break; >> @@ -10789,10 +10790,10 @@ simplify_shift_const_1 (enum rtx_code co >> >> mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode), >> int_result_mode); >> - >> + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); >> mask_rtx >> = simplify_const_binary_operation (code, int_result_mode, >> - mask_rtx, GEN_INT (count)); >> + mask_rtx, count_rtx); >> >> /* Give up if we can't compute an outer operation to use. */ >> if (mask_rtx == 0 >> @@ -10848,9 +10849,10 @@ simplify_shift_const_1 (enum rtx_code co >> if (code == ASHIFTRT && int_mode != int_result_mode) >> break; >> >> + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); >> rtx new_rtx = simplify_const_binary_operation (code, int_mode, >> XEXP (varop, 0), >> - GEN_INT (count)); >> + count_rtx); >> varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1)); >> count = 0; >> continue; >> @@ -10916,7 +10918,7 @@ simplify_shift_const_1 (enum rtx_code co >> && (new_rtx = simplify_const_binary_operation >> (code, int_result_mode, >> gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), >> - GEN_INT (count))) != 0 >> + gen_int_shift_amount (int_result_mode, count))) != 0 >> && CONST_INT_P (new_rtx) >> && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop), >> INTVAL (new_rtx), int_result_mode, >> @@ -11059,7 +11061,7 @@ simplify_shift_const_1 (enum rtx_code co >> && (new_rtx = simplify_const_binary_operation >> (ASHIFT, int_result_mode, >> gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), >> - GEN_INT (count))) != 0 >> + gen_int_shift_amount (int_result_mode, count))) != 0 >> && CONST_INT_P (new_rtx) >> && merge_outer_ops (&outer_op, &outer_const, PLUS, >> INTVAL (new_rtx), int_result_mode, >> @@ -11080,7 +11082,7 @@ simplify_shift_const_1 (enum rtx_code co >> && (new_rtx = simplify_const_binary_operation >> (code, int_result_mode, >> gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), >> - GEN_INT (count))) != 0 >> + gen_int_shift_amount (int_result_mode, count))) != 0 >> && CONST_INT_P (new_rtx) >> && merge_outer_ops (&outer_op, &outer_const, XOR, >> INTVAL (new_rtx), int_result_mode, >> @@ -11135,12 +11137,12 @@ simplify_shift_const_1 (enum rtx_code co >> - GET_MODE_UNIT_PRECISION (GET_MODE (varop))))) >> { >> rtx varop_inner = XEXP (varop, 0); >> - >> - varop_inner >> - = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), >> - XEXP (varop_inner, 0), >> - GEN_INT >> - (count + INTVAL (XEXP (varop_inner, 1)))); >> + int new_count = count + INTVAL (XEXP (varop_inner, 1)); >> + rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner), >> + new_count); >> + varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), >> + XEXP (varop_inner, 0), >> + new_count_rtx); >> varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner); >> count = 0; >> continue; >> @@ -11192,7 +11194,8 @@ simplify_shift_const_1 (enum rtx_code co >> x = NULL_RTX; >> >> if (x == NULL_RTX) >> - x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count)); >> + x = simplify_gen_binary (code, shift_mode, varop, >> + 
gen_int_shift_amount (shift_mode, count)); >> >> /* If we were doing an LSHIFTRT in a wider mode than it was originally, >> turn off all the bits that the shift would have turned off. */ >> @@ -11254,7 +11257,8 @@ simplify_shift_const (rtx x, enum rtx_co >> return tem; >> >> if (!x) >> - x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count)); >> + x = simplify_gen_binary (code, GET_MODE (varop), varop, >> + gen_int_shift_amount (GET_MODE (varop), count)); >> if (GET_MODE (x) != result_mode) >> x = gen_lowpart (result_mode, x); >> return x; >> @@ -11445,8 +11449,9 @@ change_zero_ext (rtx pat) >> if (BITS_BIG_ENDIAN) >> start = GET_MODE_PRECISION (inner_mode) - size - start; >> >> - if (start) >> - x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start)); >> + if (start != 0) >> + x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), >> + gen_int_shift_amount (inner_mode, start)); >> else >> x = XEXP (x, 0); >> if (mode != inner_mode) >> Index: gcc/optabs.c >> =================================================================== >> --- gcc/optabs.c 2017-11-20 20:37:41.918226976 +0000 >> +++ gcc/optabs.c 2017-11-20 20:37:51.662320782 +0000 >> @@ -431,8 +431,9 @@ expand_superword_shift (optab binoptab, >> if (binoptab != ashr_optab) >> emit_move_insn (outof_target, CONST0_RTX (word_mode)); >> else >> - if (!force_expand_binop (word_mode, binoptab, >> - outof_input, GEN_INT (BITS_PER_WORD - 1), >> + if (!force_expand_binop (word_mode, binoptab, outof_input, >> + gen_int_shift_amount (word_mode, >> + BITS_PER_WORD - 1), >> outof_target, unsignedp, methods)) >> return false; >> } >> @@ -789,7 +790,8 @@ expand_doubleword_mult (machine_mode mod >> { >> int low = (WORDS_BIG_ENDIAN ? 1 : 0); >> int high = (WORDS_BIG_ENDIAN ? 0 : 1); >> - rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1); >> + rtx wordm1 = (umulp ? NULL_RTX >> + : gen_int_shift_amount (word_mode, BITS_PER_WORD - 1)); >> rtx product, adjust, product_high, temp; >> >> rtx op0_high = operand_subword_force (op0, high, mode); >> @@ -1185,7 +1187,7 @@ expand_binop (machine_mode mode, optab b >> unsigned int bits = GET_MODE_PRECISION (int_mode); >> >> if (CONST_INT_P (op1)) >> - newop1 = GEN_INT (bits - INTVAL (op1)); >> + newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1)); >> else if (targetm.shift_truncation_mask (int_mode) == bits - 1) >> newop1 = negate_rtx (GET_MODE (op1), op1); >> else >> @@ -1403,7 +1405,7 @@ expand_binop (machine_mode mode, optab b >> >> /* Apply the truncation to constant shifts. 
*/ >> if (double_shift_mask > 0 && CONST_INT_P (op1)) >> - op1 = GEN_INT (INTVAL (op1) & double_shift_mask); >> + op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode); >> >> if (op1 == CONST0_RTX (op1_mode)) >> return op0; >> @@ -1513,7 +1515,7 @@ expand_binop (machine_mode mode, optab b >> else >> { >> rtx into_temp1, into_temp2, outof_temp1, outof_temp2; >> - rtx first_shift_count, second_shift_count; >> + HOST_WIDE_INT first_shift_count, second_shift_count; >> optab reverse_unsigned_shift, unsigned_shift; >> >> reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD) >> @@ -1524,20 +1526,24 @@ expand_binop (machine_mode mode, optab b >> >> if (shift_count > BITS_PER_WORD) >> { >> - first_shift_count = GEN_INT (shift_count - BITS_PER_WORD); >> - second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count); >> + first_shift_count = shift_count - BITS_PER_WORD; >> + second_shift_count = 2 * BITS_PER_WORD - shift_count; >> } >> else >> { >> - first_shift_count = GEN_INT (BITS_PER_WORD - shift_count); >> - second_shift_count = GEN_INT (shift_count); >> + first_shift_count = BITS_PER_WORD - shift_count; >> + second_shift_count = shift_count; >> } >> + rtx first_shift_count_rtx >> + = gen_int_shift_amount (word_mode, first_shift_count); >> + rtx second_shift_count_rtx >> + = gen_int_shift_amount (word_mode, second_shift_count); >> >> into_temp1 = expand_binop (word_mode, unsigned_shift, >> - outof_input, first_shift_count, >> + outof_input, first_shift_count_rtx, >> NULL_RTX, unsignedp, next_methods); >> into_temp2 = expand_binop (word_mode, reverse_unsigned_shift, >> - into_input, second_shift_count, >> + into_input, second_shift_count_rtx, >> NULL_RTX, unsignedp, next_methods); >> >> if (into_temp1 != 0 && into_temp2 != 0) >> @@ -1550,10 +1556,10 @@ expand_binop (machine_mode mode, optab b >> emit_move_insn (into_target, inter); >> >> outof_temp1 = expand_binop (word_mode, unsigned_shift, >> - into_input, first_shift_count, >> + into_input, first_shift_count_rtx, >> NULL_RTX, unsignedp, next_methods); >> outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift, >> - outof_input, second_shift_count, >> + outof_input, second_shift_count_rtx, >> NULL_RTX, unsignedp, next_methods); >> >> if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0) >> @@ -2793,25 +2799,29 @@ expand_unop (machine_mode mode, optab un >> >> if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing) >> { >> - temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target, >> - unsignedp, OPTAB_DIRECT); >> + temp = expand_binop (mode, rotl_optab, op0, >> + gen_int_shift_amount (mode, 8), >> + target, unsignedp, OPTAB_DIRECT); >> if (temp) >> return temp; >> } >> >> if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing) >> { >> - temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target, >> - unsignedp, OPTAB_DIRECT); >> + temp = expand_binop (mode, rotr_optab, op0, >> + gen_int_shift_amount (mode, 8), >> + target, unsignedp, OPTAB_DIRECT); >> if (temp) >> return temp; >> } >> >> last = get_last_insn (); >> >> - temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX, >> + temp1 = expand_binop (mode, ashl_optab, op0, >> + gen_int_shift_amount (mode, 8), NULL_RTX, >> unsignedp, OPTAB_WIDEN); >> - temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX, >> + temp2 = expand_binop (mode, lshr_optab, op0, >> + gen_int_shift_amount (mode, 8), NULL_RTX, >> unsignedp, OPTAB_WIDEN); >> if (temp1 && temp2) >> { >> @@ -5369,11 +5379,11 @@ vector_compare_rtx (machine_mode 
cmp_mod >> } >> >> /* Checks if vec_perm mask SEL is a constant equivalent to a shift of > the first >> - vec_perm operand, assuming the second operand is a constant vector > of zeroes. >> - Return the shift distance in bits if so, or NULL_RTX if the vec_perm > is not a >> - shift. */ >> + vec_perm operand (which has mode OP0_MODE), assuming the second >> + operand is a constant vector of zeroes. Return the shift distance in >> + bits if so, or NULL_RTX if the vec_perm is not a shift. */ >> static rtx >> -shift_amt_for_vec_perm_mask (rtx sel) >> +shift_amt_for_vec_perm_mask (machine_mode op0_mode, rtx sel) >> { >> unsigned int i, first, nelt = GET_MODE_NUNITS (GET_MODE (sel)); >> unsigned int bitsize = GET_MODE_UNIT_BITSIZE (GET_MODE (sel)); >> @@ -5393,7 +5403,7 @@ shift_amt_for_vec_perm_mask (rtx sel) >> return NULL_RTX; >> } >> >> - return GEN_INT (first * bitsize); >> + return gen_int_shift_amount (op0_mode, first * bitsize); >> } >> >> /* A subroutine of expand_vec_perm for expanding one vec_perm insn. */ >> @@ -5473,7 +5483,7 @@ expand_vec_perm (machine_mode mode, rtx >> && (shift_code != CODE_FOR_nothing >> || shift_code_qi != CODE_FOR_nothing)) >> { >> - shift_amt = shift_amt_for_vec_perm_mask (sel); >> + shift_amt = shift_amt_for_vec_perm_mask (mode, sel); >> if (shift_amt) >> { >> struct expand_operand ops[3]; >> @@ -5563,7 +5573,8 @@ expand_vec_perm (machine_mode mode, rtx >> NULL, 0, OPTAB_DIRECT); >> else >> sel = expand_simple_binop (selmode, ASHIFT, sel, >> - GEN_INT (exact_log2 (u)), >> + gen_int_shift_amount (selmode, >> + exact_log2 (u)), >> NULL, 0, OPTAB_DIRECT); >> gcc_assert (sel != NULL); >>
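For what it's worth, the inner-mode rule in the revised gen_int_shift_amount quoted above can change the canonical form of the constant when the amount does not fit the chosen mode, which is the trade-off behind the "widest supported integer mode, so that we never change the value" alternative. A few hypothetical calls, assuming gen_int_mode's usual behaviour of canonicalising the value to the precision of the given mode:

    gen_int_shift_amount (SImode, 31);     /* SImode amount: (const_int 31)  */
    gen_int_shift_amount (V4SImode, 64);   /* element mode SImode: (const_int 64)  */
    gen_int_shift_amount (QImode, 255);    /* QImode amount: canonicalised to
                                              (const_int -1), so the value 255
                                              is not kept as-is  */

Under the original word_mode choice, or a widest-mode choice, the last call would instead keep 255 unchanged.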
On Fri, Dec 15, 2017 at 1:48 AM, Richard Sandiford <richard.sandiford@linaro.org> wrote: > Richard Biener <richard.guenther@gmail.com> writes: >> On Mon, Nov 20, 2017 at 10:02 PM, Richard Sandiford >> <richard.sandiford@linaro.org> wrote: >>> Richard Biener <richard.guenther@gmail.com> writes: >>>> On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener >>>> <richard.guenther@gmail.com> wrote: >>>>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>>>> <richard.sandiford@linaro.org> wrote: >>>>>> This patch adds a stub helper routine to provide the mode >>>>>> of a scalar shift amount, given the mode of the values >>>>>> being shifted. >>>>>> >>>>>> One long-standing problem has been to decide what this mode >>>>>> should be for arbitrary rtxes (as opposed to those directly >>>>>> tied to a target pattern). Is it the mode of the shifted >>>>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>>>> the corresponding target pattern says? (In which case what >>>>>> should the mode be when the target doesn't have a pattern?) >>>>>> >>>>>> For now the patch picks word_mode, which should be safe on >>>>>> all targets but could perhaps become suboptimal if the helper >>>>>> routine is used more often than it is in this patch. As it >>>>>> stands the patch does not change the generated code. >>>>>> >>>>>> The patch also adds a helper function that constructs rtxes >>>>>> for constant shift amounts, again given the mode of the value >>>>>> being shifted. As well as helping with the SVE patches, this >>>>>> is one step towards allowing CONST_INTs to have a real mode. >>>>> >>>>> I think gen_shift_amount_mode is flawed and while encapsulating >>>>> constant shift amount RTX generation into a gen_int_shift_amount >>>>> looks good to me I'd rather have that ??? in this function (and >>>>> I'd use the mode of the RTX shifted, not word_mode...). >>> >>> OK. I'd gone for word_mode because that's what expand_binop uses >>> for CONST_INTs: >>> >>> op1_mode = (GET_MODE (op1) != VOIDmode >>> ? as_a <scalar_int_mode> (GET_MODE (op1)) >>> : word_mode); >>> >>> But using the inner mode should be fine too. The patch below does that. >>> >>>>> In the end it's up to insn recognizing to convert the op to the >>>>> expected mode and for generic RTL it's us that should decide >>>>> on the mode -- on GENERIC the shift amount has to be an >>>>> integer so why not simply use a mode that is large enough to >>>>> make the constant fit? >>> >>> ...but I can do that instead if you think it's better. >>> >>>>> Just throwing in some comments here, RTL isn't my primary >>>>> expertise. >>>> >>>> To add a little bit - shift amounts is maybe the only(?) place >>>> where a modeless CONST_INT makes sense! So "fixing" >>>> that first sounds backwards. >>> >>> But even here they have a mode conceptually, since out-of-range shift >>> amounts are target-defined rather than undefined. E.g. if the target >>> interprets the shift amount as unsigned, then for a shift amount >>> (const_int -1) it matters whether the mode is QImode (and so we're >>> shifting by 255) or HImode (and so we're shifting by 65535. >> >> I think RTL is well-defined (at least I hope so ...) and machine constraints >> need to be modeled explicitely (like embedding an implicit bit_and in >> shift patterns). > > Well, RTL is well-defined in the sense that if you have > > (ashift X (foo:HI ...)) > > then the shift amount must be interpreted as HImode rather than some > other mode. 
The problem here is to define a default choice of mode for > const_ints, in cases where the shift is being created out of the blue. > > Whether the shift amount is effectively signed or unsigned isn't defined > by RTL without SHIFT_COUNT_TRUNCATED, since the choice only matters for > out-of-range values, and the behaviour for out-of-range RTL shifts is > specifically treated as target-defined without SHIFT_COUNT_TRUNCATED. > > I think the revised patch does implement your suggestion of using the > integer equivalent of the inner mode as the default, but we need to > decide whether to go with it, go with the original word_mode approach > (taken from existing expand_binop code) or something else. Something > else could include the widest supported integer mode, so that we never > change the value. I guess it's pretty arbitrary what we choose (but we might need to adjust targets?). For something like this an appealing choice would be sth that is host and target idependent, like [u]int32_t or given CONST_INT is always 64bits now and signed int64_t aka HOST_WIDE_INT (bad name now). That means it's the "infinite precision" thing that fits into CONST_INT ;) Richard. > Thanks, > Richard > >>> OK, so shifts by 65535 make no sense in practice, but *conceptually*... :-) >>> >>> Jeff Law <law@redhat.com> writes: >>>> On 10/26/2017 06:06 AM, Richard Biener wrote: >>>>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>>>> <richard.sandiford@linaro.org> wrote: >>>>>> This patch adds a stub helper routine to provide the mode >>>>>> of a scalar shift amount, given the mode of the values >>>>>> being shifted. >>>>>> >>>>>> One long-standing problem has been to decide what this mode >>>>>> should be for arbitrary rtxes (as opposed to those directly >>>>>> tied to a target pattern). Is it the mode of the shifted >>>>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>>>> the corresponding target pattern says? (In which case what >>>>>> should the mode be when the target doesn't have a pattern?) >>>>>> >>>>>> For now the patch picks word_mode, which should be safe on >>>>>> all targets but could perhaps become suboptimal if the helper >>>>>> routine is used more often than it is in this patch. As it >>>>>> stands the patch does not change the generated code. >>>>>> >>>>>> The patch also adds a helper function that constructs rtxes >>>>>> for constant shift amounts, again given the mode of the value >>>>>> being shifted. As well as helping with the SVE patches, this >>>>>> is one step towards allowing CONST_INTs to have a real mode. >>>>> >>>>> I think gen_shift_amount_mode is flawed and while encapsulating >>>>> constant shift amount RTX generation into a gen_int_shift_amount >>>>> looks good to me I'd rather have that ??? in this function (and >>>>> I'd use the mode of the RTX shifted, not word_mode...). >>>>> >>>>> In the end it's up to insn recognizing to convert the op to the >>>>> expected mode and for generic RTL it's us that should decide >>>>> on the mode -- on GENERIC the shift amount has to be an >>>>> integer so why not simply use a mode that is large enough to >>>>> make the constant fit? >>>>> >>>>> Just throwing in some comments here, RTL isn't my primary >>>>> expertise. >>>> I wonder if encapsulation + a target hook to specify the mode would be >>>> better? We'd then have to argue over word_mode, vs QImode vs something >>>> else for the default, but at least we'd have a way for the target to >>>> specify the mode is generally best when working on shift counts. 
>>>> >>>> In the end I doubt there's a single definition that is overall better. >>>> Largely because I suspect there are times when the narrowest mode is >>>> best, or the mode of the operand being shifted. >>>> >>>> So thoughts on doing the encapsulation with a target hook to specify the >>>> desired mode? Does that get us what we need for SVE and does it provide >>>> us a path forward on this issue if we were to try to move towards >>>> CONST_INTs with modes? >>> >>> I think it'd better to do that only if we have a use case, since >>> it's hard to predict what the best way of handling it is until then. >>> E.g. I'd still like to hold out the possibility of doing this automatically >>> from the .md file instead, if some kind of override ends up being necessary. >>> >>> Like you say, we have to argue over the default either way, and I think >>> that's been the sticking point. >>> >>> Thanks, >>> Richard >>> >>> >>> 2017-11-20 Richard Sandiford <richard.sandiford@linaro.org> >>> Alan Hayward <alan.hayward@arm.com> >>> David Sherwood <david.sherwood@arm.com> >>> >>> gcc/ >>> * emit-rtl.h (gen_int_shift_amount): Declare. >>> * emit-rtl.c (gen_int_shift_amount): New function. >>> * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount >>> instead of GEN_INT. >>> * calls.c (shift_return_value): Likewise. >>> * cse.c (fold_rtx): Likewise. >>> * dse.c (find_shift_sequence): Likewise. >>> * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) >>> (expand_shift, expand_smod_pow2): Likewise. >>> * lower-subreg.c (shift_cost): Likewise. >>> * simplify-rtx.c (simplify_unary_operation_1): Likewise. >>> (simplify_binary_operation_1): Likewise. >>> * combine.c (try_combine, find_split_point, force_int_to_mode) >>> (simplify_shift_const_1, simplify_shift_const): Likewise. >>> (change_zero_ext): Likewise. Use simplify_gen_binary. >>> * optabs.c (expand_superword_shift, expand_doubleword_mult) >>> (expand_unop, expand_binop): Use gen_int_shift_amount instead >>> of GEN_INT. >>> (shift_amt_for_vec_perm_mask): Add a machine_mode argument. >>> Use gen_int_shift_amount instead of GEN_INT. >>> (expand_vec_perm): Update caller accordingly. Use >>> gen_int_shift_amount instead of GEN_INT. >>> >>> Index: gcc/emit-rtl.h >>> =================================================================== >>> --- gcc/emit-rtl.h 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/emit-rtl.h 2017-11-20 20:37:51.661320782 +0000 >>> @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, >>> extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); >>> extern void adjust_reg_mode (rtx, machine_mode); >>> extern int mem_expr_equal_p (const_tree, const_tree); >>> +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); >>> >>> extern bool need_atomic_barrier_p (enum memmodel, bool); >>> >>> Index: gcc/emit-rtl.c >>> =================================================================== >>> --- gcc/emit-rtl.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/emit-rtl.c 2017-11-20 20:37:51.660320782 +0000 >>> @@ -6507,6 +6507,24 @@ need_atomic_barrier_p (enum memmodel mod >>> } >>> } >>> >>> +/* Return a constant shift amount for shifting a value of mode MODE >>> + by VALUE bits. */ >>> + >>> +rtx >>> +gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value) >>> +{ >>> + /* ??? Using the inner mode should be wide enough for all useful >>> + cases (e.g. QImode usually has 8 shiftable bits, while a QImode >>> + shift amount has a range of [-128, 127]). 
But in principle >>> + a target could require target-dependent behaviour for a >>> + shift whose shift amount is wider than the shifted value. >>> + Perhaps this should be automatically derived from the .md >>> + files instead, or perhaps have a target hook. */ >>> + scalar_int_mode shift_mode >>> + = int_mode_for_mode (GET_MODE_INNER (mode)).require (); >>> + return gen_int_mode (value, shift_mode); >>> +} >>> + >>> /* Initialize fields of rtl_data related to stack alignment. */ >>> >>> void >>> Index: gcc/asan.c >>> =================================================================== >>> --- gcc/asan.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/asan.c 2017-11-20 20:37:51.657320781 +0000 >>> @@ -1386,7 +1386,7 @@ asan_emit_stack_protection (rtx base, rt >>> TREE_ASM_WRITTEN (id) = 1; >>> emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); >>> shadow_base = expand_binop (Pmode, lshr_optab, base, >>> - GEN_INT (ASAN_SHADOW_SHIFT), >>> + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), >>> NULL_RTX, 1, OPTAB_DIRECT); >>> shadow_base >>> = plus_constant (Pmode, shadow_base, >>> Index: gcc/calls.c >>> =================================================================== >>> --- gcc/calls.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/calls.c 2017-11-20 20:37:51.657320781 +0000 >>> @@ -2742,15 +2742,17 @@ shift_return_value (machine_mode mode, b >>> HOST_WIDE_INT shift; >>> >>> gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); >>> - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); >>> + machine_mode value_mode = GET_MODE (value); >>> + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); >>> if (shift == 0) >>> return false; >>> >>> /* Use ashr rather than lshr for right shifts. This is for the benefit >>> of the MIPS port, which requires SImode values to be sign-extended >>> when stored in 64-bit registers. */ >>> - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : >> ashr_optab, >>> - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) >>> + if (!force_expand_binop (value_mode, left_p ? ashl_optab : ashr_optab, >>> + value, gen_int_shift_amount (value_mode, shift), >>> + value, 1, OPTAB_WIDEN)) >>> gcc_unreachable (); >>> return true; >>> } >>> Index: gcc/cse.c >>> =================================================================== >>> --- gcc/cse.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/cse.c 2017-11-20 20:37:51.660320782 +0000 >>> @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) >>> || INTVAL (const_arg1) < 0)) >>> { >>> if (SHIFT_COUNT_TRUNCATED) >>> - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) >>> - & (GET_MODE_UNIT_BITSIZE (mode) >>> - - 1)); >>> + canon_const_arg1 = gen_int_shift_amount >>> + (mode, (INTVAL (const_arg1) >>> + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); >>> else >>> break; >>> } >>> @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) >>> || INTVAL (inner_const) < 0)) >>> { >>> if (SHIFT_COUNT_TRUNCATED) >>> - inner_const = GEN_INT (INTVAL (inner_const) >>> - & (GET_MODE_UNIT_BITSIZE (mode) >>> - - 1)); >>> + inner_const = gen_int_shift_amount >>> + (mode, (INTVAL (inner_const) >>> + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); >>> else >>> break; >>> } >>> @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) >>> /* As an exception, we can turn an ASHIFTRT of this >>> form into a shift of the number of bits - 1. 
*/ >>> if (code == ASHIFTRT) >>> - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); >>> + new_const = gen_int_shift_amount >>> + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); >>> else if (!side_effects_p (XEXP (y, 0))) >>> return CONST0_RTX (mode); >>> else >>> Index: gcc/dse.c >>> =================================================================== >>> --- gcc/dse.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/dse.c 2017-11-20 20:37:51.660320782 +0000 >>> @@ -1605,8 +1605,9 @@ find_shift_sequence (int access_size, >>> store_mode, byte); >>> if (ret && CONSTANT_P (ret)) >>> { >>> + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); >>> ret = simplify_const_binary_operation (LSHIFTRT, new_mode, >>> - ret, GEN_INT (shift)); >>> + ret, shift_rtx); >>> if (ret && CONSTANT_P (ret)) >>> { >>> byte = subreg_lowpart_offset (read_mode, new_mode); >>> @@ -1642,7 +1643,8 @@ find_shift_sequence (int access_size, >>> of one dsp where the cost of these two was not the same. But >>> this really is a rare case anyway. */ >>> target = expand_binop (new_mode, lshr_optab, new_reg, >>> - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); >>> + gen_int_shift_amount (new_mode, shift), >>> + new_reg, 1, OPTAB_DIRECT); >>> >>> shift_seq = get_insns (); >>> end_sequence (); >>> Index: gcc/expmed.c >>> =================================================================== >>> --- gcc/expmed.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/expmed.c 2017-11-20 20:37:51.661320782 +0000 >>> @@ -222,7 +222,8 @@ init_expmed_one_mode (struct init_expmed >>> PUT_MODE (all->zext, wider_mode); >>> PUT_MODE (all->wide_mult, wider_mode); >>> PUT_MODE (all->wide_lshr, wider_mode); >>> - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); >>> + XEXP (all->wide_lshr, 1) >>> + = gen_int_shift_amount (wider_mode, mode_bitsize); >>> >>> set_mul_widen_cost (speed, wider_mode, >>> set_src_cost (all->wide_mult, wider_mode, speed)); >>> @@ -909,12 +910,14 @@ store_bit_field_1 (rtx str_rtx, unsigned >>> to make sure that for big-endian machines the higher order >>> bits are used. 
*/ >>> if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) >>> - value_word = simplify_expand_binop (word_mode, lshr_optab, >>> - value_word, >>> - GEN_INT (BITS_PER_WORD >>> - - new_bitsize), >>> - NULL_RTX, true, >>> - OPTAB_LIB_WIDEN); >>> + { >>> + int shift = BITS_PER_WORD - new_bitsize; >>> + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); >>> + value_word = simplify_expand_binop (word_mode, lshr_optab, >>> + value_word, shift_rtx, >>> + NULL_RTX, true, >>> + OPTAB_LIB_WIDEN); >>> + } >>> >>> if (!store_bit_field_1 (op0, new_bitsize, >>> bitnum + bit_offset, >>> @@ -2365,8 +2368,9 @@ expand_shift_1 (enum tree_code code, mac >>> if (CONST_INT_P (op1) >>> && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= >>> (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) >>> - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) >>> - % GET_MODE_BITSIZE (scalar_mode)); >>> + op1 = gen_int_shift_amount (mode, >>> + (unsigned HOST_WIDE_INT) INTVAL (op1) >>> + % GET_MODE_BITSIZE (scalar_mode)); >>> else if (GET_CODE (op1) == SUBREG >>> && subreg_lowpart_p (op1) >>> && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) >>> @@ -2383,7 +2387,8 @@ expand_shift_1 (enum tree_code code, mac >>> && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, >>> GET_MODE_BITSIZE (scalar_mode) - 1)) >>> { >>> - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); >>> + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) >>> + - INTVAL (op1))); >>> left = !left; >>> code = left ? LROTATE_EXPR : RROTATE_EXPR; >>> } >>> @@ -2463,8 +2468,8 @@ expand_shift_1 (enum tree_code code, mac >>> if (op1 == const0_rtx) >>> return shifted; >>> else if (CONST_INT_P (op1)) >>> - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) >>> - - INTVAL (op1)); >>> + other_amount = gen_int_shift_amount >>> + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); >>> else >>> { >>> other_amount >>> @@ -2537,8 +2542,9 @@ expand_shift_1 (enum tree_code code, mac >>> expand_shift (enum tree_code code, machine_mode mode, rtx shifted, >>> int amount, rtx target, int unsignedp) >>> { >>> - return expand_shift_1 (code, mode, >>> - shifted, GEN_INT (amount), target, unsignedp); >>> + return expand_shift_1 (code, mode, shifted, >>> + gen_int_shift_amount (mode, amount), >>> + target, unsignedp); >>> } >>> >>> /* Likewise, but return 0 if that cannot be done. */ >>> @@ -3856,7 +3862,7 @@ expand_smod_pow2 (scalar_int_mode mode, >>> { >>> HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; >>> signmask = force_reg (mode, signmask); >>> - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); >>> + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); >>> >>> /* Use the rtx_cost of a LSHIFTRT instruction to determine >>> which instruction sequence to use. 
If logical right shifts >>> Index: gcc/lower-subreg.c >>> =================================================================== >>> --- gcc/lower-subreg.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/lower-subreg.c 2017-11-20 20:37:51.661320782 +0000 >>> @@ -141,7 +141,7 @@ shift_cost (bool speed_p, struct cost_rt >>> PUT_CODE (rtxes->shift, code); >>> PUT_MODE (rtxes->shift, mode); >>> PUT_MODE (rtxes->source, mode); >>> - XEXP (rtxes->shift, 1) = GEN_INT (op1); >>> + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); >>> return set_src_cost (rtxes->shift, mode, speed_p); >>> } >>> >>> Index: gcc/simplify-rtx.c >>> =================================================================== >>> --- gcc/simplify-rtx.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/simplify-rtx.c 2017-11-20 20:37:51.663320783 +0000 >>> @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod >>> if (STORE_FLAG_VALUE == 1) >>> { >>> temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), >>> - GEN_INT (isize - 1)); >>> + gen_int_shift_amount (inner, >>> + isize - 1)); >>> if (int_mode == inner) >>> return temp; >>> if (GET_MODE_PRECISION (int_mode) > isize) >>> @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod >>> else if (STORE_FLAG_VALUE == -1) >>> { >>> temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), >>> - GEN_INT (isize - 1)); >>> + gen_int_shift_amount (inner, >>> + isize - 1)); >>> if (int_mode == inner) >>> return temp; >>> if (GET_MODE_PRECISION (int_mode) > isize) >>> @@ -2672,7 +2674,8 @@ simplify_binary_operation_1 (enum rtx_co >>> { >>> val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); >>> if (val >= 0) >>> - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); >>> + return simplify_gen_binary (ASHIFT, mode, op0, >>> + gen_int_shift_amount (mode, val)); >>> } >>> >>> /* x*2 is x+x and x*(-1) is -x */ >>> @@ -3296,7 +3299,8 @@ simplify_binary_operation_1 (enum rtx_co >>> /* Convert divide by power of two into shift. */ >>> if (CONST_INT_P (trueop1) >>> && (val = exact_log2 (UINTVAL (trueop1))) > 0) >>> - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); >>> + return simplify_gen_binary (LSHIFTRT, mode, op0, >>> + gen_int_shift_amount (mode, val)); >>> break; >>> >>> case DIV: >>> @@ -3416,10 +3420,12 @@ simplify_binary_operation_1 (enum rtx_co >>> && IN_RANGE (INTVAL (trueop1), >>> GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), >>> GET_MODE_UNIT_PRECISION (mode) - 1)) >>> - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, >>> - mode, op0, >>> - GEN_INT (GET_MODE_UNIT_PRECISION (mode) >>> - - INTVAL (trueop1))); >>> + { >>> + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); >>> + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); >>> + return simplify_gen_binary (code == ROTATE ? 
ROTATERT : ROTATE, >>> + mode, op0, new_amount_rtx); >>> + } >>> #endif >>> /* FALLTHRU */ >>> case ASHIFTRT: >>> @@ -3460,8 +3466,8 @@ simplify_binary_operation_1 (enum rtx_co >>> == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode)) >>> && subreg_lowpart_p (op0)) >>> { >>> - rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1)) >>> - + INTVAL (op1)); >>> + rtx tmp = gen_int_shift_amount >>> + (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1)); >>> tmp = simplify_gen_binary (code, inner_mode, >>> XEXP (SUBREG_REG (op0), 0), >>> tmp); >>> @@ -3472,7 +3478,8 @@ simplify_binary_operation_1 (enum rtx_co >>> { >>> val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1); >>> if (val != INTVAL (op1)) >>> - return simplify_gen_binary (code, mode, op0, GEN_INT (val)); >>> + return simplify_gen_binary (code, mode, op0, >>> + gen_int_shift_amount (mode, val)); >>> } >>> break; >>> >>> Index: gcc/combine.c >>> =================================================================== >>> --- gcc/combine.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/combine.c 2017-11-20 20:37:51.659320782 +0000 >>> @@ -3792,8 +3792,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, >>> && INTVAL (XEXP (*split, 1)) > 0 >>> && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0) >>> { >>> + rtx i_rtx = gen_int_shift_amount (split_mode, i); >>> SUBST (*split, gen_rtx_ASHIFT (split_mode, >>> - XEXP (*split, 0), GEN_INT (i))); >>> + XEXP (*split, 0), i_rtx)); >>> /* Update split_code because we may not have a multiply >>> anymore. */ >>> split_code = GET_CODE (*split); >>> @@ -3807,8 +3808,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, >>> && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0) >>> { >>> rtx nsplit = XEXP (*split, 0); >>> + rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i); >>> SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit), >>> - XEXP (nsplit, 0), GEN_INT (i))); >>> + XEXP (nsplit, 0), >>> + i_rtx)); >>> /* Update split_code because we may not have a multiply >>> anymore. */ >>> split_code = GET_CODE (*split); >>> @@ -5077,12 +5080,12 @@ find_split_point (rtx *loc, rtx_insn *in >>> GET_MODE (XEXP (SET_SRC (x), 0)))))) >>> { >>> machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0)); >>> - >>> + rtx pos_rtx = gen_int_shift_amount (mode, pos); >>> SUBST (SET_SRC (x), >>> gen_rtx_NEG (mode, >>> gen_rtx_LSHIFTRT (mode, >>> XEXP (SET_SRC (x), 0), >>> - GEN_INT (pos)))); >>> + pos_rtx))); >>> >>> split = find_split_point (&SET_SRC (x), insn, true); >>> if (split && split != &SET_SRC (x)) >>> @@ -5140,11 +5143,11 @@ find_split_point (rtx *loc, rtx_insn *in >>> { >>> unsigned HOST_WIDE_INT mask >>> = (HOST_WIDE_INT_1U << len) - 1; >>> + rtx pos_rtx = gen_int_shift_amount (mode, pos); >>> SUBST (SET_SRC (x), >>> gen_rtx_AND (mode, >>> gen_rtx_LSHIFTRT >>> - (mode, gen_lowpart (mode, inner), >>> - GEN_INT (pos)), >>> + (mode, gen_lowpart (mode, inner), pos_rtx), >>> gen_int_mode (mask, mode))); >>> >>> split = find_split_point (&SET_SRC (x), insn, true); >>> @@ -5153,14 +5156,15 @@ find_split_point (rtx *loc, rtx_insn *in >>> } >>> else >>> { >>> + int left_bits = GET_MODE_PRECISION (mode) - len - pos; >>> + int right_bits = GET_MODE_PRECISION (mode) - len; >>> SUBST (SET_SRC (x), >>> gen_rtx_fmt_ee >>> (unsignedp ? 
LSHIFTRT : ASHIFTRT, mode, >>> gen_rtx_ASHIFT (mode, >>> gen_lowpart (mode, inner), >>> - GEN_INT (GET_MODE_PRECISION (mode) >>> - - len - pos)), >>> - GEN_INT (GET_MODE_PRECISION (mode) - len))); >>> + gen_int_shift_amount (mode, left_bits)), >>> + gen_int_shift_amount (mode, right_bits))); >>> >>> split = find_split_point (&SET_SRC (x), insn, true); >>> if (split && split != &SET_SRC (x)) >>> @@ -8935,10 +8939,11 @@ force_int_to_mode (rtx x, scalar_int_mod >>> /* Must be more sign bit copies than the mask needs. */ >>> && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))) >>> >= exact_log2 (mask + 1))) >>> - x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), >>> - GEN_INT (GET_MODE_PRECISION (xmode) >>> - - exact_log2 (mask + 1))); >>> - >>> + { >>> + int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1); >>> + x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), >>> + gen_int_shift_amount (xmode, nbits)); >>> + } >>> goto shiftrt; >>> >>> case ASHIFTRT: >>> @@ -10431,7 +10436,7 @@ simplify_shift_const_1 (enum rtx_code co >>> { >>> enum rtx_code orig_code = code; >>> rtx orig_varop = varop; >>> - int count; >>> + int count, log2; >>> machine_mode mode = result_mode; >>> machine_mode shift_mode; >>> scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, >> int_result_mode; >>> @@ -10634,13 +10639,11 @@ simplify_shift_const_1 (enum rtx_code co >>> is cheaper. But it is still better on those machines to >>> merge two shifts into one. */ >>> if (CONST_INT_P (XEXP (varop, 1)) >>> - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) >>> + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) >>> { >>> - varop >>> - = simplify_gen_binary (ASHIFT, GET_MODE (varop), >>> - XEXP (varop, 0), >>> - GEN_INT (exact_log2 ( >>> - UINTVAL (XEXP (varop, 1))))); >>> + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); >>> + varop = simplify_gen_binary (ASHIFT, GET_MODE (varop), >>> + XEXP (varop, 0), log2_rtx); >>> continue; >>> } >>> break; >>> @@ -10648,13 +10651,11 @@ simplify_shift_const_1 (enum rtx_code co >>> case UDIV: >>> /* Similar, for when divides are cheaper. */ >>> if (CONST_INT_P (XEXP (varop, 1)) >>> - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) >>> + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) >>> { >>> - varop >>> - = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), >>> - XEXP (varop, 0), >>> - GEN_INT (exact_log2 ( >>> - UINTVAL (XEXP (varop, 1))))); >>> + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); >>> + varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), >>> + XEXP (varop, 0), log2_rtx); >>> continue; >>> } >>> break; >>> @@ -10789,10 +10790,10 @@ simplify_shift_const_1 (enum rtx_code co >>> >>> mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode), >>> int_result_mode); >>> - >>> + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); >>> mask_rtx >>> = simplify_const_binary_operation (code, int_result_mode, >>> - mask_rtx, GEN_INT (count)); >>> + mask_rtx, count_rtx); >>> >>> /* Give up if we can't compute an outer operation to use. 
*/ >>> if (mask_rtx == 0 >>> @@ -10848,9 +10849,10 @@ simplify_shift_const_1 (enum rtx_code co >>> if (code == ASHIFTRT && int_mode != int_result_mode) >>> break; >>> >>> + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); >>> rtx new_rtx = simplify_const_binary_operation (code, int_mode, >>> XEXP (varop, 0), >>> - GEN_INT (count)); >>> + count_rtx); >>> varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1)); >>> count = 0; >>> continue; >>> @@ -10916,7 +10918,7 @@ simplify_shift_const_1 (enum rtx_code co >>> && (new_rtx = simplify_const_binary_operation >>> (code, int_result_mode, >>> gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), >>> - GEN_INT (count))) != 0 >>> + gen_int_shift_amount (int_result_mode, count))) != 0 >>> && CONST_INT_P (new_rtx) >>> && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop), >>> INTVAL (new_rtx), int_result_mode, >>> @@ -11059,7 +11061,7 @@ simplify_shift_const_1 (enum rtx_code co >>> && (new_rtx = simplify_const_binary_operation >>> (ASHIFT, int_result_mode, >>> gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), >>> - GEN_INT (count))) != 0 >>> + gen_int_shift_amount (int_result_mode, count))) != 0 >>> && CONST_INT_P (new_rtx) >>> && merge_outer_ops (&outer_op, &outer_const, PLUS, >>> INTVAL (new_rtx), int_result_mode, >>> @@ -11080,7 +11082,7 @@ simplify_shift_const_1 (enum rtx_code co >>> && (new_rtx = simplify_const_binary_operation >>> (code, int_result_mode, >>> gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), >>> - GEN_INT (count))) != 0 >>> + gen_int_shift_amount (int_result_mode, count))) != 0 >>> && CONST_INT_P (new_rtx) >>> && merge_outer_ops (&outer_op, &outer_const, XOR, >>> INTVAL (new_rtx), int_result_mode, >>> @@ -11135,12 +11137,12 @@ simplify_shift_const_1 (enum rtx_code co >>> - GET_MODE_UNIT_PRECISION (GET_MODE (varop))))) >>> { >>> rtx varop_inner = XEXP (varop, 0); >>> - >>> - varop_inner >>> - = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), >>> - XEXP (varop_inner, 0), >>> - GEN_INT >>> - (count + INTVAL (XEXP (varop_inner, 1)))); >>> + int new_count = count + INTVAL (XEXP (varop_inner, 1)); >>> + rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner), >>> + new_count); >>> + varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), >>> + XEXP (varop_inner, 0), >>> + new_count_rtx); >>> varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner); >>> count = 0; >>> continue; >>> @@ -11192,7 +11194,8 @@ simplify_shift_const_1 (enum rtx_code co >>> x = NULL_RTX; >>> >>> if (x == NULL_RTX) >>> - x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count)); >>> + x = simplify_gen_binary (code, shift_mode, varop, >>> + gen_int_shift_amount (shift_mode, count)); >>> >>> /* If we were doing an LSHIFTRT in a wider mode than it was originally, >>> turn off all the bits that the shift would have turned off. 
*/ >>> @@ -11254,7 +11257,8 @@ simplify_shift_const (rtx x, enum rtx_co >>> return tem; >>> >>> if (!x) >>> - x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count)); >>> + x = simplify_gen_binary (code, GET_MODE (varop), varop, >>> + gen_int_shift_amount (GET_MODE (varop), count)); >>> if (GET_MODE (x) != result_mode) >>> x = gen_lowpart (result_mode, x); >>> return x; >>> @@ -11445,8 +11449,9 @@ change_zero_ext (rtx pat) >>> if (BITS_BIG_ENDIAN) >>> start = GET_MODE_PRECISION (inner_mode) - size - start; >>> >>> - if (start) >>> - x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start)); >>> + if (start != 0) >>> + x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), >>> + gen_int_shift_amount (inner_mode, start)); >>> else >>> x = XEXP (x, 0); >>> if (mode != inner_mode) >>> Index: gcc/optabs.c >>> =================================================================== >>> --- gcc/optabs.c 2017-11-20 20:37:41.918226976 +0000 >>> +++ gcc/optabs.c 2017-11-20 20:37:51.662320782 +0000 >>> @@ -431,8 +431,9 @@ expand_superword_shift (optab binoptab, >>> if (binoptab != ashr_optab) >>> emit_move_insn (outof_target, CONST0_RTX (word_mode)); >>> else >>> - if (!force_expand_binop (word_mode, binoptab, >>> - outof_input, GEN_INT (BITS_PER_WORD - 1), >>> + if (!force_expand_binop (word_mode, binoptab, outof_input, >>> + gen_int_shift_amount (word_mode, >>> + BITS_PER_WORD - 1), >>> outof_target, unsignedp, methods)) >>> return false; >>> } >>> @@ -789,7 +790,8 @@ expand_doubleword_mult (machine_mode mod >>> { >>> int low = (WORDS_BIG_ENDIAN ? 1 : 0); >>> int high = (WORDS_BIG_ENDIAN ? 0 : 1); >>> - rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1); >>> + rtx wordm1 = (umulp ? NULL_RTX >>> + : gen_int_shift_amount (word_mode, BITS_PER_WORD - 1)); >>> rtx product, adjust, product_high, temp; >>> >>> rtx op0_high = operand_subword_force (op0, high, mode); >>> @@ -1185,7 +1187,7 @@ expand_binop (machine_mode mode, optab b >>> unsigned int bits = GET_MODE_PRECISION (int_mode); >>> >>> if (CONST_INT_P (op1)) >>> - newop1 = GEN_INT (bits - INTVAL (op1)); >>> + newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1)); >>> else if (targetm.shift_truncation_mask (int_mode) == bits - 1) >>> newop1 = negate_rtx (GET_MODE (op1), op1); >>> else >>> @@ -1403,7 +1405,7 @@ expand_binop (machine_mode mode, optab b >>> >>> /* Apply the truncation to constant shifts. 
*/ >>> if (double_shift_mask > 0 && CONST_INT_P (op1)) >>> - op1 = GEN_INT (INTVAL (op1) & double_shift_mask); >>> + op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode); >>> >>> if (op1 == CONST0_RTX (op1_mode)) >>> return op0; >>> @@ -1513,7 +1515,7 @@ expand_binop (machine_mode mode, optab b >>> else >>> { >>> rtx into_temp1, into_temp2, outof_temp1, outof_temp2; >>> - rtx first_shift_count, second_shift_count; >>> + HOST_WIDE_INT first_shift_count, second_shift_count; >>> optab reverse_unsigned_shift, unsigned_shift; >>> >>> reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD) >>> @@ -1524,20 +1526,24 @@ expand_binop (machine_mode mode, optab b >>> >>> if (shift_count > BITS_PER_WORD) >>> { >>> - first_shift_count = GEN_INT (shift_count - BITS_PER_WORD); >>> - second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count); >>> + first_shift_count = shift_count - BITS_PER_WORD; >>> + second_shift_count = 2 * BITS_PER_WORD - shift_count; >>> } >>> else >>> { >>> - first_shift_count = GEN_INT (BITS_PER_WORD - shift_count); >>> - second_shift_count = GEN_INT (shift_count); >>> + first_shift_count = BITS_PER_WORD - shift_count; >>> + second_shift_count = shift_count; >>> } >>> + rtx first_shift_count_rtx >>> + = gen_int_shift_amount (word_mode, first_shift_count); >>> + rtx second_shift_count_rtx >>> + = gen_int_shift_amount (word_mode, second_shift_count); >>> >>> into_temp1 = expand_binop (word_mode, unsigned_shift, >>> - outof_input, first_shift_count, >>> + outof_input, first_shift_count_rtx, >>> NULL_RTX, unsignedp, next_methods); >>> into_temp2 = expand_binop (word_mode, reverse_unsigned_shift, >>> - into_input, second_shift_count, >>> + into_input, second_shift_count_rtx, >>> NULL_RTX, unsignedp, next_methods); >>> >>> if (into_temp1 != 0 && into_temp2 != 0) >>> @@ -1550,10 +1556,10 @@ expand_binop (machine_mode mode, optab b >>> emit_move_insn (into_target, inter); >>> >>> outof_temp1 = expand_binop (word_mode, unsigned_shift, >>> - into_input, first_shift_count, >>> + into_input, first_shift_count_rtx, >>> NULL_RTX, unsignedp, next_methods); >>> outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift, >>> - outof_input, second_shift_count, >>> + outof_input, second_shift_count_rtx, >>> NULL_RTX, unsignedp, next_methods); >>> >>> if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0) >>> @@ -2793,25 +2799,29 @@ expand_unop (machine_mode mode, optab un >>> >>> if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing) >>> { >>> - temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target, >>> - unsignedp, OPTAB_DIRECT); >>> + temp = expand_binop (mode, rotl_optab, op0, >>> + gen_int_shift_amount (mode, 8), >>> + target, unsignedp, OPTAB_DIRECT); >>> if (temp) >>> return temp; >>> } >>> >>> if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing) >>> { >>> - temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target, >>> - unsignedp, OPTAB_DIRECT); >>> + temp = expand_binop (mode, rotr_optab, op0, >>> + gen_int_shift_amount (mode, 8), >>> + target, unsignedp, OPTAB_DIRECT); >>> if (temp) >>> return temp; >>> } >>> >>> last = get_last_insn (); >>> >>> - temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX, >>> + temp1 = expand_binop (mode, ashl_optab, op0, >>> + gen_int_shift_amount (mode, 8), NULL_RTX, >>> unsignedp, OPTAB_WIDEN); >>> - temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX, >>> + temp2 = expand_binop (mode, lshr_optab, op0, >>> + gen_int_shift_amount (mode, 8), NULL_RTX, >>> unsignedp, 
OPTAB_WIDEN); >>> if (temp1 && temp2) >>> { >>> @@ -5369,11 +5379,11 @@ vector_compare_rtx (machine_mode cmp_mod >>> } >>> >>> /* Checks if vec_perm mask SEL is a constant equivalent to a shift of >> the first >>> - vec_perm operand, assuming the second operand is a constant vector >> of zeroes. >>> - Return the shift distance in bits if so, or NULL_RTX if the vec_perm >> is not a >>> - shift. */ >>> + vec_perm operand (which has mode OP0_MODE), assuming the second >>> + operand is a constant vector of zeroes. Return the shift distance in >>> + bits if so, or NULL_RTX if the vec_perm is not a shift. */ >>> static rtx >>> -shift_amt_for_vec_perm_mask (rtx sel) >>> +shift_amt_for_vec_perm_mask (machine_mode op0_mode, rtx sel) >>> { >>> unsigned int i, first, nelt = GET_MODE_NUNITS (GET_MODE (sel)); >>> unsigned int bitsize = GET_MODE_UNIT_BITSIZE (GET_MODE (sel)); >>> @@ -5393,7 +5403,7 @@ shift_amt_for_vec_perm_mask (rtx sel) >>> return NULL_RTX; >>> } >>> >>> - return GEN_INT (first * bitsize); >>> + return gen_int_shift_amount (op0_mode, first * bitsize); >>> } >>> >>> /* A subroutine of expand_vec_perm for expanding one vec_perm insn. */ >>> @@ -5473,7 +5483,7 @@ expand_vec_perm (machine_mode mode, rtx >>> && (shift_code != CODE_FOR_nothing >>> || shift_code_qi != CODE_FOR_nothing)) >>> { >>> - shift_amt = shift_amt_for_vec_perm_mask (sel); >>> + shift_amt = shift_amt_for_vec_perm_mask (mode, sel); >>> if (shift_amt) >>> { >>> struct expand_operand ops[3]; >>> @@ -5563,7 +5573,8 @@ expand_vec_perm (machine_mode mode, rtx >>> NULL, 0, OPTAB_DIRECT); >>> else >>> sel = expand_simple_binop (selmode, ASHIFT, sel, >>> - GEN_INT (exact_log2 (u)), >>> + gen_int_shift_amount (selmode, >>> + exact_log2 (u)), >>> NULL, 0, OPTAB_DIRECT); >>> gcc_assert (sel != NULL); >>>
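[To make the earlier point about out-of-range shift amounts concrete: a constant amount of -1 reads as 255 when interpreted in QImode, 65535 in HImode, and stays -1 in a 64-bit mode, which is what the "infinite precision" choice discussed in this thread amounts to. A small standalone illustration in plain host C, not GCC internals:

#include <stdio.h>
#include <stdint.h>

int
main (void)
{
  int64_t amount = -1;
  /* Read back as an 8-bit (QImode-sized) unsigned shift amount.  */
  printf ("8-bit view:  %u\n", (unsigned) (uint8_t) amount);   /* 255 */
  /* Read back as a 16-bit (HImode-sized) unsigned shift amount.  */
  printf ("16-bit view: %u\n", (unsigned) (uint16_t) amount);  /* 65535 */
  /* A 64-bit shift-amount mode preserves the value as written.  */
  printf ("64-bit view: %lld\n", (long long) amount);          /* -1 */
  return 0;
}
]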
Richard Biener <richard.guenther@gmail.com> writes: > On Fri, Dec 15, 2017 at 1:48 AM, Richard Sandiford > <richard.sandiford@linaro.org> wrote: >> Richard Biener <richard.guenther@gmail.com> writes: >>> On Mon, Nov 20, 2017 at 10:02 PM, Richard Sandiford >>> <richard.sandiford@linaro.org> wrote: >>>> Richard Biener <richard.guenther@gmail.com> writes: >>>>> On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener >>>>> <richard.guenther@gmail.com> wrote: >>>>>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>>>>> <richard.sandiford@linaro.org> wrote: >>>>>>> This patch adds a stub helper routine to provide the mode >>>>>>> of a scalar shift amount, given the mode of the values >>>>>>> being shifted. >>>>>>> >>>>>>> One long-standing problem has been to decide what this mode >>>>>>> should be for arbitrary rtxes (as opposed to those directly >>>>>>> tied to a target pattern). Is it the mode of the shifted >>>>>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>>>>> the corresponding target pattern says? (In which case what >>>>>>> should the mode be when the target doesn't have a pattern?) >>>>>>> >>>>>>> For now the patch picks word_mode, which should be safe on >>>>>>> all targets but could perhaps become suboptimal if the helper >>>>>>> routine is used more often than it is in this patch. As it >>>>>>> stands the patch does not change the generated code. >>>>>>> >>>>>>> The patch also adds a helper function that constructs rtxes >>>>>>> for constant shift amounts, again given the mode of the value >>>>>>> being shifted. As well as helping with the SVE patches, this >>>>>>> is one step towards allowing CONST_INTs to have a real mode. >>>>>> >>>>>> I think gen_shift_amount_mode is flawed and while encapsulating >>>>>> constant shift amount RTX generation into a gen_int_shift_amount >>>>>> looks good to me I'd rather have that ??? in this function (and >>>>>> I'd use the mode of the RTX shifted, not word_mode...). >>>> >>>> OK. I'd gone for word_mode because that's what expand_binop uses >>>> for CONST_INTs: >>>> >>>> op1_mode = (GET_MODE (op1) != VOIDmode >>>> ? as_a <scalar_int_mode> (GET_MODE (op1)) >>>> : word_mode); >>>> >>>> But using the inner mode should be fine too. The patch below does that. >>>> >>>>>> In the end it's up to insn recognizing to convert the op to the >>>>>> expected mode and for generic RTL it's us that should decide >>>>>> on the mode -- on GENERIC the shift amount has to be an >>>>>> integer so why not simply use a mode that is large enough to >>>>>> make the constant fit? >>>> >>>> ...but I can do that instead if you think it's better. >>>> >>>>>> Just throwing in some comments here, RTL isn't my primary >>>>>> expertise. >>>>> >>>>> To add a little bit - shift amounts is maybe the only(?) place >>>>> where a modeless CONST_INT makes sense! So "fixing" >>>>> that first sounds backwards. >>>> >>>> But even here they have a mode conceptually, since out-of-range shift >>>> amounts are target-defined rather than undefined. E.g. if the target >>>> interprets the shift amount as unsigned, then for a shift amount >>>> (const_int -1) it matters whether the mode is QImode (and so we're >>>> shifting by 255) or HImode (and so we're shifting by 65535. >>> >>> I think RTL is well-defined (at least I hope so ...) and machine constraints >>> need to be modeled explicitely (like embedding an implicit bit_and in >>> shift patterns). 
>> >> Well, RTL is well-defined in the sense that if you have >> >> (ashift X (foo:HI ...)) >> >> then the shift amount must be interpreted as HImode rather than some >> other mode. The problem here is to define a default choice of mode for >> const_ints, in cases where the shift is being created out of the blue. >> >> Whether the shift amount is effectively signed or unsigned isn't defined >> by RTL without SHIFT_COUNT_TRUNCATED, since the choice only matters for >> out-of-range values, and the behaviour for out-of-range RTL shifts is >> specifically treated as target-defined without SHIFT_COUNT_TRUNCATED. >> >> I think the revised patch does implement your suggestion of using the >> integer equivalent of the inner mode as the default, but we need to >> decide whether to go with it, go with the original word_mode approach >> (taken from existing expand_binop code) or something else. Something >> else could include the widest supported integer mode, so that we never >> change the value. > > I guess it's pretty arbitrary what we choose (but we might need to adjust > targets?). For something like this an appealing choice would be sth > that is host and target idependent, like [u]int32_t or given CONST_INT > is always 64bits now and signed int64_t aka HOST_WIDE_INT (bad > name now). That means it's the "infinite precision" thing that fits > into CONST_INT ;) Sounds OK to me. How about the attached? Thanks, Richard 2017-12-15 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * emit-rtl.h (gen_int_shift_amount): Declare. * emit-rtl.c (gen_int_shift_amount): New function. * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount instead of GEN_INT. * calls.c (shift_return_value): Likewise. * cse.c (fold_rtx): Likewise. * dse.c (find_shift_sequence): Likewise. * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) (expand_shift, expand_smod_pow2): Likewise. * lower-subreg.c (shift_cost): Likewise. * optabs.c (expand_superword_shift, expand_doubleword_mult) (expand_unop, expand_binop, shift_amt_for_vec_perm_mask) (expand_vec_perm_var): Likewise. * simplify-rtx.c (simplify_unary_operation_1): Likewise. (simplify_binary_operation_1): Likewise. * combine.c (try_combine, find_split_point, force_int_to_mode) (simplify_shift_const_1, simplify_shift_const): Likewise. (change_zero_ext): Likewise. Use simplify_gen_binary. Index: gcc/emit-rtl.h =================================================================== --- gcc/emit-rtl.h 2017-12-15 15:14:43.101350556 +0000 +++ gcc/emit-rtl.h 2017-12-15 15:14:43.345343745 +0000 @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); extern void adjust_reg_mode (rtx, machine_mode); extern int mem_expr_equal_p (const_tree, const_tree); +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); extern bool need_atomic_barrier_p (enum memmodel, bool); Index: gcc/emit-rtl.c =================================================================== --- gcc/emit-rtl.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/emit-rtl.c 2017-12-15 15:14:43.345343745 +0000 @@ -6418,6 +6418,21 @@ need_atomic_barrier_p (enum memmodel mod } } +/* Return a constant shift amount for shifting a value of mode MODE + by VALUE bits. */ + +rtx +gen_int_shift_amount (machine_mode, HOST_WIDE_INT value) +{ + /* Try to use a 64-bit mode, to avoid any truncation, but honor + MAX_FIXED_MODE_SIZE if that's smaller. + + ??? 
Perhaps this should be automatically derived from the .md files + instead, or perhaps have a target hook. */ + scalar_int_mode shift_mode = int_mode_for_size (64, 1).require (); + return gen_int_mode (value, shift_mode); +} + /* Initialize fields of rtl_data related to stack alignment. */ void Index: gcc/asan.c =================================================================== --- gcc/asan.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/asan.c 2017-12-15 15:14:43.342343829 +0000 @@ -1386,7 +1386,7 @@ asan_emit_stack_protection (rtx base, rt TREE_ASM_WRITTEN (id) = 1; emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); shadow_base = expand_binop (Pmode, lshr_optab, base, - GEN_INT (ASAN_SHADOW_SHIFT), + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), NULL_RTX, 1, OPTAB_DIRECT); shadow_base = plus_constant (Pmode, shadow_base, Index: gcc/calls.c =================================================================== --- gcc/calls.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/calls.c 2017-12-15 15:14:43.342343829 +0000 @@ -2900,15 +2900,17 @@ shift_return_value (machine_mode mode, b HOST_WIDE_INT shift; gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); + machine_mode value_mode = GET_MODE (value); + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); if (shift == 0) return false; /* Use ashr rather than lshr for right shifts. This is for the benefit of the MIPS port, which requires SImode values to be sign-extended when stored in 64-bit registers. */ - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : ashr_optab, - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) + if (!force_expand_binop (value_mode, left_p ? ashl_optab : ashr_optab, + value, gen_int_shift_amount (value_mode, shift), + value, 1, OPTAB_WIDEN)) gcc_unreachable (); return true; } Index: gcc/cse.c =================================================================== --- gcc/cse.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/cse.c 2017-12-15 15:14:43.344343773 +0000 @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (const_arg1) < 0)) { if (SHIFT_COUNT_TRUNCATED) - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + canon_const_arg1 = gen_int_shift_amount + (mode, (INTVAL (const_arg1) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (inner_const) < 0)) { if (SHIFT_COUNT_TRUNCATED) - inner_const = GEN_INT (INTVAL (inner_const) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + inner_const = gen_int_shift_amount + (mode, (INTVAL (inner_const) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) /* As an exception, we can turn an ASHIFTRT of this form into a shift of the number of bits - 1. 
*/ if (code == ASHIFTRT) - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); + new_const = gen_int_shift_amount + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); else if (!side_effects_p (XEXP (y, 0))) return CONST0_RTX (mode); else Index: gcc/dse.c =================================================================== --- gcc/dse.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/dse.c 2017-12-15 15:14:43.345343745 +0000 @@ -1642,8 +1642,9 @@ find_shift_sequence (int access_size, store_mode, byte); if (ret && CONSTANT_P (ret)) { + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); ret = simplify_const_binary_operation (LSHIFTRT, new_mode, - ret, GEN_INT (shift)); + ret, shift_rtx); if (ret && CONSTANT_P (ret)) { byte = subreg_lowpart_offset (read_mode, new_mode); @@ -1679,7 +1680,8 @@ find_shift_sequence (int access_size, of one dsp where the cost of these two was not the same. But this really is a rare case anyway. */ target = expand_binop (new_mode, lshr_optab, new_reg, - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); + gen_int_shift_amount (new_mode, shift), + new_reg, 1, OPTAB_DIRECT); shift_seq = get_insns (); end_sequence (); Index: gcc/expmed.c =================================================================== --- gcc/expmed.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/expmed.c 2017-12-15 15:14:43.346343717 +0000 @@ -223,7 +223,8 @@ init_expmed_one_mode (struct init_expmed PUT_MODE (all->zext, wider_mode); PUT_MODE (all->wide_mult, wider_mode); PUT_MODE (all->wide_lshr, wider_mode); - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); + XEXP (all->wide_lshr, 1) + = gen_int_shift_amount (wider_mode, mode_bitsize); set_mul_widen_cost (speed, wider_mode, set_src_cost (all->wide_mult, wider_mode, speed)); @@ -910,12 +911,14 @@ store_bit_field_1 (rtx str_rtx, unsigned to make sure that for big-endian machines the higher order bits are used. */ if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) - value_word = simplify_expand_binop (word_mode, lshr_optab, - value_word, - GEN_INT (BITS_PER_WORD - - new_bitsize), - NULL_RTX, true, - OPTAB_LIB_WIDEN); + { + int shift = BITS_PER_WORD - new_bitsize; + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); + value_word = simplify_expand_binop (word_mode, lshr_optab, + value_word, shift_rtx, + NULL_RTX, true, + OPTAB_LIB_WIDEN); + } if (!store_bit_field_1 (op0, new_bitsize, bitnum + bit_offset, @@ -2366,8 +2369,9 @@ expand_shift_1 (enum tree_code code, mac if (CONST_INT_P (op1) && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) - % GET_MODE_BITSIZE (scalar_mode)); + op1 = gen_int_shift_amount (mode, + (unsigned HOST_WIDE_INT) INTVAL (op1) + % GET_MODE_BITSIZE (scalar_mode)); else if (GET_CODE (op1) == SUBREG && subreg_lowpart_p (op1) && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) @@ -2384,7 +2388,8 @@ expand_shift_1 (enum tree_code code, mac && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, GET_MODE_BITSIZE (scalar_mode) - 1)) { - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) + - INTVAL (op1))); left = !left; code = left ? 
LROTATE_EXPR : RROTATE_EXPR; } @@ -2464,8 +2469,8 @@ expand_shift_1 (enum tree_code code, mac if (op1 == const0_rtx) return shifted; else if (CONST_INT_P (op1)) - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - - INTVAL (op1)); + other_amount = gen_int_shift_amount + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); else { other_amount @@ -2538,8 +2543,9 @@ expand_shift_1 (enum tree_code code, mac expand_shift (enum tree_code code, machine_mode mode, rtx shifted, int amount, rtx target, int unsignedp) { - return expand_shift_1 (code, mode, - shifted, GEN_INT (amount), target, unsignedp); + return expand_shift_1 (code, mode, shifted, + gen_int_shift_amount (mode, amount), + target, unsignedp); } /* Likewise, but return 0 if that cannot be done. */ @@ -3857,7 +3863,7 @@ expand_smod_pow2 (scalar_int_mode mode, { HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; signmask = force_reg (mode, signmask); - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); /* Use the rtx_cost of a LSHIFTRT instruction to determine which instruction sequence to use. If logical right shifts Index: gcc/lower-subreg.c =================================================================== --- gcc/lower-subreg.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/lower-subreg.c 2017-12-15 15:14:43.346343717 +0000 @@ -141,7 +141,7 @@ shift_cost (bool speed_p, struct cost_rt PUT_CODE (rtxes->shift, code); PUT_MODE (rtxes->shift, mode); PUT_MODE (rtxes->source, mode); - XEXP (rtxes->shift, 1) = GEN_INT (op1); + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); return set_src_cost (rtxes->shift, mode, speed_p); } Index: gcc/optabs.c =================================================================== --- gcc/optabs.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/optabs.c 2017-12-15 15:14:43.347343689 +0000 @@ -421,8 +421,9 @@ expand_superword_shift (optab binoptab, if (binoptab != ashr_optab) emit_move_insn (outof_target, CONST0_RTX (word_mode)); else - if (!force_expand_binop (word_mode, binoptab, - outof_input, GEN_INT (BITS_PER_WORD - 1), + if (!force_expand_binop (word_mode, binoptab, outof_input, + gen_int_shift_amount (word_mode, + BITS_PER_WORD - 1), outof_target, unsignedp, methods)) return false; } @@ -779,7 +780,8 @@ expand_doubleword_mult (machine_mode mod { int low = (WORDS_BIG_ENDIAN ? 1 : 0); int high = (WORDS_BIG_ENDIAN ? 0 : 1); - rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1); + rtx wordm1 = (umulp ? NULL_RTX + : gen_int_shift_amount (word_mode, BITS_PER_WORD - 1)); rtx product, adjust, product_high, temp; rtx op0_high = operand_subword_force (op0, high, mode); @@ -1180,7 +1182,7 @@ expand_binop (machine_mode mode, optab b unsigned int bits = GET_MODE_PRECISION (int_mode); if (CONST_INT_P (op1)) - newop1 = GEN_INT (bits - INTVAL (op1)); + newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1)); else if (targetm.shift_truncation_mask (int_mode) == bits - 1) newop1 = negate_rtx (GET_MODE (op1), op1); else @@ -1402,7 +1404,7 @@ expand_binop (machine_mode mode, optab b /* Apply the truncation to constant shifts. 
*/ if (double_shift_mask > 0 && CONST_INT_P (op1)) - op1 = GEN_INT (INTVAL (op1) & double_shift_mask); + op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode); if (op1 == CONST0_RTX (op1_mode)) return op0; @@ -1512,7 +1514,7 @@ expand_binop (machine_mode mode, optab b else { rtx into_temp1, into_temp2, outof_temp1, outof_temp2; - rtx first_shift_count, second_shift_count; + HOST_WIDE_INT first_shift_count, second_shift_count; optab reverse_unsigned_shift, unsigned_shift; reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD) @@ -1523,20 +1525,24 @@ expand_binop (machine_mode mode, optab b if (shift_count > BITS_PER_WORD) { - first_shift_count = GEN_INT (shift_count - BITS_PER_WORD); - second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count); + first_shift_count = shift_count - BITS_PER_WORD; + second_shift_count = 2 * BITS_PER_WORD - shift_count; } else { - first_shift_count = GEN_INT (BITS_PER_WORD - shift_count); - second_shift_count = GEN_INT (shift_count); + first_shift_count = BITS_PER_WORD - shift_count; + second_shift_count = shift_count; } + rtx first_shift_count_rtx + = gen_int_shift_amount (word_mode, first_shift_count); + rtx second_shift_count_rtx + = gen_int_shift_amount (word_mode, second_shift_count); into_temp1 = expand_binop (word_mode, unsigned_shift, - outof_input, first_shift_count, + outof_input, first_shift_count_rtx, NULL_RTX, unsignedp, next_methods); into_temp2 = expand_binop (word_mode, reverse_unsigned_shift, - into_input, second_shift_count, + into_input, second_shift_count_rtx, NULL_RTX, unsignedp, next_methods); if (into_temp1 != 0 && into_temp2 != 0) @@ -1549,10 +1555,10 @@ expand_binop (machine_mode mode, optab b emit_move_insn (into_target, inter); outof_temp1 = expand_binop (word_mode, unsigned_shift, - into_input, first_shift_count, + into_input, first_shift_count_rtx, NULL_RTX, unsignedp, next_methods); outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift, - outof_input, second_shift_count, + outof_input, second_shift_count_rtx, NULL_RTX, unsignedp, next_methods); if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0) @@ -2792,25 +2798,29 @@ expand_unop (machine_mode mode, optab un if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing) { - temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target, - unsignedp, OPTAB_DIRECT); + temp = expand_binop (mode, rotl_optab, op0, + gen_int_shift_amount (mode, 8), + target, unsignedp, OPTAB_DIRECT); if (temp) return temp; } if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing) { - temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target, - unsignedp, OPTAB_DIRECT); + temp = expand_binop (mode, rotr_optab, op0, + gen_int_shift_amount (mode, 8), + target, unsignedp, OPTAB_DIRECT); if (temp) return temp; } last = get_last_insn (); - temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX, + temp1 = expand_binop (mode, ashl_optab, op0, + gen_int_shift_amount (mode, 8), NULL_RTX, unsignedp, OPTAB_WIDEN); - temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX, + temp2 = expand_binop (mode, lshr_optab, op0, + gen_int_shift_amount (mode, 8), NULL_RTX, unsignedp, OPTAB_WIDEN); if (temp1 && temp2) { @@ -5392,7 +5402,7 @@ shift_amt_for_vec_perm_mask (rtx sel) return NULL_RTX; } - return GEN_INT (first * bitsize); + return gen_int_shift_amount (GET_MODE (sel), first * bitsize); } /* A subroutine of expand_vec_perm for expanding one vec_perm insn. 
*/ @@ -5562,7 +5572,8 @@ expand_vec_perm (machine_mode mode, rtx NULL, 0, OPTAB_DIRECT); else sel = expand_simple_binop (selmode, ASHIFT, sel, - GEN_INT (exact_log2 (u)), + gen_int_shift_amount (selmode, + exact_log2 (u)), NULL, 0, OPTAB_DIRECT); gcc_assert (sel != NULL); Index: gcc/simplify-rtx.c =================================================================== --- gcc/simplify-rtx.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/simplify-rtx.c 2017-12-15 15:14:43.347343689 +0000 @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod if (STORE_FLAG_VALUE == 1) { temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod else if (STORE_FLAG_VALUE == -1) { temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -2672,7 +2674,8 @@ simplify_binary_operation_1 (enum rtx_co { val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); if (val >= 0) - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (ASHIFT, mode, op0, + gen_int_shift_amount (mode, val)); } /* x*2 is x+x and x*(-1) is -x */ @@ -3296,7 +3299,8 @@ simplify_binary_operation_1 (enum rtx_co /* Convert divide by power of two into shift. */ if (CONST_INT_P (trueop1) && (val = exact_log2 (UINTVAL (trueop1))) > 0) - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (LSHIFTRT, mode, op0, + gen_int_shift_amount (mode, val)); break; case DIV: @@ -3416,10 +3420,12 @@ simplify_binary_operation_1 (enum rtx_co && IN_RANGE (INTVAL (trueop1), GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), GET_MODE_UNIT_PRECISION (mode) - 1)) - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, - mode, op0, - GEN_INT (GET_MODE_UNIT_PRECISION (mode) - - INTVAL (trueop1))); + { + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); + return simplify_gen_binary (code == ROTATE ? 
ROTATERT : ROTATE, + mode, op0, new_amount_rtx); + } #endif /* FALLTHRU */ case ASHIFTRT: @@ -3460,8 +3466,8 @@ simplify_binary_operation_1 (enum rtx_co == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode)) && subreg_lowpart_p (op0)) { - rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1)) - + INTVAL (op1)); + rtx tmp = gen_int_shift_amount + (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1)); tmp = simplify_gen_binary (code, inner_mode, XEXP (SUBREG_REG (op0), 0), tmp); @@ -3472,7 +3478,8 @@ simplify_binary_operation_1 (enum rtx_co { val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1); if (val != INTVAL (op1)) - return simplify_gen_binary (code, mode, op0, GEN_INT (val)); + return simplify_gen_binary (code, mode, op0, + gen_int_shift_amount (mode, val)); } break; Index: gcc/combine.c =================================================================== --- gcc/combine.c 2017-12-15 15:14:43.101350556 +0000 +++ gcc/combine.c 2017-12-15 15:14:43.344343773 +0000 @@ -3804,8 +3804,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, && INTVAL (XEXP (*split, 1)) > 0 && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0) { + rtx i_rtx = gen_int_shift_amount (split_mode, i); SUBST (*split, gen_rtx_ASHIFT (split_mode, - XEXP (*split, 0), GEN_INT (i))); + XEXP (*split, 0), i_rtx)); /* Update split_code because we may not have a multiply anymore. */ split_code = GET_CODE (*split); @@ -3819,8 +3820,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0) { rtx nsplit = XEXP (*split, 0); + rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i); SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit), - XEXP (nsplit, 0), GEN_INT (i))); + XEXP (nsplit, 0), + i_rtx)); /* Update split_code because we may not have a multiply anymore. */ split_code = GET_CODE (*split); @@ -5088,12 +5091,12 @@ find_split_point (rtx *loc, rtx_insn *in GET_MODE (XEXP (SET_SRC (x), 0)))))) { machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0)); - + rtx pos_rtx = gen_int_shift_amount (mode, pos); SUBST (SET_SRC (x), gen_rtx_NEG (mode, gen_rtx_LSHIFTRT (mode, XEXP (SET_SRC (x), 0), - GEN_INT (pos)))); + pos_rtx))); split = find_split_point (&SET_SRC (x), insn, true); if (split && split != &SET_SRC (x)) @@ -5151,11 +5154,11 @@ find_split_point (rtx *loc, rtx_insn *in { unsigned HOST_WIDE_INT mask = (HOST_WIDE_INT_1U << len) - 1; + rtx pos_rtx = gen_int_shift_amount (mode, pos); SUBST (SET_SRC (x), gen_rtx_AND (mode, gen_rtx_LSHIFTRT - (mode, gen_lowpart (mode, inner), - GEN_INT (pos)), + (mode, gen_lowpart (mode, inner), pos_rtx), gen_int_mode (mask, mode))); split = find_split_point (&SET_SRC (x), insn, true); @@ -5164,14 +5167,15 @@ find_split_point (rtx *loc, rtx_insn *in } else { + int left_bits = GET_MODE_PRECISION (mode) - len - pos; + int right_bits = GET_MODE_PRECISION (mode) - len; SUBST (SET_SRC (x), gen_rtx_fmt_ee (unsignedp ? LSHIFTRT : ASHIFTRT, mode, gen_rtx_ASHIFT (mode, gen_lowpart (mode, inner), - GEN_INT (GET_MODE_PRECISION (mode) - - len - pos)), - GEN_INT (GET_MODE_PRECISION (mode) - len))); + gen_int_shift_amount (mode, left_bits)), + gen_int_shift_amount (mode, right_bits))); split = find_split_point (&SET_SRC (x), insn, true); if (split && split != &SET_SRC (x)) @@ -8952,10 +8956,11 @@ force_int_to_mode (rtx x, scalar_int_mod /* Must be more sign bit copies than the mask needs. 
*/ && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))) >= exact_log2 (mask + 1))) - x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), - GEN_INT (GET_MODE_PRECISION (xmode) - - exact_log2 (mask + 1))); - + { + int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1); + x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), + gen_int_shift_amount (xmode, nbits)); + } goto shiftrt; case ASHIFTRT: @@ -10448,7 +10453,7 @@ simplify_shift_const_1 (enum rtx_code co { enum rtx_code orig_code = code; rtx orig_varop = varop; - int count; + int count, log2; machine_mode mode = result_mode; machine_mode shift_mode; scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode; @@ -10651,13 +10656,11 @@ simplify_shift_const_1 (enum rtx_code co is cheaper. But it is still better on those machines to merge two shifts into one. */ if (CONST_INT_P (XEXP (varop, 1)) - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) { - varop - = simplify_gen_binary (ASHIFT, GET_MODE (varop), - XEXP (varop, 0), - GEN_INT (exact_log2 ( - UINTVAL (XEXP (varop, 1))))); + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); + varop = simplify_gen_binary (ASHIFT, GET_MODE (varop), + XEXP (varop, 0), log2_rtx); continue; } break; @@ -10665,13 +10668,11 @@ simplify_shift_const_1 (enum rtx_code co case UDIV: /* Similar, for when divides are cheaper. */ if (CONST_INT_P (XEXP (varop, 1)) - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) { - varop - = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), - XEXP (varop, 0), - GEN_INT (exact_log2 ( - UINTVAL (XEXP (varop, 1))))); + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); + varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), + XEXP (varop, 0), log2_rtx); continue; } break; @@ -10806,10 +10807,10 @@ simplify_shift_const_1 (enum rtx_code co mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode), int_result_mode); - + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); mask_rtx = simplify_const_binary_operation (code, int_result_mode, - mask_rtx, GEN_INT (count)); + mask_rtx, count_rtx); /* Give up if we can't compute an outer operation to use. 
*/ if (mask_rtx == 0 @@ -10865,9 +10866,10 @@ simplify_shift_const_1 (enum rtx_code co if (code == ASHIFTRT && int_mode != int_result_mode) break; + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); rtx new_rtx = simplify_const_binary_operation (code, int_mode, XEXP (varop, 0), - GEN_INT (count)); + count_rtx); varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1)); count = 0; continue; @@ -10933,7 +10935,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (code, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop), INTVAL (new_rtx), int_result_mode, @@ -11076,7 +11078,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (ASHIFT, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, PLUS, INTVAL (new_rtx), int_result_mode, @@ -11097,7 +11099,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (code, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, XOR, INTVAL (new_rtx), int_result_mode, @@ -11152,12 +11154,12 @@ simplify_shift_const_1 (enum rtx_code co - GET_MODE_UNIT_PRECISION (GET_MODE (varop))))) { rtx varop_inner = XEXP (varop, 0); - - varop_inner - = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), - XEXP (varop_inner, 0), - GEN_INT - (count + INTVAL (XEXP (varop_inner, 1)))); + int new_count = count + INTVAL (XEXP (varop_inner, 1)); + rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner), + new_count); + varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), + XEXP (varop_inner, 0), + new_count_rtx); varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner); count = 0; continue; @@ -11209,7 +11211,8 @@ simplify_shift_const_1 (enum rtx_code co x = NULL_RTX; if (x == NULL_RTX) - x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count)); + x = simplify_gen_binary (code, shift_mode, varop, + gen_int_shift_amount (shift_mode, count)); /* If we were doing an LSHIFTRT in a wider mode than it was originally, turn off all the bits that the shift would have turned off. */ @@ -11271,7 +11274,8 @@ simplify_shift_const (rtx x, enum rtx_co return tem; if (!x) - x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count)); + x = simplify_gen_binary (code, GET_MODE (varop), varop, + gen_int_shift_amount (GET_MODE (varop), count)); if (GET_MODE (x) != result_mode) x = gen_lowpart (result_mode, x); return x; @@ -11462,8 +11466,9 @@ change_zero_ext (rtx pat) if (BITS_BIG_ENDIAN) start = GET_MODE_PRECISION (inner_mode) - size - start; - if (start) - x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start)); + if (start != 0) + x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), + gen_int_shift_amount (inner_mode, start)); else x = XEXP (x, 0); if (mode != inner_mode)
Richard Sandiford <richard.sandiford@linaro.org> writes: > Richard Biener <richard.guenther@gmail.com> writes: >> On Fri, Dec 15, 2017 at 1:48 AM, Richard Sandiford >> <richard.sandiford@linaro.org> wrote: >>> Richard Biener <richard.guenther@gmail.com> writes: >>>> On Mon, Nov 20, 2017 at 10:02 PM, Richard Sandiford >>>> <richard.sandiford@linaro.org> wrote: >>>>> Richard Biener <richard.guenther@gmail.com> writes: >>>>>> On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener >>>>>> <richard.guenther@gmail.com> wrote: >>>>>>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>>>>>> <richard.sandiford@linaro.org> wrote: >>>>>>>> This patch adds a stub helper routine to provide the mode >>>>>>>> of a scalar shift amount, given the mode of the values >>>>>>>> being shifted. >>>>>>>> >>>>>>>> One long-standing problem has been to decide what this mode >>>>>>>> should be for arbitrary rtxes (as opposed to those directly >>>>>>>> tied to a target pattern). Is it the mode of the shifted >>>>>>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>>>>>> the corresponding target pattern says? (In which case what >>>>>>>> should the mode be when the target doesn't have a pattern?) >>>>>>>> >>>>>>>> For now the patch picks word_mode, which should be safe on >>>>>>>> all targets but could perhaps become suboptimal if the helper >>>>>>>> routine is used more often than it is in this patch. As it >>>>>>>> stands the patch does not change the generated code. >>>>>>>> >>>>>>>> The patch also adds a helper function that constructs rtxes >>>>>>>> for constant shift amounts, again given the mode of the value >>>>>>>> being shifted. As well as helping with the SVE patches, this >>>>>>>> is one step towards allowing CONST_INTs to have a real mode. >>>>>>> >>>>>>> I think gen_shift_amount_mode is flawed and while encapsulating >>>>>>> constant shift amount RTX generation into a gen_int_shift_amount >>>>>>> looks good to me I'd rather have that ??? in this function (and >>>>>>> I'd use the mode of the RTX shifted, not word_mode...). >>>>> >>>>> OK. I'd gone for word_mode because that's what expand_binop uses >>>>> for CONST_INTs: >>>>> >>>>> op1_mode = (GET_MODE (op1) != VOIDmode >>>>> ? as_a <scalar_int_mode> (GET_MODE (op1)) >>>>> : word_mode); >>>>> >>>>> But using the inner mode should be fine too. The patch below does that. >>>>> >>>>>>> In the end it's up to insn recognizing to convert the op to the >>>>>>> expected mode and for generic RTL it's us that should decide >>>>>>> on the mode -- on GENERIC the shift amount has to be an >>>>>>> integer so why not simply use a mode that is large enough to >>>>>>> make the constant fit? >>>>> >>>>> ...but I can do that instead if you think it's better. >>>>> >>>>>>> Just throwing in some comments here, RTL isn't my primary >>>>>>> expertise. >>>>>> >>>>>> To add a little bit - shift amounts is maybe the only(?) place >>>>>> where a modeless CONST_INT makes sense! So "fixing" >>>>>> that first sounds backwards. >>>>> >>>>> But even here they have a mode conceptually, since out-of-range shift >>>>> amounts are target-defined rather than undefined. E.g. if the target >>>>> interprets the shift amount as unsigned, then for a shift amount >>>>> (const_int -1) it matters whether the mode is QImode (and so we're >>>>> shifting by 255) or HImode (and so we're shifting by 65535. >>>> >>>> I think RTL is well-defined (at least I hope so ...) and machine constraints >>>> need to be modeled explicitely (like embedding an implicit bit_and in >>>> shift patterns). 
>>> >>> Well, RTL is well-defined in the sense that if you have >>> >>> (ashift X (foo:HI ...)) >>> >>> then the shift amount must be interpreted as HImode rather than some >>> other mode. The problem here is to define a default choice of mode for >>> const_ints, in cases where the shift is being created out of the blue. >>> >>> Whether the shift amount is effectively signed or unsigned isn't defined >>> by RTL without SHIFT_COUNT_TRUNCATED, since the choice only matters for >>> out-of-range values, and the behaviour for out-of-range RTL shifts is >>> specifically treated as target-defined without SHIFT_COUNT_TRUNCATED. >>> >>> I think the revised patch does implement your suggestion of using the >>> integer equivalent of the inner mode as the default, but we need to >>> decide whether to go with it, go with the original word_mode approach >>> (taken from existing expand_binop code) or something else. Something >>> else could include the widest supported integer mode, so that we never >>> change the value. >> >> I guess it's pretty arbitrary what we choose (but we might need to adjust >> targets?). For something like this an appealing choice would be sth >> that is host and target idependent, like [u]int32_t or given CONST_INT >> is always 64bits now and signed int64_t aka HOST_WIDE_INT (bad >> name now). That means it's the "infinite precision" thing that fits >> into CONST_INT ;) > > Sounds OK to me. How about the attached? Taking MAX_FIXED_MODE_SIZE into account was bogus, since we'd then just fail to find a mode. This version has survived the full cross-target testing. Also bootstrapped & regression-tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64-linux-gnu. OK to install? At this stage this is the patch that is holding up the rest of the approved ones. Thanks, Richard 2017-12-19 Richard Sandiford <richard.sandiford@linaro.org> Alan Hayward <alan.hayward@arm.com> David Sherwood <david.sherwood@arm.com> gcc/ * emit-rtl.h (gen_int_shift_amount): Declare. * emit-rtl.c (gen_int_shift_amount): New function. * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount instead of GEN_INT. * calls.c (shift_return_value): Likewise. * cse.c (fold_rtx): Likewise. * dse.c (find_shift_sequence): Likewise. * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) (expand_shift, expand_smod_pow2): Likewise. * lower-subreg.c (shift_cost): Likewise. * optabs.c (expand_superword_shift, expand_doubleword_mult) (expand_unop, expand_binop, shift_amt_for_vec_perm_mask) (expand_vec_perm_var): Likewise. * simplify-rtx.c (simplify_unary_operation_1): Likewise. (simplify_binary_operation_1): Likewise. * combine.c (try_combine, find_split_point, force_int_to_mode) (simplify_shift_const_1, simplify_shift_const): Likewise. (change_zero_ext): Likewise. Use simplify_gen_binary. 
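As a stand-alone sketch of the behaviour relied on above (plain C, not GCC code; gen_int_mode's sign-extension is modelled by a hypothetical sign_extend_to helper): with a 64-bit shift-amount mode the constant is never altered, whereas a narrower mode could change it.

#include <stdint.h>
#include <stdio.h>

/* Model of gen_int_mode: sign-extend VALUE to PRECISION bits, as would
   happen when wrapping it in a CONST_INT for a mode of that precision.
   Illustration only; not the GCC implementation.  */
static int64_t
sign_extend_to (int64_t value, int precision)
{
  if (precision >= 64)
    return value;
  uint64_t bits = (uint64_t) value & ((UINT64_C (1) << precision) - 1);
  uint64_t sign = UINT64_C (1) << (precision - 1);
  return (int64_t) ((bits ^ sign) - sign);
}

int
main (void)
{
  int64_t amount = 0x123456789abcdefLL;
  /* An 8-bit shift-amount mode would change the value ...  */
  printf ("%lld\n", (long long) sign_extend_to (amount, 8));   /* -17 */
  /* ... while a 64-bit mode keeps it intact.  */
  printf ("%lld\n", (long long) sign_extend_to (amount, 64));
  return 0;
}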
Index: gcc/emit-rtl.h =================================================================== --- gcc/emit-rtl.h 2017-12-16 14:23:26.068200011 +0000 +++ gcc/emit-rtl.h 2017-12-19 19:09:23.877365740 +0000 @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); extern void adjust_reg_mode (rtx, machine_mode); extern int mem_expr_equal_p (const_tree, const_tree); +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); extern bool need_atomic_barrier_p (enum memmodel, bool); Index: gcc/emit-rtl.c =================================================================== --- gcc/emit-rtl.c 2017-12-16 14:23:26.068200011 +0000 +++ gcc/emit-rtl.c 2017-12-19 19:09:23.877365740 +0000 @@ -6418,6 +6418,22 @@ need_atomic_barrier_p (enum memmodel mod } } +/* Return a constant shift amount for shifting a value of mode MODE + by VALUE bits. */ + +rtx +gen_int_shift_amount (machine_mode, HOST_WIDE_INT value) +{ + /* Use a 64-bit mode, to avoid any truncation. + + ??? Perhaps this should be automatically derived from the .md files + instead, or perhaps have a target hook. */ + scalar_int_mode shift_mode = (BITS_PER_UNIT == 8 + ? DImode + : int_mode_for_size (64, 0).require ()); + return gen_int_mode (value, shift_mode); +} + /* Initialize fields of rtl_data related to stack alignment. */ void Index: gcc/asan.c =================================================================== --- gcc/asan.c 2017-12-16 14:23:26.065200853 +0000 +++ gcc/asan.c 2017-12-19 19:09:23.867366133 +0000 @@ -1386,7 +1386,7 @@ asan_emit_stack_protection (rtx base, rt TREE_ASM_WRITTEN (id) = 1; emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); shadow_base = expand_binop (Pmode, lshr_optab, base, - GEN_INT (ASAN_SHADOW_SHIFT), + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), NULL_RTX, 1, OPTAB_DIRECT); shadow_base = plus_constant (Pmode, shadow_base, Index: gcc/calls.c =================================================================== --- gcc/calls.c 2017-12-16 14:23:26.066200572 +0000 +++ gcc/calls.c 2017-12-19 19:09:23.868366094 +0000 @@ -2900,15 +2900,17 @@ shift_return_value (machine_mode mode, b HOST_WIDE_INT shift; gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); + machine_mode value_mode = GET_MODE (value); + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); if (shift == 0) return false; /* Use ashr rather than lshr for right shifts. This is for the benefit of the MIPS port, which requires SImode values to be sign-extended when stored in 64-bit registers. */ - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : ashr_optab, - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) + if (!force_expand_binop (value_mode, left_p ? 
ashl_optab : ashr_optab, + value, gen_int_shift_amount (value_mode, shift), + value, 1, OPTAB_WIDEN)) gcc_unreachable (); return true; } Index: gcc/cse.c =================================================================== --- gcc/cse.c 2017-12-16 14:23:26.067200292 +0000 +++ gcc/cse.c 2017-12-19 19:09:23.874365858 +0000 @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (const_arg1) < 0)) { if (SHIFT_COUNT_TRUNCATED) - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + canon_const_arg1 = gen_int_shift_amount + (mode, (INTVAL (const_arg1) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (inner_const) < 0)) { if (SHIFT_COUNT_TRUNCATED) - inner_const = GEN_INT (INTVAL (inner_const) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + inner_const = gen_int_shift_amount + (mode, (INTVAL (inner_const) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) /* As an exception, we can turn an ASHIFTRT of this form into a shift of the number of bits - 1. */ if (code == ASHIFTRT) - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); + new_const = gen_int_shift_amount + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); else if (!side_effects_p (XEXP (y, 0))) return CONST0_RTX (mode); else Index: gcc/dse.c =================================================================== --- gcc/dse.c 2017-12-16 14:23:26.068200011 +0000 +++ gcc/dse.c 2017-12-19 19:09:23.875365819 +0000 @@ -1642,8 +1642,9 @@ find_shift_sequence (int access_size, store_mode, byte); if (ret && CONSTANT_P (ret)) { + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); ret = simplify_const_binary_operation (LSHIFTRT, new_mode, - ret, GEN_INT (shift)); + ret, shift_rtx); if (ret && CONSTANT_P (ret)) { byte = subreg_lowpart_offset (read_mode, new_mode); @@ -1679,7 +1680,8 @@ find_shift_sequence (int access_size, of one dsp where the cost of these two was not the same. But this really is a rare case anyway. */ target = expand_binop (new_mode, lshr_optab, new_reg, - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); + gen_int_shift_amount (new_mode, shift), + new_reg, 1, OPTAB_DIRECT); shift_seq = get_insns (); end_sequence (); Index: gcc/expmed.c =================================================================== --- gcc/expmed.c 2017-12-16 14:23:26.069199731 +0000 +++ gcc/expmed.c 2017-12-19 19:09:23.879365662 +0000 @@ -223,7 +223,8 @@ init_expmed_one_mode (struct init_expmed PUT_MODE (all->zext, wider_mode); PUT_MODE (all->wide_mult, wider_mode); PUT_MODE (all->wide_lshr, wider_mode); - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); + XEXP (all->wide_lshr, 1) + = gen_int_shift_amount (wider_mode, mode_bitsize); set_mul_widen_cost (speed, wider_mode, set_src_cost (all->wide_mult, wider_mode, speed)); @@ -910,12 +911,14 @@ store_bit_field_1 (rtx str_rtx, unsigned to make sure that for big-endian machines the higher order bits are used. 
*/ if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) - value_word = simplify_expand_binop (word_mode, lshr_optab, - value_word, - GEN_INT (BITS_PER_WORD - - new_bitsize), - NULL_RTX, true, - OPTAB_LIB_WIDEN); + { + int shift = BITS_PER_WORD - new_bitsize; + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); + value_word = simplify_expand_binop (word_mode, lshr_optab, + value_word, shift_rtx, + NULL_RTX, true, + OPTAB_LIB_WIDEN); + } if (!store_bit_field_1 (op0, new_bitsize, bitnum + bit_offset, @@ -2366,8 +2369,9 @@ expand_shift_1 (enum tree_code code, mac if (CONST_INT_P (op1) && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) - % GET_MODE_BITSIZE (scalar_mode)); + op1 = gen_int_shift_amount (mode, + (unsigned HOST_WIDE_INT) INTVAL (op1) + % GET_MODE_BITSIZE (scalar_mode)); else if (GET_CODE (op1) == SUBREG && subreg_lowpart_p (op1) && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) @@ -2384,7 +2388,8 @@ expand_shift_1 (enum tree_code code, mac && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, GET_MODE_BITSIZE (scalar_mode) - 1)) { - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) + - INTVAL (op1))); left = !left; code = left ? LROTATE_EXPR : RROTATE_EXPR; } @@ -2464,8 +2469,8 @@ expand_shift_1 (enum tree_code code, mac if (op1 == const0_rtx) return shifted; else if (CONST_INT_P (op1)) - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - - INTVAL (op1)); + other_amount = gen_int_shift_amount + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); else { other_amount @@ -2538,8 +2543,9 @@ expand_shift_1 (enum tree_code code, mac expand_shift (enum tree_code code, machine_mode mode, rtx shifted, int amount, rtx target, int unsignedp) { - return expand_shift_1 (code, mode, - shifted, GEN_INT (amount), target, unsignedp); + return expand_shift_1 (code, mode, shifted, + gen_int_shift_amount (mode, amount), + target, unsignedp); } /* Likewise, but return 0 if that cannot be done. */ @@ -3857,7 +3863,7 @@ expand_smod_pow2 (scalar_int_mode mode, { HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; signmask = force_reg (mode, signmask); - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); /* Use the rtx_cost of a LSHIFTRT instruction to determine which instruction sequence to use. 
If logical right shifts Index: gcc/lower-subreg.c =================================================================== --- gcc/lower-subreg.c 2017-12-16 14:23:26.069199731 +0000 +++ gcc/lower-subreg.c 2017-12-19 19:09:23.879365662 +0000 @@ -141,7 +141,7 @@ shift_cost (bool speed_p, struct cost_rt PUT_CODE (rtxes->shift, code); PUT_MODE (rtxes->shift, mode); PUT_MODE (rtxes->source, mode); - XEXP (rtxes->shift, 1) = GEN_INT (op1); + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); return set_src_cost (rtxes->shift, mode, speed_p); } Index: gcc/optabs.c =================================================================== --- gcc/optabs.c 2017-12-16 14:23:26.070199450 +0000 +++ gcc/optabs.c 2017-12-19 19:09:23.882365544 +0000 @@ -431,8 +431,9 @@ expand_superword_shift (optab binoptab, if (binoptab != ashr_optab) emit_move_insn (outof_target, CONST0_RTX (word_mode)); else - if (!force_expand_binop (word_mode, binoptab, - outof_input, GEN_INT (BITS_PER_WORD - 1), + if (!force_expand_binop (word_mode, binoptab, outof_input, + gen_int_shift_amount (word_mode, + BITS_PER_WORD - 1), outof_target, unsignedp, methods)) return false; } @@ -789,7 +790,8 @@ expand_doubleword_mult (machine_mode mod { int low = (WORDS_BIG_ENDIAN ? 1 : 0); int high = (WORDS_BIG_ENDIAN ? 0 : 1); - rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1); + rtx wordm1 = (umulp ? NULL_RTX + : gen_int_shift_amount (word_mode, BITS_PER_WORD - 1)); rtx product, adjust, product_high, temp; rtx op0_high = operand_subword_force (op0, high, mode); @@ -1190,7 +1192,7 @@ expand_binop (machine_mode mode, optab b unsigned int bits = GET_MODE_PRECISION (int_mode); if (CONST_INT_P (op1)) - newop1 = GEN_INT (bits - INTVAL (op1)); + newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1)); else if (targetm.shift_truncation_mask (int_mode) == bits - 1) newop1 = negate_rtx (GET_MODE (op1), op1); else @@ -1412,7 +1414,7 @@ expand_binop (machine_mode mode, optab b /* Apply the truncation to constant shifts. 
*/ if (double_shift_mask > 0 && CONST_INT_P (op1)) - op1 = GEN_INT (INTVAL (op1) & double_shift_mask); + op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode); if (op1 == CONST0_RTX (op1_mode)) return op0; @@ -1522,7 +1524,7 @@ expand_binop (machine_mode mode, optab b else { rtx into_temp1, into_temp2, outof_temp1, outof_temp2; - rtx first_shift_count, second_shift_count; + HOST_WIDE_INT first_shift_count, second_shift_count; optab reverse_unsigned_shift, unsigned_shift; reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD) @@ -1533,20 +1535,24 @@ expand_binop (machine_mode mode, optab b if (shift_count > BITS_PER_WORD) { - first_shift_count = GEN_INT (shift_count - BITS_PER_WORD); - second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count); + first_shift_count = shift_count - BITS_PER_WORD; + second_shift_count = 2 * BITS_PER_WORD - shift_count; } else { - first_shift_count = GEN_INT (BITS_PER_WORD - shift_count); - second_shift_count = GEN_INT (shift_count); + first_shift_count = BITS_PER_WORD - shift_count; + second_shift_count = shift_count; } + rtx first_shift_count_rtx + = gen_int_shift_amount (word_mode, first_shift_count); + rtx second_shift_count_rtx + = gen_int_shift_amount (word_mode, second_shift_count); into_temp1 = expand_binop (word_mode, unsigned_shift, - outof_input, first_shift_count, + outof_input, first_shift_count_rtx, NULL_RTX, unsignedp, next_methods); into_temp2 = expand_binop (word_mode, reverse_unsigned_shift, - into_input, second_shift_count, + into_input, second_shift_count_rtx, NULL_RTX, unsignedp, next_methods); if (into_temp1 != 0 && into_temp2 != 0) @@ -1559,10 +1565,10 @@ expand_binop (machine_mode mode, optab b emit_move_insn (into_target, inter); outof_temp1 = expand_binop (word_mode, unsigned_shift, - into_input, first_shift_count, + into_input, first_shift_count_rtx, NULL_RTX, unsignedp, next_methods); outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift, - outof_input, second_shift_count, + outof_input, second_shift_count_rtx, NULL_RTX, unsignedp, next_methods); if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0) @@ -2802,25 +2808,29 @@ expand_unop (machine_mode mode, optab un if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing) { - temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target, - unsignedp, OPTAB_DIRECT); + temp = expand_binop (mode, rotl_optab, op0, + gen_int_shift_amount (mode, 8), + target, unsignedp, OPTAB_DIRECT); if (temp) return temp; } if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing) { - temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target, - unsignedp, OPTAB_DIRECT); + temp = expand_binop (mode, rotr_optab, op0, + gen_int_shift_amount (mode, 8), + target, unsignedp, OPTAB_DIRECT); if (temp) return temp; } last = get_last_insn (); - temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX, + temp1 = expand_binop (mode, ashl_optab, op0, + gen_int_shift_amount (mode, 8), NULL_RTX, unsignedp, OPTAB_WIDEN); - temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX, + temp2 = expand_binop (mode, lshr_optab, op0, + gen_int_shift_amount (mode, 8), NULL_RTX, unsignedp, OPTAB_WIDEN); if (temp1 && temp2) { @@ -5402,7 +5412,7 @@ shift_amt_for_vec_perm_mask (rtx sel) return NULL_RTX; } - return GEN_INT (first * bitsize); + return gen_int_shift_amount (GET_MODE (sel), first * bitsize); } /* A subroutine of expand_vec_perm for expanding one vec_perm insn. 
*/ @@ -5572,7 +5582,8 @@ expand_vec_perm (machine_mode mode, rtx NULL, 0, OPTAB_DIRECT); else sel = expand_simple_binop (selmode, ASHIFT, sel, - GEN_INT (exact_log2 (u)), + gen_int_shift_amount (selmode, + exact_log2 (u)), NULL, 0, OPTAB_DIRECT); gcc_assert (sel != NULL); Index: gcc/simplify-rtx.c =================================================================== --- gcc/simplify-rtx.c 2017-12-16 14:23:26.070199450 +0000 +++ gcc/simplify-rtx.c 2017-12-19 19:09:23.884365465 +0000 @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod if (STORE_FLAG_VALUE == 1) { temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod else if (STORE_FLAG_VALUE == -1) { temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -2672,7 +2674,8 @@ simplify_binary_operation_1 (enum rtx_co { val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); if (val >= 0) - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (ASHIFT, mode, op0, + gen_int_shift_amount (mode, val)); } /* x*2 is x+x and x*(-1) is -x */ @@ -3296,7 +3299,8 @@ simplify_binary_operation_1 (enum rtx_co /* Convert divide by power of two into shift. */ if (CONST_INT_P (trueop1) && (val = exact_log2 (UINTVAL (trueop1))) > 0) - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (LSHIFTRT, mode, op0, + gen_int_shift_amount (mode, val)); break; case DIV: @@ -3416,10 +3420,12 @@ simplify_binary_operation_1 (enum rtx_co && IN_RANGE (INTVAL (trueop1), GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), GET_MODE_UNIT_PRECISION (mode) - 1)) - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, - mode, op0, - GEN_INT (GET_MODE_UNIT_PRECISION (mode) - - INTVAL (trueop1))); + { + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); + return simplify_gen_binary (code == ROTATE ? 
ROTATERT : ROTATE, + mode, op0, new_amount_rtx); + } #endif /* FALLTHRU */ case ASHIFTRT: @@ -3460,8 +3466,8 @@ simplify_binary_operation_1 (enum rtx_co == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode)) && subreg_lowpart_p (op0)) { - rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1)) - + INTVAL (op1)); + rtx tmp = gen_int_shift_amount + (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1)); tmp = simplify_gen_binary (code, inner_mode, XEXP (SUBREG_REG (op0), 0), tmp); @@ -3472,7 +3478,8 @@ simplify_binary_operation_1 (enum rtx_co { val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1); if (val != INTVAL (op1)) - return simplify_gen_binary (code, mode, op0, GEN_INT (val)); + return simplify_gen_binary (code, mode, op0, + gen_int_shift_amount (mode, val)); } break; Index: gcc/combine.c =================================================================== --- gcc/combine.c 2017-12-16 14:23:26.067200292 +0000 +++ gcc/combine.c 2017-12-19 19:09:23.873365897 +0000 @@ -3804,8 +3804,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2, && INTVAL (XEXP (*split, 1)) > 0 && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0) { + rtx i_rtx = gen_int_shift_amount (split_mode, i); SUBST (*split, gen_rtx_ASHIFT (split_mode, - XEXP (*split, 0), GEN_INT (i))); + XEXP (*split, 0), i_rtx)); /* Update split_code because we may not have a multiply anymore. */ split_code = GET_CODE (*split); @@ -3819,8 +3820,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2, && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0) { rtx nsplit = XEXP (*split, 0); + rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i); SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit), - XEXP (nsplit, 0), GEN_INT (i))); + XEXP (nsplit, 0), + i_rtx)); /* Update split_code because we may not have a multiply anymore. */ split_code = GET_CODE (*split); @@ -5088,12 +5091,12 @@ find_split_point (rtx *loc, rtx_insn *in GET_MODE (XEXP (SET_SRC (x), 0)))))) { machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0)); - + rtx pos_rtx = gen_int_shift_amount (mode, pos); SUBST (SET_SRC (x), gen_rtx_NEG (mode, gen_rtx_LSHIFTRT (mode, XEXP (SET_SRC (x), 0), - GEN_INT (pos)))); + pos_rtx))); split = find_split_point (&SET_SRC (x), insn, true); if (split && split != &SET_SRC (x)) @@ -5151,11 +5154,11 @@ find_split_point (rtx *loc, rtx_insn *in { unsigned HOST_WIDE_INT mask = (HOST_WIDE_INT_1U << len) - 1; + rtx pos_rtx = gen_int_shift_amount (mode, pos); SUBST (SET_SRC (x), gen_rtx_AND (mode, gen_rtx_LSHIFTRT - (mode, gen_lowpart (mode, inner), - GEN_INT (pos)), + (mode, gen_lowpart (mode, inner), pos_rtx), gen_int_mode (mask, mode))); split = find_split_point (&SET_SRC (x), insn, true); @@ -5164,14 +5167,15 @@ find_split_point (rtx *loc, rtx_insn *in } else { + int left_bits = GET_MODE_PRECISION (mode) - len - pos; + int right_bits = GET_MODE_PRECISION (mode) - len; SUBST (SET_SRC (x), gen_rtx_fmt_ee (unsignedp ? LSHIFTRT : ASHIFTRT, mode, gen_rtx_ASHIFT (mode, gen_lowpart (mode, inner), - GEN_INT (GET_MODE_PRECISION (mode) - - len - pos)), - GEN_INT (GET_MODE_PRECISION (mode) - len))); + gen_int_shift_amount (mode, left_bits)), + gen_int_shift_amount (mode, right_bits))); split = find_split_point (&SET_SRC (x), insn, true); if (split && split != &SET_SRC (x)) @@ -8952,10 +8956,11 @@ force_int_to_mode (rtx x, scalar_int_mod /* Must be more sign bit copies than the mask needs. 
*/ && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0))) >= exact_log2 (mask + 1))) - x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), - GEN_INT (GET_MODE_PRECISION (xmode) - - exact_log2 (mask + 1))); - + { + int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1); + x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0), + gen_int_shift_amount (xmode, nbits)); + } goto shiftrt; case ASHIFTRT: @@ -10448,7 +10453,7 @@ simplify_shift_const_1 (enum rtx_code co { enum rtx_code orig_code = code; rtx orig_varop = varop; - int count; + int count, log2; machine_mode mode = result_mode; machine_mode shift_mode; scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode; @@ -10651,13 +10656,11 @@ simplify_shift_const_1 (enum rtx_code co is cheaper. But it is still better on those machines to merge two shifts into one. */ if (CONST_INT_P (XEXP (varop, 1)) - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) { - varop - = simplify_gen_binary (ASHIFT, GET_MODE (varop), - XEXP (varop, 0), - GEN_INT (exact_log2 ( - UINTVAL (XEXP (varop, 1))))); + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); + varop = simplify_gen_binary (ASHIFT, GET_MODE (varop), + XEXP (varop, 0), log2_rtx); continue; } break; @@ -10665,13 +10668,11 @@ simplify_shift_const_1 (enum rtx_code co case UDIV: /* Similar, for when divides are cheaper. */ if (CONST_INT_P (XEXP (varop, 1)) - && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0) + && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0) { - varop - = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), - XEXP (varop, 0), - GEN_INT (exact_log2 ( - UINTVAL (XEXP (varop, 1))))); + rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2); + varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop), + XEXP (varop, 0), log2_rtx); continue; } break; @@ -10806,10 +10807,10 @@ simplify_shift_const_1 (enum rtx_code co mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode), int_result_mode); - + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); mask_rtx = simplify_const_binary_operation (code, int_result_mode, - mask_rtx, GEN_INT (count)); + mask_rtx, count_rtx); /* Give up if we can't compute an outer operation to use. 
*/ if (mask_rtx == 0 @@ -10865,9 +10866,10 @@ simplify_shift_const_1 (enum rtx_code co if (code == ASHIFTRT && int_mode != int_result_mode) break; + rtx count_rtx = gen_int_shift_amount (int_result_mode, count); rtx new_rtx = simplify_const_binary_operation (code, int_mode, XEXP (varop, 0), - GEN_INT (count)); + count_rtx); varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1)); count = 0; continue; @@ -10933,7 +10935,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (code, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop), INTVAL (new_rtx), int_result_mode, @@ -11076,7 +11078,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (ASHIFT, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, PLUS, INTVAL (new_rtx), int_result_mode, @@ -11097,7 +11099,7 @@ simplify_shift_const_1 (enum rtx_code co && (new_rtx = simplify_const_binary_operation (code, int_result_mode, gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode), - GEN_INT (count))) != 0 + gen_int_shift_amount (int_result_mode, count))) != 0 && CONST_INT_P (new_rtx) && merge_outer_ops (&outer_op, &outer_const, XOR, INTVAL (new_rtx), int_result_mode, @@ -11152,12 +11154,12 @@ simplify_shift_const_1 (enum rtx_code co - GET_MODE_UNIT_PRECISION (GET_MODE (varop))))) { rtx varop_inner = XEXP (varop, 0); - - varop_inner - = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), - XEXP (varop_inner, 0), - GEN_INT - (count + INTVAL (XEXP (varop_inner, 1)))); + int new_count = count + INTVAL (XEXP (varop_inner, 1)); + rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner), + new_count); + varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner), + XEXP (varop_inner, 0), + new_count_rtx); varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner); count = 0; continue; @@ -11209,7 +11211,8 @@ simplify_shift_const_1 (enum rtx_code co x = NULL_RTX; if (x == NULL_RTX) - x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count)); + x = simplify_gen_binary (code, shift_mode, varop, + gen_int_shift_amount (shift_mode, count)); /* If we were doing an LSHIFTRT in a wider mode than it was originally, turn off all the bits that the shift would have turned off. */ @@ -11271,7 +11274,8 @@ simplify_shift_const (rtx x, enum rtx_co return tem; if (!x) - x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count)); + x = simplify_gen_binary (code, GET_MODE (varop), varop, + gen_int_shift_amount (GET_MODE (varop), count)); if (GET_MODE (x) != result_mode) x = gen_lowpart (result_mode, x); return x; @@ -11462,8 +11466,9 @@ change_zero_ext (rtx pat) if (BITS_BIG_ENDIAN) start = GET_MODE_PRECISION (inner_mode) - size - start; - if (start) - x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start)); + if (start != 0) + x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), + gen_int_shift_amount (inner_mode, start)); else x = XEXP (x, 0); if (mode != inner_mode)
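As an aside on the cse.c hunk in the patch above: when SHIFT_COUNT_TRUNCATED is set, an out-of-range or negative count is first reduced by masking with the unit bitsize minus one, and only then wrapped by gen_int_shift_amount. The following is a minimal stand-alone illustration of that masking step, using a toy truncate_shift_count helper in place of the inline expression; it is not GCC code.

#include <assert.h>
#include <stdint.h>

/* Toy version of the SHIFT_COUNT_TRUNCATED canonicalisation:
   reduce COUNT modulo BITSIZE, where BITSIZE stands in for
   GET_MODE_UNIT_BITSIZE (mode) and is a power of two.  */
static int64_t
truncate_shift_count (int64_t count, unsigned int bitsize)
{
  return count & (int64_t) (bitsize - 1);
}

int
main (void)
{
  /* A count of 33 on a 32-bit value acts like a count of 1 ...  */
  assert (truncate_shift_count (33, 32) == 1);
  /* ... and a count of -1 becomes 31.  */
  assert (truncate_shift_count (-1, 32) == 31);
  return 0;
}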
On 12/19/2017 12:13 PM, Richard Sandiford wrote: > Richard Sandiford <richard.sandiford@linaro.org> writes: >> Richard Biener <richard.guenther@gmail.com> writes: >>> On Fri, Dec 15, 2017 at 1:48 AM, Richard Sandiford >>> <richard.sandiford@linaro.org> wrote: >>>> Richard Biener <richard.guenther@gmail.com> writes: >>>>> On Mon, Nov 20, 2017 at 10:02 PM, Richard Sandiford >>>>> <richard.sandiford@linaro.org> wrote: >>>>>> Richard Biener <richard.guenther@gmail.com> writes: >>>>>>> On Thu, Oct 26, 2017 at 2:06 PM, Richard Biener >>>>>>> <richard.guenther@gmail.com> wrote: >>>>>>>> On Mon, Oct 23, 2017 at 1:25 PM, Richard Sandiford >>>>>>>> <richard.sandiford@linaro.org> wrote: >>>>>>>>> This patch adds a stub helper routine to provide the mode >>>>>>>>> of a scalar shift amount, given the mode of the values >>>>>>>>> being shifted. >>>>>>>>> >>>>>>>>> One long-standing problem has been to decide what this mode >>>>>>>>> should be for arbitrary rtxes (as opposed to those directly >>>>>>>>> tied to a target pattern). Is it the mode of the shifted >>>>>>>>> elements? Is it word_mode? Or maybe QImode? Is it whatever >>>>>>>>> the corresponding target pattern says? (In which case what >>>>>>>>> should the mode be when the target doesn't have a pattern?) >>>>>>>>> >>>>>>>>> For now the patch picks word_mode, which should be safe on >>>>>>>>> all targets but could perhaps become suboptimal if the helper >>>>>>>>> routine is used more often than it is in this patch. As it >>>>>>>>> stands the patch does not change the generated code. >>>>>>>>> >>>>>>>>> The patch also adds a helper function that constructs rtxes >>>>>>>>> for constant shift amounts, again given the mode of the value >>>>>>>>> being shifted. As well as helping with the SVE patches, this >>>>>>>>> is one step towards allowing CONST_INTs to have a real mode. >>>>>>>> >>>>>>>> I think gen_shift_amount_mode is flawed and while encapsulating >>>>>>>> constant shift amount RTX generation into a gen_int_shift_amount >>>>>>>> looks good to me I'd rather have that ??? in this function (and >>>>>>>> I'd use the mode of the RTX shifted, not word_mode...). >>>>>> >>>>>> OK. I'd gone for word_mode because that's what expand_binop uses >>>>>> for CONST_INTs: >>>>>> >>>>>> op1_mode = (GET_MODE (op1) != VOIDmode >>>>>> ? as_a <scalar_int_mode> (GET_MODE (op1)) >>>>>> : word_mode); >>>>>> >>>>>> But using the inner mode should be fine too. The patch below does that. >>>>>> >>>>>>>> In the end it's up to insn recognizing to convert the op to the >>>>>>>> expected mode and for generic RTL it's us that should decide >>>>>>>> on the mode -- on GENERIC the shift amount has to be an >>>>>>>> integer so why not simply use a mode that is large enough to >>>>>>>> make the constant fit? >>>>>> >>>>>> ...but I can do that instead if you think it's better. >>>>>> >>>>>>>> Just throwing in some comments here, RTL isn't my primary >>>>>>>> expertise. >>>>>>> >>>>>>> To add a little bit - shift amounts is maybe the only(?) place >>>>>>> where a modeless CONST_INT makes sense! So "fixing" >>>>>>> that first sounds backwards. >>>>>> >>>>>> But even here they have a mode conceptually, since out-of-range shift >>>>>> amounts are target-defined rather than undefined. E.g. if the target >>>>>> interprets the shift amount as unsigned, then for a shift amount >>>>>> (const_int -1) it matters whether the mode is QImode (and so we're >>>>>> shifting by 255) or HImode (and so we're shifting by 65535. >>>>> >>>>> I think RTL is well-defined (at least I hope so ...) 
and machine constraints >>>>> need to be modeled explicitely (like embedding an implicit bit_and in >>>>> shift patterns). >>>> >>>> Well, RTL is well-defined in the sense that if you have >>>> >>>> (ashift X (foo:HI ...)) >>>> >>>> then the shift amount must be interpreted as HImode rather than some >>>> other mode. The problem here is to define a default choice of mode for >>>> const_ints, in cases where the shift is being created out of the blue. >>>> >>>> Whether the shift amount is effectively signed or unsigned isn't defined >>>> by RTL without SHIFT_COUNT_TRUNCATED, since the choice only matters for >>>> out-of-range values, and the behaviour for out-of-range RTL shifts is >>>> specifically treated as target-defined without SHIFT_COUNT_TRUNCATED. >>>> >>>> I think the revised patch does implement your suggestion of using the >>>> integer equivalent of the inner mode as the default, but we need to >>>> decide whether to go with it, go with the original word_mode approach >>>> (taken from existing expand_binop code) or something else. Something >>>> else could include the widest supported integer mode, so that we never >>>> change the value. >>> >>> I guess it's pretty arbitrary what we choose (but we might need to adjust >>> targets?). For something like this an appealing choice would be sth >>> that is host and target idependent, like [u]int32_t or given CONST_INT >>> is always 64bits now and signed int64_t aka HOST_WIDE_INT (bad >>> name now). That means it's the "infinite precision" thing that fits >>> into CONST_INT ;) >> >> Sounds OK to me. How about the attached? > > Taking MAX_FIXED_MODE_SIZE into account was bogus, since we'd then just > fail to find a mode. This version has survived the full cross-target > testing. Also bootstrapped & regression-tested on aarch64-linux-gnu, > x86_64-linux-gnu and powerpc64-linux-gnu. OK to install? > > At this stage this is the patch that is holding up the rest of the > approved ones. > > Thanks, > Richard > > > 2017-12-19 Richard Sandiford <richard.sandiford@linaro.org> > Alan Hayward <alan.hayward@arm.com> > David Sherwood <david.sherwood@arm.com> > > gcc/ > * emit-rtl.h (gen_int_shift_amount): Declare. > * emit-rtl.c (gen_int_shift_amount): New function. > * asan.c (asan_emit_stack_protection): Use gen_int_shift_amount > instead of GEN_INT. > * calls.c (shift_return_value): Likewise. > * cse.c (fold_rtx): Likewise. > * dse.c (find_shift_sequence): Likewise. > * expmed.c (init_expmed_one_mode, store_bit_field_1, expand_shift_1) > (expand_shift, expand_smod_pow2): Likewise. > * lower-subreg.c (shift_cost): Likewise. > * optabs.c (expand_superword_shift, expand_doubleword_mult) > (expand_unop, expand_binop, shift_amt_for_vec_perm_mask) > (expand_vec_perm_var): Likewise. > * simplify-rtx.c (simplify_unary_operation_1): Likewise. > (simplify_binary_operation_1): Likewise. > * combine.c (try_combine, find_split_point, force_int_to_mode) > (simplify_shift_const_1, simplify_shift_const): Likewise. > (change_zero_ext): Likewise. Use simplify_gen_binary. > OK. jeff
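To make the point about (const_int -1) in the quoted discussion concrete, here is a minimal stand-alone example (plain C, not GCC code), assuming the target reads the amount as unsigned: the same constant means a shift by 255 when the amount is treated as QImode-sized but 65535 when treated as HImode-sized, which is why the amount has a mode conceptually even without SHIFT_COUNT_TRUNCATED.

#include <stdint.h>
#include <stdio.h>

int
main (void)
{
  int64_t amount = -1;  /* the value carried by (const_int -1) */

  /* Interpreted as an unsigned QImode quantity ...  */
  printf ("QImode view: shift by %u\n", (unsigned int) (uint8_t) amount);
  /* ... or as an unsigned HImode quantity.  */
  printf ("HImode view: shift by %u\n", (unsigned int) (uint16_t) amount);
  return 0;
}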
Index: gcc/target.h =================================================================== --- gcc/target.h 2017-10-23 11:47:06.643477568 +0100 +++ gcc/target.h 2017-10-23 11:47:11.277288162 +0100 @@ -209,6 +209,17 @@ #define HOOKSTRUCT(FRAGMENT) FRAGMENT extern struct gcc_target targetm; +/* Return the mode that should be used to hold a scalar shift amount + when shifting values of the given mode. */ +/* ??? This could in principle be generated automatically from the .md + shift patterns, but for now word_mode should be universally OK. */ + +inline scalar_int_mode +get_shift_amount_mode (machine_mode) +{ + return word_mode; +} + #ifdef GCC_TM_H #ifndef CUMULATIVE_ARGS_MAGIC Index: gcc/emit-rtl.h =================================================================== --- gcc/emit-rtl.h 2017-10-23 11:47:06.643477568 +0100 +++ gcc/emit-rtl.h 2017-10-23 11:47:11.274393237 +0100 @@ -369,6 +369,7 @@ extern void set_reg_attrs_for_parm (rtx, extern void set_reg_attrs_for_decl_rtl (tree t, rtx x); extern void adjust_reg_mode (rtx, machine_mode); extern int mem_expr_equal_p (const_tree, const_tree); +extern rtx gen_int_shift_amount (machine_mode, HOST_WIDE_INT); extern bool need_atomic_barrier_p (enum memmodel, bool); Index: gcc/emit-rtl.c =================================================================== --- gcc/emit-rtl.c 2017-10-23 11:47:06.643477568 +0100 +++ gcc/emit-rtl.c 2017-10-23 11:47:11.273428262 +0100 @@ -6478,6 +6478,15 @@ need_atomic_barrier_p (enum memmodel mod } } +/* Return a constant shift amount for shifting a value of mode MODE + by VALUE bits. */ + +rtx +gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value) +{ + return gen_int_mode (value, get_shift_amount_mode (mode)); +} + /* Initialize fields of rtl_data related to stack alignment. */ void Index: gcc/asan.c =================================================================== --- gcc/asan.c 2017-10-23 11:47:06.643477568 +0100 +++ gcc/asan.c 2017-10-23 11:47:11.270533336 +0100 @@ -1388,7 +1388,7 @@ asan_emit_stack_protection (rtx base, rt TREE_ASM_WRITTEN (id) = 1; emit_move_insn (mem, expand_normal (build_fold_addr_expr (decl))); shadow_base = expand_binop (Pmode, lshr_optab, base, - GEN_INT (ASAN_SHADOW_SHIFT), + gen_int_shift_amount (Pmode, ASAN_SHADOW_SHIFT), NULL_RTX, 1, OPTAB_DIRECT); shadow_base = plus_constant (Pmode, shadow_base, Index: gcc/calls.c =================================================================== --- gcc/calls.c 2017-10-23 11:47:06.643477568 +0100 +++ gcc/calls.c 2017-10-23 11:47:11.270533336 +0100 @@ -2749,15 +2749,17 @@ shift_return_value (machine_mode mode, b HOST_WIDE_INT shift; gcc_assert (REG_P (value) && HARD_REGISTER_P (value)); - shift = GET_MODE_BITSIZE (GET_MODE (value)) - GET_MODE_BITSIZE (mode); + machine_mode value_mode = GET_MODE (value); + shift = GET_MODE_BITSIZE (value_mode) - GET_MODE_BITSIZE (mode); if (shift == 0) return false; /* Use ashr rather than lshr for right shifts. This is for the benefit of the MIPS port, which requires SImode values to be sign-extended when stored in 64-bit registers. */ - if (!force_expand_binop (GET_MODE (value), left_p ? ashl_optab : ashr_optab, - value, GEN_INT (shift), value, 1, OPTAB_WIDEN)) + if (!force_expand_binop (value_mode, left_p ? 
ashl_optab : ashr_optab, + value, gen_int_shift_amount (value_mode, shift), + value, 1, OPTAB_WIDEN)) gcc_unreachable (); return true; } Index: gcc/cse.c =================================================================== --- gcc/cse.c 2017-10-23 11:47:03.707058235 +0100 +++ gcc/cse.c 2017-10-23 11:47:11.273428262 +0100 @@ -3611,9 +3611,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (const_arg1) < 0)) { if (SHIFT_COUNT_TRUNCATED) - canon_const_arg1 = GEN_INT (INTVAL (const_arg1) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + canon_const_arg1 = gen_int_shift_amount + (mode, (INTVAL (const_arg1) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3660,9 +3660,9 @@ fold_rtx (rtx x, rtx_insn *insn) || INTVAL (inner_const) < 0)) { if (SHIFT_COUNT_TRUNCATED) - inner_const = GEN_INT (INTVAL (inner_const) - & (GET_MODE_UNIT_BITSIZE (mode) - - 1)); + inner_const = gen_int_shift_amount + (mode, (INTVAL (inner_const) + & (GET_MODE_UNIT_BITSIZE (mode) - 1))); else break; } @@ -3692,7 +3692,8 @@ fold_rtx (rtx x, rtx_insn *insn) /* As an exception, we can turn an ASHIFTRT of this form into a shift of the number of bits - 1. */ if (code == ASHIFTRT) - new_const = GEN_INT (GET_MODE_UNIT_BITSIZE (mode) - 1); + new_const = gen_int_shift_amount + (mode, GET_MODE_UNIT_BITSIZE (mode) - 1); else if (!side_effects_p (XEXP (y, 0))) return CONST0_RTX (mode); else Index: gcc/dse.c =================================================================== --- gcc/dse.c 2017-10-23 11:47:06.643477568 +0100 +++ gcc/dse.c 2017-10-23 11:47:11.273428262 +0100 @@ -1605,8 +1605,9 @@ find_shift_sequence (int access_size, store_mode, byte); if (ret && CONSTANT_P (ret)) { + rtx shift_rtx = gen_int_shift_amount (new_mode, shift); ret = simplify_const_binary_operation (LSHIFTRT, new_mode, - ret, GEN_INT (shift)); + ret, shift_rtx); if (ret && CONSTANT_P (ret)) { byte = subreg_lowpart_offset (read_mode, new_mode); @@ -1642,7 +1643,8 @@ find_shift_sequence (int access_size, of one dsp where the cost of these two was not the same. But this really is a rare case anyway. */ target = expand_binop (new_mode, lshr_optab, new_reg, - GEN_INT (shift), new_reg, 1, OPTAB_DIRECT); + gen_int_shift_amount (new_mode, shift), + new_reg, 1, OPTAB_DIRECT); shift_seq = get_insns (); end_sequence (); Index: gcc/expmed.c =================================================================== --- gcc/expmed.c 2017-10-23 11:47:06.643477568 +0100 +++ gcc/expmed.c 2017-10-23 11:47:11.274393237 +0100 @@ -222,7 +222,8 @@ init_expmed_one_mode (struct init_expmed PUT_MODE (all->zext, wider_mode); PUT_MODE (all->wide_mult, wider_mode); PUT_MODE (all->wide_lshr, wider_mode); - XEXP (all->wide_lshr, 1) = GEN_INT (mode_bitsize); + XEXP (all->wide_lshr, 1) + = gen_int_shift_amount (wider_mode, mode_bitsize); set_mul_widen_cost (speed, wider_mode, set_src_cost (all->wide_mult, wider_mode, speed)); @@ -908,12 +909,14 @@ store_bit_field_1 (rtx str_rtx, unsigned to make sure that for big-endian machines the higher order bits are used. 
*/ if (new_bitsize < BITS_PER_WORD && BYTES_BIG_ENDIAN && !backwards) - value_word = simplify_expand_binop (word_mode, lshr_optab, - value_word, - GEN_INT (BITS_PER_WORD - - new_bitsize), - NULL_RTX, true, - OPTAB_LIB_WIDEN); + { + int shift = BITS_PER_WORD - new_bitsize; + rtx shift_rtx = gen_int_shift_amount (word_mode, shift); + value_word = simplify_expand_binop (word_mode, lshr_optab, + value_word, shift_rtx, + NULL_RTX, true, + OPTAB_LIB_WIDEN); + } if (!store_bit_field_1 (op0, new_bitsize, bitnum + bit_offset, @@ -2366,8 +2369,9 @@ expand_shift_1 (enum tree_code code, mac if (CONST_INT_P (op1) && ((unsigned HOST_WIDE_INT) INTVAL (op1) >= (unsigned HOST_WIDE_INT) GET_MODE_BITSIZE (scalar_mode))) - op1 = GEN_INT ((unsigned HOST_WIDE_INT) INTVAL (op1) - % GET_MODE_BITSIZE (scalar_mode)); + op1 = gen_int_shift_amount (mode, + (unsigned HOST_WIDE_INT) INTVAL (op1) + % GET_MODE_BITSIZE (scalar_mode)); else if (GET_CODE (op1) == SUBREG && subreg_lowpart_p (op1) && SCALAR_INT_MODE_P (GET_MODE (SUBREG_REG (op1))) @@ -2384,7 +2388,8 @@ expand_shift_1 (enum tree_code code, mac && IN_RANGE (INTVAL (op1), GET_MODE_BITSIZE (scalar_mode) / 2 + left, GET_MODE_BITSIZE (scalar_mode) - 1)) { - op1 = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); + op1 = gen_int_shift_amount (mode, (GET_MODE_BITSIZE (scalar_mode) + - INTVAL (op1))); left = !left; code = left ? LROTATE_EXPR : RROTATE_EXPR; } @@ -2464,8 +2469,8 @@ expand_shift_1 (enum tree_code code, mac if (op1 == const0_rtx) return shifted; else if (CONST_INT_P (op1)) - other_amount = GEN_INT (GET_MODE_BITSIZE (scalar_mode) - - INTVAL (op1)); + other_amount = gen_int_shift_amount + (mode, GET_MODE_BITSIZE (scalar_mode) - INTVAL (op1)); else { other_amount @@ -2538,8 +2543,9 @@ expand_shift_1 (enum tree_code code, mac expand_shift (enum tree_code code, machine_mode mode, rtx shifted, int amount, rtx target, int unsignedp) { - return expand_shift_1 (code, mode, - shifted, GEN_INT (amount), target, unsignedp); + return expand_shift_1 (code, mode, shifted, + gen_int_shift_amount (mode, amount), + target, unsignedp); } /* Likewise, but return 0 if that cannot be done. */ @@ -3855,7 +3861,7 @@ expand_smod_pow2 (scalar_int_mode mode, { HOST_WIDE_INT masklow = (HOST_WIDE_INT_1 << logd) - 1; signmask = force_reg (mode, signmask); - shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); + shift = gen_int_shift_amount (mode, GET_MODE_BITSIZE (mode) - logd); /* Use the rtx_cost of a LSHIFTRT instruction to determine which instruction sequence to use. 
If logical right shifts Index: gcc/lower-subreg.c =================================================================== --- gcc/lower-subreg.c 2017-10-23 11:47:06.643477568 +0100 +++ gcc/lower-subreg.c 2017-10-23 11:47:11.274393237 +0100 @@ -129,7 +129,7 @@ shift_cost (bool speed_p, struct cost_rt PUT_CODE (rtxes->shift, code); PUT_MODE (rtxes->shift, mode); PUT_MODE (rtxes->source, mode); - XEXP (rtxes->shift, 1) = GEN_INT (op1); + XEXP (rtxes->shift, 1) = gen_int_shift_amount (mode, op1); return set_src_cost (rtxes->shift, mode, speed_p); } Index: gcc/simplify-rtx.c =================================================================== --- gcc/simplify-rtx.c 2017-10-23 11:47:06.643477568 +0100 +++ gcc/simplify-rtx.c 2017-10-23 11:47:11.277288162 +0100 @@ -1165,7 +1165,8 @@ simplify_unary_operation_1 (enum rtx_cod if (STORE_FLAG_VALUE == 1) { temp = simplify_gen_binary (ASHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -1175,7 +1176,8 @@ simplify_unary_operation_1 (enum rtx_cod else if (STORE_FLAG_VALUE == -1) { temp = simplify_gen_binary (LSHIFTRT, inner, XEXP (op, 0), - GEN_INT (isize - 1)); + gen_int_shift_amount (inner, + isize - 1)); if (int_mode == inner) return temp; if (GET_MODE_PRECISION (int_mode) > isize) @@ -2679,7 +2681,8 @@ simplify_binary_operation_1 (enum rtx_co { val = wi::exact_log2 (rtx_mode_t (trueop1, mode)); if (val >= 0) - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (ASHIFT, mode, op0, + gen_int_shift_amount (mode, val)); } /* x*2 is x+x and x*(-1) is -x */ @@ -3303,7 +3306,8 @@ simplify_binary_operation_1 (enum rtx_co /* Convert divide by power of two into shift. */ if (CONST_INT_P (trueop1) && (val = exact_log2 (UINTVAL (trueop1))) > 0) - return simplify_gen_binary (LSHIFTRT, mode, op0, GEN_INT (val)); + return simplify_gen_binary (LSHIFTRT, mode, op0, + gen_int_shift_amount (mode, val)); break; case DIV: @@ -3423,10 +3427,12 @@ simplify_binary_operation_1 (enum rtx_co && IN_RANGE (INTVAL (trueop1), GET_MODE_UNIT_PRECISION (mode) / 2 + (code == ROTATE), GET_MODE_UNIT_PRECISION (mode) - 1)) - return simplify_gen_binary (code == ROTATE ? ROTATERT : ROTATE, - mode, op0, - GEN_INT (GET_MODE_UNIT_PRECISION (mode) - - INTVAL (trueop1))); + { + int new_amount = GET_MODE_UNIT_PRECISION (mode) - INTVAL (trueop1); + rtx new_amount_rtx = gen_int_shift_amount (mode, new_amount); + return simplify_gen_binary (code == ROTATE ? 
+				      mode, op0, new_amount_rtx);
+	}
 #endif
       /* FALLTHRU */
     case ASHIFTRT:
@@ -3466,8 +3472,8 @@ simplify_binary_operation_1 (enum rtx_co
 	      == GET_MODE_BITSIZE (inner_mode) - GET_MODE_BITSIZE (int_mode))
 	  && subreg_lowpart_p (op0))
 	{
-	  rtx tmp = GEN_INT (INTVAL (XEXP (SUBREG_REG (op0), 1))
-			     + INTVAL (op1));
+	  rtx tmp = gen_int_shift_amount
+	    (inner_mode, INTVAL (XEXP (SUBREG_REG (op0), 1)) + INTVAL (op1));
 	  tmp = simplify_gen_binary (code, inner_mode,
 				     XEXP (SUBREG_REG (op0), 0),
 				     tmp);
@@ -3478,7 +3484,8 @@ simplify_binary_operation_1 (enum rtx_co
 	{
 	  val = INTVAL (op1) & (GET_MODE_UNIT_PRECISION (mode) - 1);
 	  if (val != INTVAL (op1))
-	    return simplify_gen_binary (code, mode, op0, GEN_INT (val));
+	    return simplify_gen_binary (code, mode, op0,
+					gen_int_shift_amount (mode, val));
 	}
       break;

Index: gcc/combine.c
===================================================================
--- gcc/combine.c	2017-10-23 11:47:06.643477568 +0100
+++ gcc/combine.c	2017-10-23 11:47:11.272463287 +0100
@@ -3773,8 +3773,9 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
 	  && INTVAL (XEXP (*split, 1)) > 0
 	  && (i = exact_log2 (UINTVAL (XEXP (*split, 1)))) >= 0)
 	{
+	  rtx i_rtx = gen_int_shift_amount (split_mode, i);
 	  SUBST (*split, gen_rtx_ASHIFT (split_mode,
-					 XEXP (*split, 0), GEN_INT (i)));
+					 XEXP (*split, 0), i_rtx));
 	  /* Update split_code because we may not have a multiply
 	     anymore.  */
 	  split_code = GET_CODE (*split);
@@ -3788,8 +3789,10 @@ try_combine (rtx_insn *i3, rtx_insn *i2,
 	  && (i = exact_log2 (UINTVAL (XEXP (XEXP (*split, 0), 1)))) >= 0)
 	{
 	  rtx nsplit = XEXP (*split, 0);
+	  rtx i_rtx = gen_int_shift_amount (GET_MODE (nsplit), i);
 	  SUBST (XEXP (*split, 0), gen_rtx_ASHIFT (GET_MODE (nsplit),
-						   XEXP (nsplit, 0), GEN_INT (i)));
+						   XEXP (nsplit, 0),
+						   i_rtx));
 	  /* Update split_code because we may not have a multiply
 	     anymore.  */
 	  split_code = GET_CODE (*split);
@@ -5057,12 +5060,12 @@ find_split_point (rtx *loc, rtx_insn *in
 	      GET_MODE (XEXP (SET_SRC (x), 0))))))
 	{
 	  machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0));
-
+	  rtx pos_rtx = gen_int_shift_amount (mode, pos);
 	  SUBST (SET_SRC (x),
 		 gen_rtx_NEG (mode,
 			      gen_rtx_LSHIFTRT (mode,
 						XEXP (SET_SRC (x), 0),
-						GEN_INT (pos))));
+						pos_rtx)));

 	  split = find_split_point (&SET_SRC (x), insn, true);
 	  if (split && split != &SET_SRC (x))
@@ -5120,11 +5123,11 @@ find_split_point (rtx *loc, rtx_insn *in
 	{
 	  unsigned HOST_WIDE_INT mask
 	    = (HOST_WIDE_INT_1U << len) - 1;
+	  rtx pos_rtx = gen_int_shift_amount (mode, pos);
 	  SUBST (SET_SRC (x),
 		 gen_rtx_AND (mode,
 			      gen_rtx_LSHIFTRT
-			      (mode, gen_lowpart (mode, inner),
-			       GEN_INT (pos)),
+			      (mode, gen_lowpart (mode, inner), pos_rtx),
 			      gen_int_mode (mask, mode)));

 	  split = find_split_point (&SET_SRC (x), insn, true);
@@ -5133,14 +5136,15 @@ find_split_point (rtx *loc, rtx_insn *in
 	}
       else
 	{
+	  int left_bits = GET_MODE_PRECISION (mode) - len - pos;
+	  int right_bits = GET_MODE_PRECISION (mode) - len;
 	  SUBST (SET_SRC (x),
 		 gen_rtx_fmt_ee
 		 (unsignedp ? LSHIFTRT : ASHIFTRT, mode,
 		  gen_rtx_ASHIFT (mode,
 				  gen_lowpart (mode, inner),
-				  GEN_INT (GET_MODE_PRECISION (mode)
-					   - len - pos)),
-		  GEN_INT (GET_MODE_PRECISION (mode) - len)));
+				  gen_int_shift_amount (mode, left_bits)),
+		  gen_int_shift_amount (mode, right_bits)));

 	  split = find_split_point (&SET_SRC (x), insn, true);
 	  if (split && split != &SET_SRC (x))
@@ -8915,10 +8919,11 @@ force_int_to_mode (rtx x, scalar_int_mod
 	  /* Must be more sign bit copies than the mask needs.  */
 	  && ((int) num_sign_bit_copies (XEXP (x, 0), GET_MODE (XEXP (x, 0)))
 	      >= exact_log2 (mask + 1)))
-	x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0),
-				 GEN_INT (GET_MODE_PRECISION (xmode)
-					  - exact_log2 (mask + 1)));
-
+	{
+	  int nbits = GET_MODE_PRECISION (xmode) - exact_log2 (mask + 1);
+	  x = simplify_gen_binary (LSHIFTRT, xmode, XEXP (x, 0),
+				   gen_int_shift_amount (xmode, nbits));
+	}
       goto shiftrt;

     case ASHIFTRT:
@@ -10415,7 +10420,7 @@ simplify_shift_const_1 (enum rtx_code co
 {
   enum rtx_code orig_code = code;
   rtx orig_varop = varop;
-  int count;
+  int count, log2;
   machine_mode mode = result_mode;
   machine_mode shift_mode;
   scalar_int_mode tmode, inner_mode, int_mode, int_varop_mode, int_result_mode;
@@ -10618,13 +10623,11 @@ simplify_shift_const_1 (enum rtx_code co
 	     is cheaper.  But it is still better on those machines to
 	     merge two shifts into one.  */
 	  if (CONST_INT_P (XEXP (varop, 1))
-	      && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0)
+	      && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0)
 	    {
-	      varop
-		= simplify_gen_binary (ASHIFT, GET_MODE (varop),
-				       XEXP (varop, 0),
-				       GEN_INT (exact_log2 (
-						UINTVAL (XEXP (varop, 1)))));
+	      rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2);
+	      varop = simplify_gen_binary (ASHIFT, GET_MODE (varop),
+					   XEXP (varop, 0), log2_rtx);
 	      continue;
 	    }
 	  break;
@@ -10632,13 +10635,11 @@ simplify_shift_const_1 (enum rtx_code co
 	case UDIV:
 	  /* Similar, for when divides are cheaper.  */
 	  if (CONST_INT_P (XEXP (varop, 1))
-	      && exact_log2 (UINTVAL (XEXP (varop, 1))) >= 0)
+	      && (log2 = exact_log2 (UINTVAL (XEXP (varop, 1)))) >= 0)
 	    {
-	      varop
-		= simplify_gen_binary (LSHIFTRT, GET_MODE (varop),
-				       XEXP (varop, 0),
-				       GEN_INT (exact_log2 (
-						UINTVAL (XEXP (varop, 1)))));
+	      rtx log2_rtx = gen_int_shift_amount (GET_MODE (varop), log2);
+	      varop = simplify_gen_binary (LSHIFTRT, GET_MODE (varop),
+					   XEXP (varop, 0), log2_rtx);
 	      continue;
 	    }
 	  break;
@@ -10773,10 +10774,10 @@ simplify_shift_const_1 (enum rtx_code co
 	  mask_rtx = gen_int_mode (nonzero_bits (varop, int_varop_mode),
 				   int_result_mode);
-
+	  rtx count_rtx = gen_int_shift_amount (int_result_mode, count);
 	  mask_rtx
 	    = simplify_const_binary_operation (code, int_result_mode,
-					       mask_rtx, GEN_INT (count));
+					       mask_rtx, count_rtx);

 	  /* Give up if we can't compute an outer operation to use.  */
 	  if (mask_rtx == 0
@@ -10832,9 +10833,10 @@ simplify_shift_const_1 (enum rtx_code co
 	  if (code == ASHIFTRT && int_mode != int_result_mode)
 	    break;

+	  rtx count_rtx = gen_int_shift_amount (int_result_mode, count);
 	  rtx new_rtx = simplify_const_binary_operation (code, int_mode,
 							 XEXP (varop, 0),
-							 GEN_INT (count));
+							 count_rtx);
 	  varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1));
 	  count = 0;
 	  continue;
@@ -10900,7 +10902,7 @@ simplify_shift_const_1 (enum rtx_code co
 	      && (new_rtx = simplify_const_binary_operation
 		  (code, int_result_mode,
 		   gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
-		   GEN_INT (count))) != 0
+		   gen_int_shift_amount (int_result_mode, count))) != 0
 	      && CONST_INT_P (new_rtx)
 	      && merge_outer_ops (&outer_op, &outer_const, GET_CODE (varop),
 				  INTVAL (new_rtx), int_result_mode,
@@ -11043,7 +11045,7 @@ simplify_shift_const_1 (enum rtx_code co
 	      && (new_rtx = simplify_const_binary_operation
 		  (ASHIFT, int_result_mode,
 		   gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
-		   GEN_INT (count))) != 0
+		   gen_int_shift_amount (int_result_mode, count))) != 0
 	      && CONST_INT_P (new_rtx)
 	      && merge_outer_ops (&outer_op, &outer_const, PLUS,
 				  INTVAL (new_rtx), int_result_mode,
@@ -11064,7 +11066,7 @@ simplify_shift_const_1 (enum rtx_code co
 	      && (new_rtx = simplify_const_binary_operation
 		  (code, int_result_mode,
 		   gen_int_mode (INTVAL (XEXP (varop, 1)), int_result_mode),
-		   GEN_INT (count))) != 0
+		   gen_int_shift_amount (int_result_mode, count))) != 0
 	      && CONST_INT_P (new_rtx)
 	      && merge_outer_ops (&outer_op, &outer_const, XOR,
 				  INTVAL (new_rtx), int_result_mode,
@@ -11119,12 +11121,12 @@ simplify_shift_const_1 (enum rtx_code co
 	      - GET_MODE_UNIT_PRECISION (GET_MODE (varop)))))
 	    {
 	      rtx varop_inner = XEXP (varop, 0);
-
-	      varop_inner
-		= gen_rtx_LSHIFTRT (GET_MODE (varop_inner),
-				    XEXP (varop_inner, 0),
-				    GEN_INT
-				    (count + INTVAL (XEXP (varop_inner, 1))));
+	      int new_count = count + INTVAL (XEXP (varop_inner, 1));
+	      rtx new_count_rtx = gen_int_shift_amount (GET_MODE (varop_inner),
+							new_count);
+	      varop_inner = gen_rtx_LSHIFTRT (GET_MODE (varop_inner),
+					      XEXP (varop_inner, 0),
+					      new_count_rtx);
 	      varop = gen_rtx_TRUNCATE (GET_MODE (varop), varop_inner);
 	      count = 0;
 	      continue;
@@ -11176,7 +11178,8 @@ simplify_shift_const_1 (enum rtx_code co
     x = NULL_RTX;

   if (x == NULL_RTX)
-    x = simplify_gen_binary (code, shift_mode, varop, GEN_INT (count));
+    x = simplify_gen_binary (code, shift_mode, varop,
+			     gen_int_shift_amount (shift_mode, count));

   /* If we were doing an LSHIFTRT in a wider mode than it was originally,
      turn off all the bits that the shift would have turned off.  */
@@ -11238,7 +11241,8 @@ simplify_shift_const (rtx x, enum rtx_co
     return tem;

   if (!x)
-    x = simplify_gen_binary (code, GET_MODE (varop), varop, GEN_INT (count));
+    x = simplify_gen_binary (code, GET_MODE (varop), varop,
+			     gen_int_shift_amount (GET_MODE (varop), count));
   if (GET_MODE (x) != result_mode)
     x = gen_lowpart (result_mode, x);
   return x;
@@ -11429,8 +11433,9 @@ change_zero_ext (rtx pat)
       if (BITS_BIG_ENDIAN)
 	start = GET_MODE_PRECISION (inner_mode) - size - start;

-      if (start)
-	x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0), GEN_INT (start));
+      if (start != 0)
+	x = gen_rtx_LSHIFTRT (inner_mode, XEXP (x, 0),
+			      gen_int_shift_amount (inner_mode, start));
       else
 	x = XEXP (x, 0);
       if (mode != inner_mode)
Index: gcc/optabs.c
===================================================================
--- gcc/optabs.c	2017-10-23 11:47:06.643477568 +0100
+++ gcc/optabs.c	2017-10-23 11:47:11.276323187 +0100
@@ -431,8 +431,9 @@ expand_superword_shift (optab binoptab,
       if (binoptab != ashr_optab)
 	emit_move_insn (outof_target, CONST0_RTX (word_mode));
       else
-	if (!force_expand_binop (word_mode, binoptab,
-				 outof_input, GEN_INT (BITS_PER_WORD - 1),
+	if (!force_expand_binop (word_mode, binoptab, outof_input,
+				 gen_int_shift_amount (word_mode,
+						       BITS_PER_WORD - 1),
 				 outof_target, unsignedp, methods))
 	  return false;
     }
@@ -789,7 +790,8 @@ expand_doubleword_mult (machine_mode mod
 {
   int low = (WORDS_BIG_ENDIAN ? 1 : 0);
   int high = (WORDS_BIG_ENDIAN ? 0 : 1);
-  rtx wordm1 = umulp ? NULL_RTX : GEN_INT (BITS_PER_WORD - 1);
+  rtx wordm1 = (umulp ? NULL_RTX
+		: gen_int_shift_amount (word_mode, BITS_PER_WORD - 1));
   rtx product, adjust, product_high, temp;

   rtx op0_high = operand_subword_force (op0, high, mode);
@@ -1185,7 +1187,7 @@ expand_binop (machine_mode mode, optab b
 	  unsigned int bits = GET_MODE_PRECISION (int_mode);

 	  if (CONST_INT_P (op1))
-	    newop1 = GEN_INT (bits - INTVAL (op1));
+	    newop1 = gen_int_shift_amount (int_mode, bits - INTVAL (op1));
 	  else if (targetm.shift_truncation_mask (int_mode) == bits - 1)
 	    newop1 = negate_rtx (GET_MODE (op1), op1);
 	  else
@@ -1399,11 +1401,11 @@ expand_binop (machine_mode mode, optab b
       shift_mask = targetm.shift_truncation_mask (word_mode);
       op1_mode = (GET_MODE (op1) != VOIDmode
 		  ? as_a <scalar_int_mode> (GET_MODE (op1))
-		  : word_mode);
+		  : get_shift_amount_mode (word_mode));

       /* Apply the truncation to constant shifts.  */
       if (double_shift_mask > 0 && CONST_INT_P (op1))
-	op1 = GEN_INT (INTVAL (op1) & double_shift_mask);
+	op1 = gen_int_mode (INTVAL (op1) & double_shift_mask, op1_mode);

       if (op1 == CONST0_RTX (op1_mode))
 	return op0;
@@ -1513,7 +1515,7 @@ expand_binop (machine_mode mode, optab b
 	  else
 	    {
 	      rtx into_temp1, into_temp2, outof_temp1, outof_temp2;
-	      rtx first_shift_count, second_shift_count;
+	      HOST_WIDE_INT first_shift_count, second_shift_count;
 	      optab reverse_unsigned_shift, unsigned_shift;

 	      reverse_unsigned_shift = (left_shift ^ (shift_count < BITS_PER_WORD)
@@ -1524,20 +1526,24 @@ expand_binop (machine_mode mode, optab b
 	      if (shift_count > BITS_PER_WORD)
 		{
-		  first_shift_count = GEN_INT (shift_count - BITS_PER_WORD);
-		  second_shift_count = GEN_INT (2 * BITS_PER_WORD - shift_count);
+		  first_shift_count = shift_count - BITS_PER_WORD;
+		  second_shift_count = 2 * BITS_PER_WORD - shift_count;
 		}
 	      else
 		{
-		  first_shift_count = GEN_INT (BITS_PER_WORD - shift_count);
-		  second_shift_count = GEN_INT (shift_count);
+		  first_shift_count = BITS_PER_WORD - shift_count;
+		  second_shift_count = shift_count;
 		}
+	      rtx first_shift_count_rtx
+		= gen_int_shift_amount (word_mode, first_shift_count);
+	      rtx second_shift_count_rtx
+		= gen_int_shift_amount (word_mode, second_shift_count);

 	      into_temp1 = expand_binop (word_mode, unsigned_shift,
-					 outof_input, first_shift_count,
+					 outof_input, first_shift_count_rtx,
 					 NULL_RTX, unsignedp, next_methods);
 	      into_temp2 = expand_binop (word_mode, reverse_unsigned_shift,
-					 into_input, second_shift_count,
+					 into_input, second_shift_count_rtx,
 					 NULL_RTX, unsignedp, next_methods);

 	      if (into_temp1 != 0 && into_temp2 != 0)
@@ -1550,10 +1556,10 @@ expand_binop (machine_mode mode, optab b
 		emit_move_insn (into_target, inter);

 	      outof_temp1 = expand_binop (word_mode, unsigned_shift,
-					  into_input, first_shift_count,
+					  into_input, first_shift_count_rtx,
 					  NULL_RTX, unsignedp, next_methods);
 	      outof_temp2 = expand_binop (word_mode, reverse_unsigned_shift,
-					  outof_input, second_shift_count,
+					  outof_input, second_shift_count_rtx,
 					  NULL_RTX, unsignedp, next_methods);

 	      if (inter != 0 && outof_temp1 != 0 && outof_temp2 != 0)
@@ -2793,25 +2799,29 @@ expand_unop (machine_mode mode, optab un

       if (optab_handler (rotl_optab, mode) != CODE_FOR_nothing)
 	{
-	  temp = expand_binop (mode, rotl_optab, op0, GEN_INT (8), target,
-			       unsignedp, OPTAB_DIRECT);
+	  temp = expand_binop (mode, rotl_optab, op0,
+			       gen_int_shift_amount (mode, 8),
+			       target, unsignedp, OPTAB_DIRECT);
 	  if (temp)
 	    return temp;
 	}

       if (optab_handler (rotr_optab, mode) != CODE_FOR_nothing)
 	{
-	  temp = expand_binop (mode, rotr_optab, op0, GEN_INT (8), target,
-			       unsignedp, OPTAB_DIRECT);
+	  temp = expand_binop (mode, rotr_optab, op0,
+			       gen_int_shift_amount (mode, 8),
+			       target, unsignedp, OPTAB_DIRECT);
 	  if (temp)
 	    return temp;
 	}

       last = get_last_insn ();

-      temp1 = expand_binop (mode, ashl_optab, op0, GEN_INT (8), NULL_RTX,
+      temp1 = expand_binop (mode, ashl_optab, op0,
+			    gen_int_shift_amount (mode, 8), NULL_RTX,
 			    unsignedp, OPTAB_WIDEN);
-      temp2 = expand_binop (mode, lshr_optab, op0, GEN_INT (8), NULL_RTX,
+      temp2 = expand_binop (mode, lshr_optab, op0,
+			    gen_int_shift_amount (mode, 8), NULL_RTX,
 			    unsignedp, OPTAB_WIDEN);
       if (temp1 && temp2)
 	{
@@ -5369,11 +5379,11 @@ vector_compare_rtx (machine_mode cmp_mod
 }

 /* Checks if vec_perm mask SEL is a constant equivalent to a shift of the first
-   vec_perm operand, assuming the second operand is a constant vector of zeroes.
-   Return the shift distance in bits if so, or NULL_RTX if the vec_perm is not a
-   shift.  */
+   vec_perm operand (which has mode OP0_MODE), assuming the second
+   operand is a constant vector of zeroes.  Return the shift distance in
+   bits if so, or NULL_RTX if the vec_perm is not a shift.  */
 static rtx
-shift_amt_for_vec_perm_mask (rtx sel)
+shift_amt_for_vec_perm_mask (machine_mode op0_mode, rtx sel)
 {
   unsigned int i, first, nelt = GET_MODE_NUNITS (GET_MODE (sel));
   unsigned int bitsize = GET_MODE_UNIT_BITSIZE (GET_MODE (sel));
@@ -5393,7 +5403,7 @@ shift_amt_for_vec_perm_mask (rtx sel)
 	return NULL_RTX;
     }

-  return GEN_INT (first * bitsize);
+  return gen_int_shift_amount (op0_mode, first * bitsize);
 }

 /* A subroutine of expand_vec_perm for expanding one vec_perm insn.  */
@@ -5473,7 +5483,7 @@ expand_vec_perm (machine_mode mode, rtx
       && (shift_code != CODE_FOR_nothing
 	  || shift_code_qi != CODE_FOR_nothing))
     {
-      shift_amt = shift_amt_for_vec_perm_mask (sel);
+      shift_amt = shift_amt_for_vec_perm_mask (mode, sel);
       if (shift_amt)
 	{
 	  struct expand_operand ops[3];
@@ -5563,7 +5573,8 @@ expand_vec_perm (machine_mode mode, rtx
 				    NULL, 0, OPTAB_DIRECT);
       else
 	sel = expand_simple_binop (selmode, ASHIFT, sel,
-				   GEN_INT (exact_log2 (u)),
+				   gen_int_shift_amount (selmode,
+							 exact_log2 (u)),
 				   NULL, 0, OPTAB_DIRECT);
       gcc_assert (sel != NULL);
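
For readers following the call sites above, here is a minimal sketch of the shape such a helper could take.  This is an illustration only, not the series' own definition; it assumes nothing beyond the get_shift_amount_mode hook already used in the expand_binop hunk:

  /* Illustrative sketch only: return VALUE as a CONST_INT in whatever
     mode the target wants for shift counts applied to values of mode
     MODE.  */
  rtx
  gen_int_shift_amount (machine_mode mode, HOST_WIDE_INT value)
  {
    return gen_int_mode (value, get_shift_amount_mode (mode));
  }

With something like this in place, a call site only needs the mode of the value being shifted, e.g. gen_int_shift_amount (word_mode, BITS_PER_WORD - 1) in expand_superword_shift above, instead of choosing a mode for the constant itself.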