Message ID | 87r2qyc1ww.fsf@linaro.org
---|---
State | New
Series | Use poly_int tree accessors
On 01/09/2018 11:39 AM, Richard Sandiford wrote: > This patch generalises various places that used hwi tree accessors > so that they can handle poly_ints instead. Earlier patches did > this while updating interfaces; this patch just mops up some > left-over pieces that weren't necessary to make things compile, > but that still make sense. > > In many cases these changes are by inspection rather than because > something had shown them to be necessary. > > I think the alias.c part is a minor bug fix: previously we used > fits_uhwi_p for a signed HOST_WIDE_INT (which the caller does > treat as signed rather than unsigned). We also checked whether > each individual offset overflowed but didn't check whether the > sum did. > > Sorry for not posting this earlier. I kept holding it back in case > more examples showed up. > > Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu. > Also tested by comparing the before-and-after assembly output for at > least one target per CPU directory. OK to install? > > Richard > > > 2018-01-09 Richard Sandiford <richard.sandiford@linaro.org> > > gcc/ > * alias.c (adjust_offset_for_component_ref): Use poly_int_tree_p > and wi::to_poly_offset. Add the current offset and then check > whether the sum fits, rather than using an unchecked addition of > a checked term. Check for a shwi rather than a uhwi. > * expr.c (get_bit_range): Use tree_to_poly_uint64. > (store_constructor): Use poly_int_tree_p. > (expand_expr_real_1): Likewise. > * function.c (assign_temp): Likewise. > * fold-const.c (const_binop): Use poly_int_tree_p and > wi::to_poly_offset. > (fold_indirect_ref_1): Likewise. Use known_in_range_p to test > for an in-range vector access and multiple_p to attempt an exact > division. > * gimplify.c (gimple_add_tmp_var_fn): Use tree_fits_poly_uint64_p. > (gimple_add_tmp_var): Likewise. > * ipa-icf-gimple.c (func_checker::compare_operand): Use > to_poly_offset for MEM offsets. > * ipa-icf.c (sem_variable::equals): Likewise. > * stor-layout.c (compute_record_mode): Use poly_int_tree_p. > * tree-vectorizer.c (get_vec_alignment_for_array_type): Likewise. > * tree-predcom.c (aff_combination_dr_offset): Use wi::to_poly_widest > rather than wi::to_widest for DR_INITs. > * tree-ssa-sccvn.c (ao_ref_init_from_vn_reference): Use > wi::to_poly_offset for BIT_FIELD_REF offsets. > (vn_reference_maybe_forwprop_address): Use poly_int_tree_p and > wi::to_poly_offset. > * tree-vect-data-refs.c (vect_find_same_alignment_drs): Use > wi::to_poly_offset for DR_INIT. > (vect_analyze_data_ref_accesses): Require both DR_INITs to be > INTEGER_CSTs. > (vect_analyze_group_access_1): Note that here. > * var-tracking.c (emit_note_insn_var_location): Use > tree_to_poly_uint64. How important is this? We're just about to move into stage4 and this feels a bit more like something we should do in stage1. jeff
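The description above leans on the poly_int comparison predicates: a relation is only "known" if it holds for every possible runtime value of the vector-length indeterminate, and only "maybe" true if it holds for at least one, which is why the patch replaces tests such as diff != 0 with maybe_ne (diff, 0) and plain == with known_eq. The following is a minimal standalone sketch of those semantics, not GCC's poly_int.h; it assumes a single nonnegative indeterminate X and uses an invented poly_model type purely for illustration.

```cpp
/* Minimal standalone model of the degree-1 poly_int comparison semantics
   discussed above (known_eq, maybe_ne, maybe_lt).  This is not GCC's
   poly_int.h: it assumes a single runtime indeterminate X >= 0, so every
   value is A + B * X, with B == 0 for ordinary compile-time constants.  */
#include <cstdint>
#include <cassert>

struct poly_model
{
  int64_t a; /* constant term */
  int64_t b; /* coefficient of the runtime indeterminate X */
};

/* True if P equals Q for every possible X.  */
static bool
known_eq (poly_model p, poly_model q)
{
  return p.a == q.a && p.b == q.b;
}

/* True if P differs from Q for at least one possible X.  */
static bool
maybe_ne (poly_model p, poly_model q)
{
  return !known_eq (p, q);
}

/* True if P is less than Q for at least one possible X: either already
   smaller at X == 0, or growing more slowly than Q.  */
static bool
maybe_lt (poly_model p, poly_model q)
{
  return p.a < q.a || p.b < q.b;
}

int
main ()
{
  poly_model sixteen = { 16, 0 }; /* a fixed 16-byte size */
  poly_model scaled = { 0, 16 };  /* a size of 16 * X bytes */

  assert (!known_eq (sixteen, scaled)); /* cannot be proved equal...  */
  assert (maybe_ne (sixteen, scaled));  /* ...so they might differ */
  assert (maybe_lt (sixteen, scaled));  /* 16 < 16 * X once X > 1 */
  assert (maybe_lt (scaled, sixteen));  /* 16 * X < 16 when X == 0 */
  return 0;
}
```

In this model known_eq and maybe_lt are not opposites: a fixed 16-byte size and a 16 * X-byte size satisfy maybe_lt in both directions, which is exactly the kind of relation the plain hwi accessors could not express.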
Jeff Law <law@redhat.com> writes: > On 01/09/2018 11:39 AM, Richard Sandiford wrote: >> This patch generalises various places that used hwi tree accessors >> so that they can handle poly_ints instead. Earlier patches did >> this while updating interfaces; this patch just mops up some >> left-over pieces that weren't necessary to make things compile, >> but that still make sense. >> >> In many cases these changes are by inspection rather than because >> something had shown them to be necessary. >> >> I think the alias.c part is a minor bug fix: previously we used >> fits_uhwi_p for a signed HOST_WIDE_INT (which the caller does >> treat as signed rather than unsigned). We also checked whether >> each individual offset overflowed but didn't check whether the >> sum did. >> >> Sorry for not posting this earlier. I kept holding it back in case >> more examples showed up. >> >> Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu. >> Also tested by comparing the before-and-after assembly output for at >> least one target per CPU directory. OK to install? >> >> Richard >> >> >> 2018-01-09 Richard Sandiford <richard.sandiford@linaro.org> >> >> gcc/ >> * alias.c (adjust_offset_for_component_ref): Use poly_int_tree_p >> and wi::to_poly_offset. Add the current offset and then check >> whether the sum fits, rather than using an unchecked addition of >> a checked term. Check for a shwi rather than a uhwi. >> * expr.c (get_bit_range): Use tree_to_poly_uint64. >> (store_constructor): Use poly_int_tree_p. >> (expand_expr_real_1): Likewise. >> * function.c (assign_temp): Likewise. >> * fold-const.c (const_binop): Use poly_int_tree_p and >> wi::to_poly_offset. >> (fold_indirect_ref_1): Likewise. Use known_in_range_p to test >> for an in-range vector access and multiple_p to attempt an exact >> division. >> * gimplify.c (gimple_add_tmp_var_fn): Use tree_fits_poly_uint64_p. >> (gimple_add_tmp_var): Likewise. >> * ipa-icf-gimple.c (func_checker::compare_operand): Use >> to_poly_offset for MEM offsets. >> * ipa-icf.c (sem_variable::equals): Likewise. >> * stor-layout.c (compute_record_mode): Use poly_int_tree_p. >> * tree-vectorizer.c (get_vec_alignment_for_array_type): Likewise. >> * tree-predcom.c (aff_combination_dr_offset): Use wi::to_poly_widest >> rather than wi::to_widest for DR_INITs. >> * tree-ssa-sccvn.c (ao_ref_init_from_vn_reference): Use >> wi::to_poly_offset for BIT_FIELD_REF offsets. >> (vn_reference_maybe_forwprop_address): Use poly_int_tree_p and >> wi::to_poly_offset. >> * tree-vect-data-refs.c (vect_find_same_alignment_drs): Use >> wi::to_poly_offset for DR_INIT. >> (vect_analyze_data_ref_accesses): Require both DR_INITs to be >> INTEGER_CSTs. >> (vect_analyze_group_access_1): Note that here. >> * var-tracking.c (emit_note_insn_var_location): Use >> tree_to_poly_uint64. > How important is this? We're just about to move into stage4 and this > feels a bit more like something we should do in stage1. The gimplify.c part is needed in order to build libgfortran for SVE. There were certainly failures without the DR_INIT changes too. I think at least some of the others are also needed to fix test failures, but after a certain point I tried to proactively convert code rather than wait for a testcase to show up why. Sorry for not keeping better notes. Most of these changes are old -- the gimplify.c part has been around for a year and half -- but I was trying to roll stuff up to avoid posting too many patches of this kind. 
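The alias.c fix that Richard describes swaps a check-then-add sequence (where the checked term fits but the sum can still wrap) for forming the sum at wide precision and narrowing it with a single checked conversion, in the style of offset_int's to_shwi. Below is a standalone sketch of that shape using int64_t and int32_t as stand-ins for the wide and narrow types; the helper names are made up for illustration and are not GCC code.

```cpp
/* Sketch of the overflow handling described for the alias.c change: fold
   the running offset into the wide-precision value, then narrow it with one
   checked conversion, instead of range-checking a single term and adding it
   to the running offset unchecked.  Illustrative names, not GCC code.  */
#include <cstdint>
#include <iostream>

/* Checked narrowing in the style of offset_int::to_shwi: report whether
   WIDE fits and store it through OUT only on success.  */
static bool
narrow_checked (int64_t wide, int32_t *out)
{
  if (wide < INT32_MIN || wide > INT32_MAX)
    return false;
  *out = static_cast<int32_t> (wide);
  return true;
}

/* Add TERM to *OFFSET, failing (and leaving *OFFSET alone) if the sum does
   not fit.  The sum is formed in 64 bits, so it cannot itself overflow for
   32-bit inputs.  */
static bool
add_offset (int32_t *offset, int32_t term)
{
  int64_t sum = static_cast<int64_t> (term) + *offset;
  return narrow_checked (sum, offset);
}

int
main ()
{
  int32_t offset = INT32_MAX - 10;

  std::cout << add_offset (&offset, 5) << '\n';    /* 1: the sum fits */
  std::cout << add_offset (&offset, 100) << '\n';  /* 0: the sum overflows */
  return 0;
}
```

The second call fails even though the term and the running offset each fit on their own, which is the overflow case the old "check the term, then add unchecked" code missed.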
I can try to redo it so that it's only shwi->poly_int64 and uhwi->poly_uint64 (and so drop things like the alias.c bounds checking fix) if that seems safer at this stage. Richard
Richard Sandiford <richard.sandiford@linaro.org> writes: > Jeff Law <law@redhat.com> writes: >> On 01/09/2018 11:39 AM, Richard Sandiford wrote: >>> This patch generalises various places that used hwi tree accessors >>> so that they can handle poly_ints instead. Earlier patches did >>> this while updating interfaces; this patch just mops up some >>> left-over pieces that weren't necessary to make things compile, >>> but that still make sense. >>> >>> In many cases these changes are by inspection rather than because >>> something had shown them to be necessary. >>> >>> I think the alias.c part is a minor bug fix: previously we used >>> fits_uhwi_p for a signed HOST_WIDE_INT (which the caller does >>> treat as signed rather than unsigned). We also checked whether >>> each individual offset overflowed but didn't check whether the >>> sum did. >>> >>> Sorry for not posting this earlier. I kept holding it back in case >>> more examples showed up. >>> >>> Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu. >>> Also tested by comparing the before-and-after assembly output for at >>> least one target per CPU directory. OK to install? >>> >>> Richard >>> >>> >>> 2018-01-09 Richard Sandiford <richard.sandiford@linaro.org> >>> >>> gcc/ >>> * alias.c (adjust_offset_for_component_ref): Use poly_int_tree_p >>> and wi::to_poly_offset. Add the current offset and then check >>> whether the sum fits, rather than using an unchecked addition of >>> a checked term. Check for a shwi rather than a uhwi. >>> * expr.c (get_bit_range): Use tree_to_poly_uint64. >>> (store_constructor): Use poly_int_tree_p. >>> (expand_expr_real_1): Likewise. >>> * function.c (assign_temp): Likewise. >>> * fold-const.c (const_binop): Use poly_int_tree_p and >>> wi::to_poly_offset. >>> (fold_indirect_ref_1): Likewise. Use known_in_range_p to test >>> for an in-range vector access and multiple_p to attempt an exact >>> division. >>> * gimplify.c (gimple_add_tmp_var_fn): Use tree_fits_poly_uint64_p. >>> (gimple_add_tmp_var): Likewise. >>> * ipa-icf-gimple.c (func_checker::compare_operand): Use >>> to_poly_offset for MEM offsets. >>> * ipa-icf.c (sem_variable::equals): Likewise. >>> * stor-layout.c (compute_record_mode): Use poly_int_tree_p. >>> * tree-vectorizer.c (get_vec_alignment_for_array_type): Likewise. >>> * tree-predcom.c (aff_combination_dr_offset): Use wi::to_poly_widest >>> rather than wi::to_widest for DR_INITs. >>> * tree-ssa-sccvn.c (ao_ref_init_from_vn_reference): Use >>> wi::to_poly_offset for BIT_FIELD_REF offsets. >>> (vn_reference_maybe_forwprop_address): Use poly_int_tree_p and >>> wi::to_poly_offset. >>> * tree-vect-data-refs.c (vect_find_same_alignment_drs): Use >>> wi::to_poly_offset for DR_INIT. >>> (vect_analyze_data_ref_accesses): Require both DR_INITs to be >>> INTEGER_CSTs. >>> (vect_analyze_group_access_1): Note that here. >>> * var-tracking.c (emit_note_insn_var_location): Use >>> tree_to_poly_uint64. >> How important is this? We're just about to move into stage4 and this >> feels a bit more like something we should do in stage1. > > The gimplify.c part is needed in order to build libgfortran for SVE. > There were certainly failures without the DR_INIT changes too. > I think at least some of the others are also needed to fix test > failures, but after a certain point I tried to proactively convert > code rather than wait for a testcase to show up why. > > Sorry for not keeping better notes. 
> > Most of these changes are old -- the gimplify.c part has been around > for a year and half -- but I was trying to roll stuff up to avoid > posting too many patches of this kind. > > I can try to redo it so that it's only shwi->poly_int64 and > uhwi->poly_uint64 (and so drop things like the alias.c bounds > checking fix) if that seems safer at this stage. It turns out that the only two parts needed to build an SVE toolchain are the gimplify.c change I mentioned above and the tree-vectorizer.c change. I'll reply with individual patches for those. I'd still like to fix the DR_INIT uses too, even though I no longer have a specific example that needs it. None of the other changes are needed to get clean test results or when building the benchmarks that we've been using for SVE, so I'm happy to leave those to GCC 9 if you think that's better. None of the changes in the corresponding rtl patch seem to be needed for correctness. Thanks, Richard
On 01/12/2018 06:31 AM, Richard Sandiford wrote: > Richard Sandiford <richard.sandiford@linaro.org> writes: >> Jeff Law <law@redhat.com> writes: >>> On 01/09/2018 11:39 AM, Richard Sandiford wrote: >>>> This patch generalises various places that used hwi tree accessors >>>> so that they can handle poly_ints instead. Earlier patches did >>>> this while updating interfaces; this patch just mops up some >>>> left-over pieces that weren't necessary to make things compile, >>>> but that still make sense. >>>> >>>> In many cases these changes are by inspection rather than because >>>> something had shown them to be necessary. >>>> >>>> I think the alias.c part is a minor bug fix: previously we used >>>> fits_uhwi_p for a signed HOST_WIDE_INT (which the caller does >>>> treat as signed rather than unsigned). We also checked whether >>>> each individual offset overflowed but didn't check whether the >>>> sum did. >>>> >>>> Sorry for not posting this earlier. I kept holding it back in case >>>> more examples showed up. >>>> >>>> Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu. >>>> Also tested by comparing the before-and-after assembly output for at >>>> least one target per CPU directory. OK to install? >>>> >>>> Richard >>>> >>>> >>>> 2018-01-09 Richard Sandiford <richard.sandiford@linaro.org> >>>> >>>> gcc/ >>>> * alias.c (adjust_offset_for_component_ref): Use poly_int_tree_p >>>> and wi::to_poly_offset. Add the current offset and then check >>>> whether the sum fits, rather than using an unchecked addition of >>>> a checked term. Check for a shwi rather than a uhwi. >>>> * expr.c (get_bit_range): Use tree_to_poly_uint64. >>>> (store_constructor): Use poly_int_tree_p. >>>> (expand_expr_real_1): Likewise. >>>> * function.c (assign_temp): Likewise. >>>> * fold-const.c (const_binop): Use poly_int_tree_p and >>>> wi::to_poly_offset. >>>> (fold_indirect_ref_1): Likewise. Use known_in_range_p to test >>>> for an in-range vector access and multiple_p to attempt an exact >>>> division. >>>> * gimplify.c (gimple_add_tmp_var_fn): Use tree_fits_poly_uint64_p. >>>> (gimple_add_tmp_var): Likewise. >>>> * ipa-icf-gimple.c (func_checker::compare_operand): Use >>>> to_poly_offset for MEM offsets. >>>> * ipa-icf.c (sem_variable::equals): Likewise. >>>> * stor-layout.c (compute_record_mode): Use poly_int_tree_p. >>>> * tree-vectorizer.c (get_vec_alignment_for_array_type): Likewise. >>>> * tree-predcom.c (aff_combination_dr_offset): Use wi::to_poly_widest >>>> rather than wi::to_widest for DR_INITs. >>>> * tree-ssa-sccvn.c (ao_ref_init_from_vn_reference): Use >>>> wi::to_poly_offset for BIT_FIELD_REF offsets. >>>> (vn_reference_maybe_forwprop_address): Use poly_int_tree_p and >>>> wi::to_poly_offset. >>>> * tree-vect-data-refs.c (vect_find_same_alignment_drs): Use >>>> wi::to_poly_offset for DR_INIT. >>>> (vect_analyze_data_ref_accesses): Require both DR_INITs to be >>>> INTEGER_CSTs. >>>> (vect_analyze_group_access_1): Note that here. >>>> * var-tracking.c (emit_note_insn_var_location): Use >>>> tree_to_poly_uint64. >>> How important is this? We're just about to move into stage4 and this >>> feels a bit more like something we should do in stage1. >> >> The gimplify.c part is needed in order to build libgfortran for SVE. >> There were certainly failures without the DR_INIT changes too. >> I think at least some of the others are also needed to fix test >> failures, but after a certain point I tried to proactively convert >> code rather than wait for a testcase to show up why. 
>> >> Sorry for not keeping better notes. >> >> Most of these changes are old -- the gimplify.c part has been around >> for a year and half -- but I was trying to roll stuff up to avoid >> posting too many patches of this kind. >> >> I can try to redo it so that it's only shwi->poly_int64 and >> uhwi->poly_uint64 (and so drop things like the alias.c bounds >> checking fix) if that seems safer at this stage. > > It turns out that the only two parts needed to build an SVE toolchain are > the gimplify.c change I mentioned above and the tree-vectorizer.c change. > I'll reply with individual patches for those. I'd still like to fix the > DR_INIT uses too, even though I no longer have a specific example that > needs it. Thanks for splitting those out. > > None of the other changes are needed to get clean test results or > when building the benchmarks that we've been using for SVE, so I'm > happy to leave those to GCC 9 if you think that's better. > > None of the changes in the corresponding rtl patch seem to be needed > for correctness. I'm certainly looking to reduce the churn at this point :-) Obviously if something comes up and we see a need we can address it. I wouldn't be terribly surprised if that happens over time. jeff
On 01/09/2018 11:39 AM, Richard Sandiford wrote: > This patch generalises various places that used hwi tree accessors > so that they can handle poly_ints instead. Earlier patches did > this while updating interfaces; this patch just mops up some > left-over pieces that weren't necessary to make things compile, > but that still make sense. > > In many cases these changes are by inspection rather than because > something had shown them to be necessary. > > I think the alias.c part is a minor bug fix: previously we used > fits_uhwi_p for a signed HOST_WIDE_INT (which the caller does > treat as signed rather than unsigned). We also checked whether > each individual offset overflowed but didn't check whether the > sum did. > > Sorry for not posting this earlier. I kept holding it back in case > more examples showed up. > > Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu. > Also tested by comparing the before-and-after assembly output for at > least one target per CPU directory. OK to install? > > Richard > > > 2018-01-09 Richard Sandiford <richard.sandiford@linaro.org> > > gcc/ > * alias.c (adjust_offset_for_component_ref): Use poly_int_tree_p > and wi::to_poly_offset. Add the current offset and then check > whether the sum fits, rather than using an unchecked addition of > a checked term. Check for a shwi rather than a uhwi. > * expr.c (get_bit_range): Use tree_to_poly_uint64. > (store_constructor): Use poly_int_tree_p. > (expand_expr_real_1): Likewise. > * function.c (assign_temp): Likewise. > * fold-const.c (const_binop): Use poly_int_tree_p and > wi::to_poly_offset. > (fold_indirect_ref_1): Likewise. Use known_in_range_p to test > for an in-range vector access and multiple_p to attempt an exact > division. > * gimplify.c (gimple_add_tmp_var_fn): Use tree_fits_poly_uint64_p. > (gimple_add_tmp_var): Likewise. > * ipa-icf-gimple.c (func_checker::compare_operand): Use > to_poly_offset for MEM offsets. > * ipa-icf.c (sem_variable::equals): Likewise. > * stor-layout.c (compute_record_mode): Use poly_int_tree_p. > * tree-vectorizer.c (get_vec_alignment_for_array_type): Likewise. > * tree-predcom.c (aff_combination_dr_offset): Use wi::to_poly_widest > rather than wi::to_widest for DR_INITs. > * tree-ssa-sccvn.c (ao_ref_init_from_vn_reference): Use > wi::to_poly_offset for BIT_FIELD_REF offsets. > (vn_reference_maybe_forwprop_address): Use poly_int_tree_p and > wi::to_poly_offset. > * tree-vect-data-refs.c (vect_find_same_alignment_drs): Use > wi::to_poly_offset for DR_INIT. > (vect_analyze_data_ref_accesses): Require both DR_INITs to be > INTEGER_CSTs. > (vect_analyze_group_access_1): Note that here. > * var-tracking.c (emit_note_insn_var_location): Use > tree_to_poly_uint64. OK. If minor edits are necessary to deal with changes since this was originally posted, consider those pre-approved. Jeff
Jeff Law <law@redhat.com> writes: > On 01/09/2018 11:39 AM, Richard Sandiford wrote: >> This patch generalises various places that used hwi tree accessors >> so that they can handle poly_ints instead. Earlier patches did >> this while updating interfaces; this patch just mops up some >> left-over pieces that weren't necessary to make things compile, >> but that still make sense. >> >> In many cases these changes are by inspection rather than because >> something had shown them to be necessary. >> >> I think the alias.c part is a minor bug fix: previously we used >> fits_uhwi_p for a signed HOST_WIDE_INT (which the caller does >> treat as signed rather than unsigned). We also checked whether >> each individual offset overflowed but didn't check whether the >> sum did. >> >> Sorry for not posting this earlier. I kept holding it back in case >> more examples showed up. >> >> Tested on aarch64-linux-gnu, x86_64-linux-gnu and powerpc64le-linux-gnu. >> Also tested by comparing the before-and-after assembly output for at >> least one target per CPU directory. OK to install? >> >> Richard >> >> >> 2018-01-09 Richard Sandiford <richard.sandiford@linaro.org> >> >> gcc/ >> * alias.c (adjust_offset_for_component_ref): Use poly_int_tree_p >> and wi::to_poly_offset. Add the current offset and then check >> whether the sum fits, rather than using an unchecked addition of >> a checked term. Check for a shwi rather than a uhwi. >> * expr.c (get_bit_range): Use tree_to_poly_uint64. >> (store_constructor): Use poly_int_tree_p. >> (expand_expr_real_1): Likewise. >> * function.c (assign_temp): Likewise. >> * fold-const.c (const_binop): Use poly_int_tree_p and >> wi::to_poly_offset. >> (fold_indirect_ref_1): Likewise. Use known_in_range_p to test >> for an in-range vector access and multiple_p to attempt an exact >> division. >> * gimplify.c (gimple_add_tmp_var_fn): Use tree_fits_poly_uint64_p. >> (gimple_add_tmp_var): Likewise. >> * ipa-icf-gimple.c (func_checker::compare_operand): Use >> to_poly_offset for MEM offsets. >> * ipa-icf.c (sem_variable::equals): Likewise. >> * stor-layout.c (compute_record_mode): Use poly_int_tree_p. >> * tree-vectorizer.c (get_vec_alignment_for_array_type): Likewise. >> * tree-predcom.c (aff_combination_dr_offset): Use wi::to_poly_widest >> rather than wi::to_widest for DR_INITs. >> * tree-ssa-sccvn.c (ao_ref_init_from_vn_reference): Use >> wi::to_poly_offset for BIT_FIELD_REF offsets. >> (vn_reference_maybe_forwprop_address): Use poly_int_tree_p and >> wi::to_poly_offset. >> * tree-vect-data-refs.c (vect_find_same_alignment_drs): Use >> wi::to_poly_offset for DR_INIT. >> (vect_analyze_data_ref_accesses): Require both DR_INITs to be >> INTEGER_CSTs. >> (vect_analyze_group_access_1): Note that here. >> * var-tracking.c (emit_note_insn_var_location): Use >> tree_to_poly_uint64. > OK. If minor edits are necessary to deal changes since this was > originally posted, consider those pre-approved. Thanks Jeff! Finally got round to updating and retesting this. Committed as r260914. Richard 2018-05-30 Richard Sandiford <richard.sandiford@linaro.org> gcc/ * alias.c (adjust_offset_for_component_ref): Use poly_int_tree_p and wi::to_poly_offset. Add the current offset and then check whether the sum fits, rather than using an unchecked addition of a checked term. Check for a shwi rather than a uhwi. * expr.c (get_bit_range): Use tree_to_poly_uint64. (store_constructor): Use poly_int_tree_p. (expand_expr_real_1): Likewise. * function.c (assign_temp): Likewise. 
* fold-const.c (const_binop): Use poly_int_tree_p and wi::to_poly_offset. (fold_indirect_ref_1): Likewise. Use multiple_p to attempt an exact division. * ipa-icf-gimple.c (func_checker::compare_operand): Use to_poly_offset for MEM offsets. * ipa-icf.c (sem_variable::equals): Likewise. * stor-layout.c (compute_record_mode): Use poly_int_tree_p. * tree-ssa-sccvn.c (ao_ref_init_from_vn_reference): Use wi::to_poly_offset for BIT_FIELD_REF offsets. (vn_reference_maybe_forwprop_address): Use poly_int_tree_p and wi::to_poly_offset. * var-tracking.c (emit_note_insn_var_location): Use tree_to_poly_uint64. Index: gcc/alias.c =================================================================== --- gcc/alias.c 2018-04-10 11:26:52.500490858 +0100 +++ gcc/alias.c 2018-05-30 07:26:50.931554833 +0100 @@ -2698,22 +2698,22 @@ adjust_offset_for_component_ref (tree x, { tree xoffset = component_ref_field_offset (x); tree field = TREE_OPERAND (x, 1); - if (TREE_CODE (xoffset) != INTEGER_CST) + if (!poly_int_tree_p (xoffset)) { *known_p = false; return; } - offset_int woffset - = (wi::to_offset (xoffset) + poly_offset_int woffset + = (wi::to_poly_offset (xoffset) + (wi::to_offset (DECL_FIELD_BIT_OFFSET (field)) - >> LOG2_BITS_PER_UNIT)); - if (!wi::fits_uhwi_p (woffset)) + >> LOG2_BITS_PER_UNIT) + + *offset); + if (!woffset.to_shwi (offset)) { *known_p = false; return; } - *offset += woffset.to_uhwi (); x = TREE_OPERAND (x, 0); } Index: gcc/expr.c =================================================================== --- gcc/expr.c 2018-05-18 09:26:37.721714880 +0100 +++ gcc/expr.c 2018-05-30 07:26:50.933554808 +0100 @@ -4913,7 +4913,7 @@ get_bit_range (poly_uint64_pod *bitstart else *bitstart = *bitpos - bitoffset; - *bitend = *bitstart + tree_to_uhwi (DECL_SIZE (repr)) - 1; + *bitend = *bitstart + tree_to_poly_uint64 (DECL_SIZE (repr)) - 1; } /* Returns true if ADDR is an ADDR_EXPR of a DECL that does not reside @@ -6521,12 +6521,10 @@ store_constructor (tree exp, rtx target, continue; mode = TYPE_MODE (elttype); - if (mode == BLKmode) - bitsize = (tree_fits_uhwi_p (TYPE_SIZE (elttype)) - ? tree_to_uhwi (TYPE_SIZE (elttype)) - : -1); - else + if (mode != BLKmode) bitsize = GET_MODE_BITSIZE (mode); + else if (!poly_int_tree_p (TYPE_SIZE (elttype), &bitsize)) + bitsize = -1; if (index != NULL_TREE && TREE_CODE (index) == RANGE_EXPR) { @@ -10235,11 +10233,11 @@ expand_expr_real_1 (tree exp, rtx target { poly_int64 offset = mem_ref_offset (exp).force_shwi (); base = TREE_OPERAND (base, 0); + poly_uint64 type_size; if (known_eq (offset, 0) && !reverse - && tree_fits_uhwi_p (TYPE_SIZE (type)) - && known_eq (GET_MODE_BITSIZE (DECL_MODE (base)), - tree_to_uhwi (TYPE_SIZE (type)))) + && poly_int_tree_p (TYPE_SIZE (type), &type_size) + && known_eq (GET_MODE_BITSIZE (DECL_MODE (base)), type_size)) return expand_expr (build1 (VIEW_CONVERT_EXPR, type, base), target, tmode, modifier); if (TYPE_MODE (type) == BLKmode) Index: gcc/function.c =================================================================== --- gcc/function.c 2018-03-24 10:52:14.276582048 +0000 +++ gcc/function.c 2018-05-30 07:26:50.935554783 +0100 @@ -978,25 +978,26 @@ assign_temp (tree type_or_decl, int memo if (mode == BLKmode || memory_required) { - HOST_WIDE_INT size = int_size_in_bytes (type); + poly_int64 size; rtx tmp; - /* Zero sized arrays are GNU C extension. Set size to 1 to avoid - problems with allocating the stack space. */ - if (size == 0) - size = 1; - /* Unfortunately, we don't yet know how to allocate variable-sized temporaries. 
However, sometimes we can find a fixed upper limit on the size, so try that instead. */ - else if (size == -1) + if (!poly_int_tree_p (TYPE_SIZE_UNIT (type), &size)) size = max_int_size_in_bytes (type); + /* Zero sized arrays are a GNU C extension. Set size to 1 to avoid + problems with allocating the stack space. */ + if (known_eq (size, 0)) + size = 1; + /* The size of the temporary may be too large to fit into an integer. */ /* ??? Not sure this should happen except for user silliness, so limit this to things that aren't compiler-generated temporaries. The rest of the time we'll die in assign_stack_temp_for_type. */ - if (decl && size == -1 + if (decl + && !known_size_p (size) && TREE_CODE (TYPE_SIZE_UNIT (type)) == INTEGER_CST) { error ("size of variable %q+D is too large", decl); Index: gcc/fold-const.c =================================================================== --- gcc/fold-const.c 2018-05-30 07:24:51.607241042 +0100 +++ gcc/fold-const.c 2018-05-30 07:26:50.934554795 +0100 @@ -1611,10 +1611,10 @@ const_binop (enum tree_code code, tree t return NULL_TREE; case POINTER_DIFF_EXPR: - if (TREE_CODE (arg1) == INTEGER_CST && TREE_CODE (arg2) == INTEGER_CST) + if (poly_int_tree_p (arg1) && poly_int_tree_p (arg2)) { - offset_int res = wi::sub (wi::to_offset (arg1), - wi::to_offset (arg2)); + poly_offset_int res = (wi::to_poly_offset (arg1) + - wi::to_poly_offset (arg2)); return force_fit_type (type, res, 1, TREE_OVERFLOW (arg1) | TREE_OVERFLOW (arg2)); } @@ -14193,13 +14193,12 @@ fold_indirect_ref_1 (location_t loc, tre tree min_val = size_zero_node; if (type_domain && TYPE_MIN_VALUE (type_domain)) min_val = TYPE_MIN_VALUE (type_domain); - offset_int off = wi::to_offset (op01); - offset_int el_sz = wi::to_offset (TYPE_SIZE_UNIT (type)); - offset_int remainder; - off = wi::divmod_trunc (off, el_sz, SIGNED, &remainder); - if (remainder == 0 && TREE_CODE (min_val) == INTEGER_CST) + poly_uint64 type_size, index; + if (poly_int_tree_p (min_val) + && poly_int_tree_p (TYPE_SIZE_UNIT (type), &type_size) + && multiple_p (const_op01, type_size, &index)) { - off = off + wi::to_offset (min_val); + poly_offset_int off = index + wi::to_poly_offset (min_val); op01 = wide_int_to_tree (sizetype, off); return build4_loc (loc, ARRAY_REF, type, op00, op01, NULL_TREE, NULL_TREE); Index: gcc/ipa-icf-gimple.c =================================================================== --- gcc/ipa-icf-gimple.c 2018-01-12 16:56:29.267928000 +0000 +++ gcc/ipa-icf-gimple.c 2018-05-30 07:26:50.935554783 +0100 @@ -463,7 +463,7 @@ func_checker::compare_operand (tree t1, return return_false_with_msg (""); /* Type of the offset on MEM_REF does not matter. */ - return wi::to_offset (y1) == wi::to_offset (y2); + return known_eq (wi::to_poly_offset (y1), wi::to_poly_offset (y2)); } case COMPONENT_REF: { Index: gcc/ipa-icf.c =================================================================== --- gcc/ipa-icf.c 2018-05-22 13:22:01.883333074 +0100 +++ gcc/ipa-icf.c 2018-05-30 07:26:50.935554783 +0100 @@ -1983,8 +1983,8 @@ sem_variable::equals (tree t1, tree t2) /* Type of the offset on MEM_REF does not matter. 
*/ return return_with_debug (sem_variable::equals (x1, x2) - && wi::to_offset (y1) - == wi::to_offset (y2)); + && known_eq (wi::to_poly_offset (y1), + wi::to_poly_offset (y2))); } case ADDR_EXPR: case FDESC_EXPR: Index: gcc/stor-layout.c =================================================================== --- gcc/stor-layout.c 2018-03-01 08:20:43.845526342 +0000 +++ gcc/stor-layout.c 2018-05-30 07:26:50.936554770 +0100 @@ -1838,9 +1838,11 @@ compute_record_mode (tree type) /* If we only have one real field; use its mode if that mode's size matches the type's size. This only applies to RECORD_TYPE. This does not apply to unions. */ - if (TREE_CODE (type) == RECORD_TYPE && mode != VOIDmode - && tree_fits_uhwi_p (TYPE_SIZE (type)) - && known_eq (GET_MODE_BITSIZE (mode), tree_to_uhwi (TYPE_SIZE (type)))) + poly_uint64 type_size; + if (TREE_CODE (type) == RECORD_TYPE + && mode != VOIDmode + && poly_int_tree_p (TYPE_SIZE (type), &type_size) + && known_eq (GET_MODE_BITSIZE (mode), type_size)) ; else mode = mode_for_size_tree (TYPE_SIZE (type), MODE_INT, 1).else_blk (); Index: gcc/tree-ssa-sccvn.c =================================================================== --- gcc/tree-ssa-sccvn.c 2018-05-30 07:24:51.894236549 +0100 +++ gcc/tree-ssa-sccvn.c 2018-05-30 07:26:50.936554770 +0100 @@ -999,7 +999,7 @@ ao_ref_init_from_vn_reference (ao_ref *r /* And now the usual component-reference style ops. */ case BIT_FIELD_REF: - offset += wi::to_offset (op->op1); + offset += wi::to_poly_offset (op->op1); break; case COMPONENT_REF: @@ -1265,10 +1265,10 @@ vn_reference_maybe_forwprop_address (vec ptroff = gimple_assign_rhs2 (def_stmt); if (TREE_CODE (ptr) != SSA_NAME || SSA_NAME_OCCURS_IN_ABNORMAL_PHI (ptr) - || TREE_CODE (ptroff) != INTEGER_CST) + || !poly_int_tree_p (ptroff)) return false; - off += wi::to_offset (ptroff); + off += wi::to_poly_offset (ptroff); op->op0 = ptr; } Index: gcc/var-tracking.c =================================================================== --- gcc/var-tracking.c 2018-05-01 19:31:03.074312721 +0100 +++ gcc/var-tracking.c 2018-05-30 07:26:50.937554757 +0100 @@ -8665,7 +8665,6 @@ emit_note_insn_var_location (variable ** bool complete; enum var_init_status initialized = VAR_INIT_STATUS_UNINITIALIZED; HOST_WIDE_INT last_limit; - tree type_size_unit; HOST_WIDE_INT offsets[MAX_VAR_PARTS]; rtx loc[MAX_VAR_PARTS]; tree decl; @@ -8816,8 +8815,9 @@ emit_note_insn_var_location (variable ** } ++n_var_parts; } - type_size_unit = TYPE_SIZE_UNIT (TREE_TYPE (decl)); - if ((unsigned HOST_WIDE_INT) last_limit < TREE_INT_CST_LOW (type_size_unit)) + poly_uint64 type_size_unit + = tree_to_poly_uint64 (TYPE_SIZE_UNIT (TREE_TYPE (decl))); + if (maybe_lt (poly_uint64 (last_limit), type_size_unit)) complete = false; if (! flag_var_tracking_uninit)
Index: gcc/alias.c =================================================================== --- gcc/alias.c 2018-01-09 18:26:49.865693173 +0000 +++ gcc/alias.c 2018-01-09 18:37:03.806269394 +0000 @@ -2685,22 +2685,22 @@ adjust_offset_for_component_ref (tree x, { tree xoffset = component_ref_field_offset (x); tree field = TREE_OPERAND (x, 1); - if (TREE_CODE (xoffset) != INTEGER_CST) + if (!poly_int_tree_p (xoffset)) { *known_p = false; return; } - offset_int woffset - = (wi::to_offset (xoffset) + poly_offset_int woffset + = (wi::to_poly_offset (xoffset) + (wi::to_offset (DECL_FIELD_BIT_OFFSET (field)) - >> LOG2_BITS_PER_UNIT)); - if (!wi::fits_uhwi_p (woffset)) + >> LOG2_BITS_PER_UNIT) + + *offset); + if (!woffset.to_shwi (offset)) { *known_p = false; return; } - *offset += woffset.to_uhwi (); x = TREE_OPERAND (x, 0); } Index: gcc/expr.c =================================================================== --- gcc/expr.c 2018-01-09 18:26:49.869693013 +0000 +++ gcc/expr.c 2018-01-09 18:37:03.807269355 +0000 @@ -4911,7 +4911,7 @@ get_bit_range (poly_uint64_pod *bitstart else *bitstart = *bitpos - bitoffset; - *bitend = *bitstart + tree_to_uhwi (DECL_SIZE (repr)) - 1; + *bitend = *bitstart + tree_to_poly_uint64 (DECL_SIZE (repr)) - 1; } /* Returns true if ADDR is an ADDR_EXPR of a DECL that does not reside @@ -6518,12 +6518,10 @@ store_constructor (tree exp, rtx target, continue; mode = TYPE_MODE (elttype); - if (mode == BLKmode) - bitsize = (tree_fits_uhwi_p (TYPE_SIZE (elttype)) - ? tree_to_uhwi (TYPE_SIZE (elttype)) - : -1); - else + if (mode != BLKmode) bitsize = GET_MODE_BITSIZE (mode); + else if (!poly_int_tree_p (TYPE_SIZE (elttype), &bitsize)) + bitsize = -1; if (index != NULL_TREE && TREE_CODE (index) == RANGE_EXPR) { @@ -10289,11 +10287,11 @@ expand_expr_real_1 (tree exp, rtx target { poly_int64 offset = mem_ref_offset (exp).force_shwi (); base = TREE_OPERAND (base, 0); + poly_uint64 type_size; if (known_eq (offset, 0) && !reverse - && tree_fits_uhwi_p (TYPE_SIZE (type)) - && known_eq (GET_MODE_BITSIZE (DECL_MODE (base)), - tree_to_uhwi (TYPE_SIZE (type)))) + && poly_int_tree_p (TYPE_SIZE (type), &type_size) + && known_eq (GET_MODE_BITSIZE (DECL_MODE (base)), type_size)) return expand_expr (build1 (VIEW_CONVERT_EXPR, type, base), target, tmode, modifier); if (TYPE_MODE (type) == BLKmode) Index: gcc/function.c =================================================================== --- gcc/function.c 2018-01-03 21:42:44.561647089 +0000 +++ gcc/function.c 2018-01-09 18:37:03.809269278 +0000 @@ -976,25 +976,26 @@ assign_temp (tree type_or_decl, int memo if (mode == BLKmode || memory_required) { - HOST_WIDE_INT size = int_size_in_bytes (type); + poly_int64 size; rtx tmp; - /* Zero sized arrays are GNU C extension. Set size to 1 to avoid - problems with allocating the stack space. */ - if (size == 0) - size = 1; - /* Unfortunately, we don't yet know how to allocate variable-sized temporaries. However, sometimes we can find a fixed upper limit on the size, so try that instead. */ - else if (size == -1) + if (!poly_int_tree_p (TYPE_SIZE_UNIT (type), &size)) size = max_int_size_in_bytes (type); + /* Zero sized arrays are a GNU C extension. Set size to 1 to avoid + problems with allocating the stack space. */ + if (known_eq (size, 0)) + size = 1; + /* The size of the temporary may be too large to fit into an integer. */ /* ??? Not sure this should happen except for user silliness, so limit this to things that aren't compiler-generated temporaries. 
The rest of the time we'll die in assign_stack_temp_for_type. */ - if (decl && size == -1 + if (decl + && !known_size_p (size) && TREE_CODE (TYPE_SIZE_UNIT (type)) == INTEGER_CST) { error ("size of variable %q+D is too large", decl); Index: gcc/fold-const.c =================================================================== --- gcc/fold-const.c 2018-01-09 15:46:34.702438740 +0000 +++ gcc/fold-const.c 2018-01-09 18:37:03.808269317 +0000 @@ -1605,10 +1605,10 @@ const_binop (enum tree_code code, tree t return NULL_TREE; case POINTER_DIFF_EXPR: - if (TREE_CODE (arg1) == INTEGER_CST && TREE_CODE (arg2) == INTEGER_CST) + if (poly_int_tree_p (arg1) && poly_int_tree_p (arg2)) { - offset_int res = wi::sub (wi::to_offset (arg1), - wi::to_offset (arg2)); + poly_offset_int res = (wi::to_poly_offset (arg1) + - wi::to_poly_offset (arg2)); return force_fit_type (type, res, 1, TREE_OVERFLOW (arg1) | TREE_OVERFLOW (arg2)); } @@ -14070,7 +14070,6 @@ fold_indirect_ref_1 (location_t loc, tre { tree sub = op0; tree subtype; - poly_uint64 const_op01; STRIP_NOPS (sub); subtype = TREE_TYPE (sub); @@ -14125,7 +14124,7 @@ fold_indirect_ref_1 (location_t loc, tre } if (TREE_CODE (sub) == POINTER_PLUS_EXPR - && poly_int_tree_p (TREE_OPERAND (sub, 1), &const_op01)) + && poly_int_tree_p (TREE_OPERAND (sub, 1))) { tree op00 = TREE_OPERAND (sub, 0); tree op01 = TREE_OPERAND (sub, 1); @@ -14145,9 +14144,11 @@ fold_indirect_ref_1 (location_t loc, tre poly_uint64 max_offset = (tree_to_uhwi (part_width) / BITS_PER_UNIT * TYPE_VECTOR_SUBPARTS (op00type)); - if (known_lt (const_op01, max_offset)) + if (known_in_range_p (wi::to_poly_offset (op01), 0, max_offset)) { - tree index = bitsize_int (const_op01 * BITS_PER_UNIT); + tree index = wide_int_to_tree (bitsizetype, + wi::to_poly_offset (op01) + * BITS_PER_UNIT); return fold_build3_loc (loc, BIT_FIELD_REF, type, op00, part_width, index); @@ -14158,7 +14159,7 @@ fold_indirect_ref_1 (location_t loc, tre && type == TREE_TYPE (op00type)) { if (known_eq (wi::to_poly_offset (TYPE_SIZE_UNIT (type)), - const_op01)) + wi::to_poly_offset (op01))) return fold_build1_loc (loc, IMAGPART_EXPR, type, op00); } /* ((foo *)&fooarray)[1] => fooarray[1] */ @@ -14169,13 +14170,13 @@ fold_indirect_ref_1 (location_t loc, tre tree min = size_zero_node; if (type_domain && TYPE_MIN_VALUE (type_domain)) min = TYPE_MIN_VALUE (type_domain); - offset_int off = wi::to_offset (op01); - offset_int el_sz = wi::to_offset (TYPE_SIZE_UNIT (type)); - offset_int remainder; - off = wi::divmod_trunc (off, el_sz, SIGNED, &remainder); - if (remainder == 0 && TREE_CODE (min) == INTEGER_CST) + poly_offset_int off; + if (poly_int_tree_p (min) + && multiple_p (wi::to_poly_offset (op01), + wi::to_poly_offset (TYPE_SIZE_UNIT (type)), + &off)) { - off = off + wi::to_offset (min); + off = off + wi::to_poly_offset (min); op01 = wide_int_to_tree (sizetype, off); return build4_loc (loc, ARRAY_REF, type, op00, op01, NULL_TREE, NULL_TREE); Index: gcc/gimplify.c =================================================================== --- gcc/gimplify.c 2018-01-03 11:12:58.585650503 +0000 +++ gcc/gimplify.c 2018-01-09 18:37:03.810269239 +0000 @@ -702,7 +702,7 @@ gimple_add_tmp_var_fn (struct function * /* Later processing assumes that the object size is constant, which might not be true at this point. Force the use of a constant upper bound in this case. 
*/ - if (!tree_fits_uhwi_p (DECL_SIZE_UNIT (tmp))) + if (!tree_fits_poly_uint64_p (DECL_SIZE_UNIT (tmp))) force_constant_size (tmp); DECL_CONTEXT (tmp) = fn->decl; @@ -721,7 +721,7 @@ gimple_add_tmp_var (tree tmp) /* Later processing assumes that the object size is constant, which might not be true at this point. Force the use of a constant upper bound in this case. */ - if (!tree_fits_uhwi_p (DECL_SIZE_UNIT (tmp))) + if (!tree_fits_poly_uint64_p (DECL_SIZE_UNIT (tmp))) force_constant_size (tmp); DECL_CONTEXT (tmp) = current_function_decl; Index: gcc/ipa-icf-gimple.c =================================================================== --- gcc/ipa-icf-gimple.c 2018-01-03 11:12:56.128747274 +0000 +++ gcc/ipa-icf-gimple.c 2018-01-09 18:37:03.810269239 +0000 @@ -463,7 +463,7 @@ func_checker::compare_operand (tree t1, return return_false_with_msg (""); /* Type of the offset on MEM_REF does not matter. */ - return wi::to_offset (y1) == wi::to_offset (y2); + return known_eq (wi::to_poly_offset (y1), wi::to_poly_offset (y2)); } case COMPONENT_REF: { Index: gcc/ipa-icf.c =================================================================== --- gcc/ipa-icf.c 2018-01-04 09:45:27.378644080 +0000 +++ gcc/ipa-icf.c 2018-01-09 18:37:03.811269201 +0000 @@ -1979,8 +1979,8 @@ sem_variable::equals (tree t1, tree t2) /* Type of the offset on MEM_REF does not matter. */ return return_with_debug (sem_variable::equals (x1, x2) - && wi::to_offset (y1) - == wi::to_offset (y2)); + && known_eq (wi::to_poly_offset (y1), + wi::to_poly_offset (y2))); } case ADDR_EXPR: case FDESC_EXPR: Index: gcc/stor-layout.c =================================================================== --- gcc/stor-layout.c 2018-01-09 15:46:34.647440890 +0000 +++ gcc/stor-layout.c 2018-01-09 18:37:03.811269201 +0000 @@ -1773,9 +1773,11 @@ compute_record_mode (tree type) /* If we only have one real field; use its mode if that mode's size matches the type's size. This only applies to RECORD_TYPE. This does not apply to unions. 
*/ - if (TREE_CODE (type) == RECORD_TYPE && mode != VOIDmode - && tree_fits_uhwi_p (TYPE_SIZE (type)) - && known_eq (GET_MODE_BITSIZE (mode), tree_to_uhwi (TYPE_SIZE (type)))) + poly_uint64 type_size; + if (TREE_CODE (type) == RECORD_TYPE + && mode != VOIDmode + && poly_int_tree_p (TYPE_SIZE (type), &type_size) + && known_eq (GET_MODE_BITSIZE (mode), type_size)) ; else mode = mode_for_size_tree (TYPE_SIZE (type), MODE_INT, 1).else_blk (); Index: gcc/tree-vectorizer.c =================================================================== --- gcc/tree-vectorizer.c 2018-01-09 15:46:33.713477419 +0000 +++ gcc/tree-vectorizer.c 2018-01-09 18:37:03.813269123 +0000 @@ -1015,12 +1015,13 @@ static unsigned get_vec_alignment_for_ty get_vec_alignment_for_array_type (tree type) { gcc_assert (TREE_CODE (type) == ARRAY_TYPE); + poly_uint64 array_size, vector_size; tree vectype = get_vectype_for_scalar_type (strip_array_types (type)); if (!vectype - || !TYPE_SIZE (type) - || TREE_CODE (TYPE_SIZE (type)) != INTEGER_CST - || tree_int_cst_lt (TYPE_SIZE (type), TYPE_SIZE (vectype))) + || !poly_int_tree_p (TYPE_SIZE (type), &array_size) + || !poly_int_tree_p (TYPE_SIZE (vectype), &vector_size) + || maybe_lt (array_size, vector_size)) return 0; return TYPE_ALIGN (vectype); Index: gcc/tree-predcom.c =================================================================== --- gcc/tree-predcom.c 2018-01-03 11:12:58.631648690 +0000 +++ gcc/tree-predcom.c 2018-01-09 18:37:03.811269201 +0000 @@ -680,7 +680,7 @@ aff_combination_dr_offset (struct data_r tree_to_aff_combination_expand (DR_OFFSET (dr), type, offset, &name_expansions); - aff_combination_const (&delta, type, wi::to_widest (DR_INIT (dr))); + aff_combination_const (&delta, type, wi::to_poly_widest (DR_INIT (dr))); aff_combination_add (offset, &delta); } Index: gcc/tree-ssa-sccvn.c =================================================================== --- gcc/tree-ssa-sccvn.c 2018-01-03 11:12:56.829719677 +0000 +++ gcc/tree-ssa-sccvn.c 2018-01-09 18:37:03.812269162 +0000 @@ -999,7 +999,7 @@ ao_ref_init_from_vn_reference (ao_ref *r /* And now the usual component-reference style ops. */ case BIT_FIELD_REF: - offset += wi::to_offset (op->op1); + offset += wi::to_poly_offset (op->op1); break; case COMPONENT_REF: @@ -1262,10 +1262,10 @@ vn_reference_maybe_forwprop_address (vec ptr = gimple_assign_rhs1 (def_stmt); ptroff = gimple_assign_rhs2 (def_stmt); if (TREE_CODE (ptr) != SSA_NAME - || TREE_CODE (ptroff) != INTEGER_CST) + || !poly_int_tree_p (ptroff)) return false; - off += wi::to_offset (ptroff); + off += wi::to_poly_offset (ptroff); op->op0 = ptr; } Index: gcc/tree-vect-data-refs.c =================================================================== --- gcc/tree-vect-data-refs.c 2018-01-09 15:46:34.647440890 +0000 +++ gcc/tree-vect-data-refs.c 2018-01-09 18:37:03.813269123 +0000 @@ -2227,9 +2227,9 @@ vect_find_same_alignment_drs (struct dat return; /* Two references with distance zero have the same alignment. */ - offset_int diff = (wi::to_offset (DR_INIT (dra)) - - wi::to_offset (DR_INIT (drb))); - if (diff != 0) + poly_offset_int diff = (wi::to_poly_offset (DR_INIT (dra)) + - wi::to_poly_offset (DR_INIT (drb))); + if (maybe_ne (diff, 0)) { /* Get the wider of the two alignments. */ unsigned int align_a = (vect_calculate_target_alignment (dra) @@ -2239,7 +2239,7 @@ vect_find_same_alignment_drs (struct dat unsigned int max_align = MAX (align_a, align_b); /* Require the gap to be a multiple of the larger vector alignment. 
*/ - if (!wi::multiple_of_p (diff, max_align, SIGNED)) + if (!multiple_p (diff, max_align)) return; } @@ -2475,6 +2475,7 @@ vect_analyze_group_access_1 (struct data gimple *prev = stmt; HOST_WIDE_INT diff, gaps = 0; + /* By construction, all group members have INTEGER_CST DR_INITs. */ while (next) { /* Skip same data-refs. In case that two or more stmts share @@ -2864,6 +2865,11 @@ vect_analyze_data_ref_accesses (vec_info TREE_TYPE (DR_REF (drb)))) break; + /* Check that the DR_INITs are compile-time constants. */ + if (TREE_CODE (DR_INIT (dra)) != INTEGER_CST + || TREE_CODE (DR_INIT (drb)) != INTEGER_CST) + break; + /* Sorting has ensured that DR_INIT (dra) <= DR_INIT (drb). */ HOST_WIDE_INT init_a = TREE_INT_CST_LOW (DR_INIT (dra)); HOST_WIDE_INT init_b = TREE_INT_CST_LOW (DR_INIT (drb)); Index: gcc/var-tracking.c =================================================================== --- gcc/var-tracking.c 2018-01-09 18:26:49.872692894 +0000 +++ gcc/var-tracking.c 2018-01-09 18:37:03.814269085 +0000 @@ -8662,7 +8662,6 @@ emit_note_insn_var_location (variable ** bool complete; enum var_init_status initialized = VAR_INIT_STATUS_UNINITIALIZED; HOST_WIDE_INT last_limit; - tree type_size_unit; HOST_WIDE_INT offsets[MAX_VAR_PARTS]; rtx loc[MAX_VAR_PARTS]; tree decl; @@ -8808,8 +8807,9 @@ emit_note_insn_var_location (variable ** } ++n_var_parts; } - type_size_unit = TYPE_SIZE_UNIT (TREE_TYPE (decl)); - if ((unsigned HOST_WIDE_INT) last_limit < TREE_INT_CST_LOW (type_size_unit)) + poly_uint64 type_size_unit + = tree_to_poly_uint64 (TYPE_SIZE_UNIT (TREE_TYPE (decl))); + if (maybe_lt (poly_uint64 (last_limit), type_size_unit)) complete = false; if (! flag_var_tracking_uninit)
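One other pattern worth calling out from the committed fold-const.c hunk is multiple_p, which attempts an exact division and hands back the quotient only when it succeeds, so the ARRAY_REF fold is skipped for offsets that are not on an element boundary. Here is a minimal standalone sketch of that pattern with ordinary integers standing in for poly_ints; the helper name is invented for illustration and is not GCC code.

```cpp
/* Sketch of the "attempt an exact division" pattern used via multiple_p in
   fold_indirect_ref_1: succeed only when the division is exact and return
   the quotient through an out parameter.  Illustrative names only.  */
#include <cstdint>
#include <iostream>

/* Return true and set *QUOTIENT if VALUE is an exact multiple of FACTOR.  */
static bool
exact_multiple_p (uint64_t value, uint64_t factor, uint64_t *quotient)
{
  if (factor == 0 || value % factor != 0)
    return false;
  *quotient = value / factor;
  return true;
}

int
main ()
{
  uint64_t index;
  /* A byte offset of 24 into an array of 8-byte elements is element 3,
     so a fold to an ARRAY_REF-style access is possible...  */
  if (exact_multiple_p (24, 8, &index))
    std::cout << "element " << index << '\n';
  /* ...but a byte offset of 20 is not on an element boundary, so the
     fold is not attempted.  */
  if (!exact_multiple_p (20, 8, &index))
    std::cout << "not an exact multiple\n";
  return 0;
}
```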