From patchwork Wed Nov 11 16:10:52 2015
X-Patchwork-Submitter: Kyrylo Tkachov
X-Patchwork-Id: 56387
Message-ID: <5643688C.1020406@arm.com>
Date: Wed, 11 Nov 2015 16:10:52 +0000
From: Kyrill Tkachov
To: GCC Patches
CC: Ramana Radhakrishnan, Richard Earnshaw
Subject: [PATCH][ARM] Do not expand movmisalign pattern if not in 32-bit mode

Hi all,

The attached testcase ICEs when compiled with -march=armv6k -mthumb -Os or
any -march for which -mthumb gives Thumb1:

error: unrecognizable insn:
 }
 ^
(insn 13 12 14 5 (set (reg:SI 116 [ x ])
        (unspec:SI [
                (mem:SI (reg/v/f:SI 112 [ s ]) [0 MEM[(unsigned char *)s_1(D)]+0 S4 A8])
            ] UNSPEC_UNALIGNED_LOAD)) besttry.c:9 -1
     (nil))

The problem is that the midend expands a movmisalign pattern, but the
resulting unaligned loads don't match any define_insn because those are
gated on unaligned_access && TARGET_32BIT, while the movmisalign expander
is gated only on unaligned_access.  This small patch fixes the issue by
turning off unaligned_access when TARGET_32BIT is false (see the
illustrative sketch before the patch below).  We can then remove
TARGET_32BIT from the conditions of the unaligned load/store patterns as
a cleanup.

Bootstrapped and tested on arm-none-linux-gnueabihf.

Ok for trunk?

Thanks,
Kyrill

2015-11-11  Kyrylo Tkachov

    * config/arm/arm.c (arm_option_override): Require TARGET_32BIT
    for unaligned_access.
    * config/arm/arm.md (unaligned_loadsi): Remove redundant TARGET_32BIT
    from matching condition.
    (unaligned_loadhis): Likewise.
    (unaligned_loadhiu): Likewise.
    (unaligned_storesi): Likewise.
    (unaligned_storehi): Likewise.

2015-11-11  Kyrylo Tkachov

    * gcc.target/arm/armv6-unaligned-load-ice.c: New test.

commit 3b1e68a9f7fadeeb6d7f201ce2291bf2286a4d63
Author: Kyrylo Tkachov
Date:   Tue Nov 10 13:48:17 2015 +0000

    [ARM] Do not expand movmisalign pattern if not in 32-bit mode
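For reference, here is a stand-alone paraphrase of what the new
default-setting logic in arm_option_override amounts to.  This is an
illustrative sketch only, not GCC source: the helper name and its
parameters are invented for the example, whereas the real code reads the
global unaligned_access, TARGET_32BIT and arm_arch* flags.

/* Illustrative paraphrase of the patched default in arm_option_override.
   Not GCC source; a value of 2 means "-munaligned-access was not given
   on the command line".  */

#include <stdio.h>

static int
resolve_unaligned_access (int unaligned_access, int target_32bit,
                          int arm_arch6, int arm_arch_notm, int arm_arch7)
{
  if (unaligned_access == 2)
    {
      /* Default -munaligned-access on only when compiling for a 32-bit
         ISA (ARM or Thumb2) on an architecture that supports it; the
         unaligned load/store insn patterns are only available for
         TARGET_32BIT, so Thumb1 keeps the option off.  */
      if (target_32bit && arm_arch6 && (arm_arch_notm || arm_arch7))
        unaligned_access = 1;
      else
        unaligned_access = 0;
    }
  return unaligned_access;
}

int
main (void)
{
  /* -march=armv6k -mthumb selects Thumb1, i.e. not TARGET_32BIT: off.  */
  printf ("armv6k -mthumb: %d\n", resolve_unaligned_access (2, 0, 1, 1, 0));
  /* -march=armv6k -marm is a 32-bit ISA: on.  */
  printf ("armv6k -marm:   %d\n", resolve_unaligned_access (2, 1, 1, 1, 0));
  return 0;
}

With -march=armv6k -mthumb, unaligned_access now defaults to off, so the
movmisalign expander is never used for Thumb1 and the ICE goes away.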
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 6a0994e..4708a12 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -3436,7 +3436,8 @@ arm_option_override (void)
     }
 
   /* Enable -munaligned-access by default for
-     - all ARMv6 architecture-based processors
+     - all ARMv6 architecture-based processors when compiling for a 32-bit ISA
+       i.e. Thumb2 and ARM state only.
      - ARMv7-A, ARMv7-R, and ARMv7-M architecture-based processors.
      - ARMv8 architecture-base processors.
 
@@ -3446,7 +3447,7 @@ arm_option_override (void)
 
   if (unaligned_access == 2)
     {
-      if (arm_arch6 && (arm_arch_notm || arm_arch7))
+      if (TARGET_32BIT && arm_arch6 && (arm_arch_notm || arm_arch7))
 	unaligned_access = 1;
       else
 	unaligned_access = 0;
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index ab48873..090a287 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -4266,7 +4266,7 @@ (define_insn "unaligned_loadsi"
   [(set (match_operand:SI 0 "s_register_operand" "=l,r")
 	(unspec:SI [(match_operand:SI 1 "memory_operand" "Uw,m")]
 		   UNSPEC_UNALIGNED_LOAD))]
-  "unaligned_access && TARGET_32BIT"
+  "unaligned_access"
   "ldr%?\t%0, %1\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
@@ -4279,7 +4279,7 @@ (define_insn "unaligned_loadhis"
 	(sign_extend:SI
 	  (unspec:HI [(match_operand:HI 1 "memory_operand" "Uw,Uh")]
 		     UNSPEC_UNALIGNED_LOAD)))]
-  "unaligned_access && TARGET_32BIT"
+  "unaligned_access"
   "ldrsh%?\t%0, %1\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
@@ -4292,7 +4292,7 @@ (define_insn "unaligned_loadhiu"
 	(zero_extend:SI
 	  (unspec:HI [(match_operand:HI 1 "memory_operand" "Uw,m")]
 		     UNSPEC_UNALIGNED_LOAD)))]
-  "unaligned_access && TARGET_32BIT"
+  "unaligned_access"
   "ldrh%?\t%0, %1\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
@@ -4304,7 +4304,7 @@ (define_insn "unaligned_storesi"
   [(set (match_operand:SI 0 "memory_operand" "=Uw,m")
 	(unspec:SI [(match_operand:SI 1 "s_register_operand" "l,r")]
 		   UNSPEC_UNALIGNED_STORE))]
-  "unaligned_access && TARGET_32BIT"
+  "unaligned_access"
  "str%?\t%1, %0\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
@@ -4316,7 +4316,7 @@ (define_insn "unaligned_storehi"
   [(set (match_operand:HI 0 "memory_operand" "=Uw,m")
 	(unspec:HI [(match_operand:HI 1 "s_register_operand" "l,r")]
 		   UNSPEC_UNALIGNED_STORE))]
-  "unaligned_access && TARGET_32BIT"
+  "unaligned_access"
   "strh%?\t%1, %0\t@ unaligned"
   [(set_attr "arch" "t2,any")
    (set_attr "length" "2,4")
diff --git a/gcc/testsuite/gcc.target/arm/armv6-unaligned-load-ice.c b/gcc/testsuite/gcc.target/arm/armv6-unaligned-load-ice.c
new file mode 100644
index 0000000..88528f1
--- /dev/null
+++ b/gcc/testsuite/gcc.target/arm/armv6-unaligned-load-ice.c
@@ -0,0 +1,18 @@
+/* { dg-do compile } */
+/* { dg-skip-if "avoid conflicting multilib options" { *-*-* } { "-march=*" } { "-march=armv6k" } } */
+/* { dg-skip-if "avoid conflicting multilib options" { *-*-* } { "-marm" } { "" } } */
+/* { dg-options "-mthumb -Os -mfloat-abi=softfp" } */
+/* { dg-add-options arm_arch_v6k } */
+
+long
+get_number (char *s, long size, int unsigned_p)
+{
+  long x;
+  unsigned char *p = (unsigned char *) s;
+  switch (size)
+    {
+    case 4:
+      x = ((long) p[3] << 24) | ((long) p[2] << 16) | (p[1] << 8) | p[0];
+      return x;
+    }
+}