From: Eric Botcazou
To: gcc-patches@gcc.gnu.org
Subject: [LRA] Fix PR rtl-optimization/79032
Date: Tue, 10 Jan 2017 21:31:46 +0100
Message-ID: <34515865.Hv3Ka6M9TT@polaris>
Hi,

LRA generates an unaligned memory access for 32-bit SPARC on the attached
testcase when it is compiled with optimization.  It's again the business of
paradoxical subregs of memory, dealt with by simplify_operand_subreg:

      /* If we change the address for a paradoxical subreg of memory, the
	 address might violate the necessary alignment or the access might
	 be slow.  So take this into consideration.  We need not worry
	 about accesses beyond allocated memory for paradoxical memory
	 subregs as we don't substitute such equiv memory (see processing
	 equivalences in function lra_constraints) and because for spilled
	 pseudos we allocate stack memory enough for the biggest
	 corresponding paradoxical subreg.  */
      if (!(MEM_ALIGN (reg) < GET_MODE_ALIGNMENT (mode)
	    && SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (reg)))
	  || (MEM_ALIGN (reg) < GET_MODE_ALIGNMENT (innermode)
	      && SLOW_UNALIGNED_ACCESS (innermode, MEM_ALIGN (reg))))
	return true;

However the code contains a small inaccuracy: in the first branch of the
condition, it tests the old MEM (reg), which has mode INNERMODE, instead of
the new MEM (subst), which has mode MODE.  That's benign for little-endian
targets, since the address offset doesn't change there, but not for
big-endian ones, where the offset changes and can therefore change the
alignment as well.

The attached fix was bootstrapped/regtested on SPARC/Solaris, OK for
mainline?


2017-01-10  Eric Botcazou

	PR rtl-optimization/79032
	* lra-constraints.c (simplify_operand_subreg): In the MEM case, test
	the alignment of the adjusted memory reference against that of MODE,
	instead of the alignment of the original memory reference.


2017-01-10  Eric Botcazou

	* gcc.c-torture/execute/20170110-1.c: New test.

-- 
Eric Botcazou

Index: lra-constraints.c
===================================================================
--- lra-constraints.c	(revision 244194)
+++ lra-constraints.c	(working copy)
@@ -1505,15 +1505,15 @@ simplify_operand_subreg (int nop, machin
 				      MEM_ADDR_SPACE (subst))))
 	{
 	  /* If we change the address for a paradoxical subreg of memory, the
-	     address might violate the necessary alignment or the access might
-	     be slow.  So take this into consideration.  We need not worry
+	     new address might violate the necessary alignment or the access
+	     might be slow; take this into consideration.  We need not worry
 	     about accesses beyond allocated memory for paradoxical memory
 	     subregs as we don't substitute such equiv memory (see processing
 	     equivalences in function lra_constraints) and because for spilled
 	     pseudos we allocate stack memory enough for the biggest
 	     corresponding paradoxical subreg.  */
-	  if (!(MEM_ALIGN (reg) < GET_MODE_ALIGNMENT (mode)
-		&& SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (reg)))
+	  if (!(MEM_ALIGN (subst) < GET_MODE_ALIGNMENT (mode)
+		&& SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (subst)))
 	      || (MEM_ALIGN (reg) < GET_MODE_ALIGNMENT (innermode)
 		  && SLOW_UNALIGNED_ACCESS (innermode, MEM_ALIGN (reg))))
 	    return true;
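
P.S.: to make the endianness point concrete, here is a minimal standalone
sketch (not GCC code; the helper adjusted_address and the sample addresses
are made up for illustration) of how the byte address of the MEM substituted
for a paradoxical subreg moves, assuming a target that is big-endian for
both words and bytes, as SPARC is:

#include <stdio.h>

/* Illustrative sketch, not GCC code: when a paradoxical subreg (outer
   mode wider than inner mode) of a MEM is resolved, little-endian
   targets keep the address unchanged, while big-endian targets move
   it down by the size difference, so the alignment of the new address
   can differ from that of the old one.  */
static unsigned long
adjusted_address (unsigned long addr, unsigned inner_size,
		  unsigned outer_size, int big_endian)
{
  return big_endian ? addr - (outer_size - inner_size) : addr;
}

int
main (void)
{
  /* A 2-byte (HImode-like) slot at a 2-aligned address, accessed
     through a 4-byte (SImode-like) paradoxical subreg.  The residue
     mod 4 of the new address differs between the two endiannesses,
     which is why the alignment test must look at the adjusted MEM
     (subst) and MODE, not at the original MEM (reg).  */
  unsigned long a = 0x1006;
  printf ("little-endian: %#lx (mod 4 = %lu)\n",
	  adjusted_address (a, 2, 4, 0), adjusted_address (a, 2, 4, 0) % 4);
  printf ("big-endian:    %#lx (mod 4 = %lu)\n",
	  adjusted_address (a, 2, 4, 1), adjusted_address (a, 2, 4, 1) % 4);
  return 0;
}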