From patchwork Sat Mar 15 07:42:09 2025
X-Patchwork-Submitter: Michael Tokarev <mjt@tls.msk.ru>
X-Patchwork-Id: 873849
From: Michael Tokarev <mjt@tls.msk.ru>
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org, Richard Henderson, Michael Tokarev
Subject: [Stable-8.2.10 07/42] linux-user: Honor elf alignment when placing images
Date: Sat, 15 Mar 2025 10:42:09 +0300
Message-Id: <20250315074249.634718-7-mjt@tls.msk.ru>
X-Mailer: git-send-email 2.39.5

From: Richard Henderson

Most binaries don't actually depend on more than page alignment, but any
binary can request it.  Not honoring this was a bug.

This became obvious when gdb reported

    Failed to read a valid object file image from memory

when examining vdsos which are marked as needing more than page alignment.
Signed-off-by: Richard Henderson
(cherry picked from commit c81d1fafa6233448bcc2d8fcd2ba63a4ae834f3a)
Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>

diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 17cd547c0c..e1a8b102d4 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -3278,7 +3278,8 @@ static void load_elf_image(const char *image_name, const ImageSource *src,
                            char **pinterp_name)
 {
     g_autofree struct elf_phdr *phdr = NULL;
-    abi_ulong load_addr, load_bias, loaddr, hiaddr, error;
+    abi_ulong load_addr, load_bias, loaddr, hiaddr, error, align;
+    size_t reserve_size, align_size;
     int i, prot_exec;
     Error *err = NULL;
 
@@ -3362,6 +3363,9 @@ static void load_elf_image(const char *image_name, const ImageSource *src,
 
     load_addr = loaddr;
 
+    align = pow2ceil(info->alignment);
+    info->alignment = align;
+
     if (pinterp_name != NULL) {
         if (ehdr->e_type == ET_EXEC) {
             /*
@@ -3370,8 +3374,6 @@ static void load_elf_image(const char *image_name, const ImageSource *src,
              */
             probe_guest_base(image_name, loaddr, hiaddr);
         } else {
-            abi_ulong align;
-
             /*
              * The binary is dynamic, but we still need to
              * select guest_base.  In this case we pass a size.
@@ -3389,10 +3391,7 @@ static void load_elf_image(const char *image_name, const ImageSource *src,
              * Since we do not have complete control over the guest
              * address space, we prefer the kernel to choose some address
              * rather than force the use of LOAD_ADDR via MAP_FIXED.
-             * But without MAP_FIXED we cannot guarantee alignment,
-             * only suggest it.
              */
-            align = pow2ceil(info->alignment);
             if (align) {
                 load_addr &= -align;
             }
@@ -3416,13 +3415,35 @@ static void load_elf_image(const char *image_name, const ImageSource *src,
      * In both cases, we will overwrite pages in this range with mappings
      * from the executable.
      */
-    load_addr = target_mmap(load_addr, (size_t)hiaddr - loaddr + 1, PROT_NONE,
+    reserve_size = (size_t)hiaddr - loaddr + 1;
+    align_size = reserve_size;
+
+    if (ehdr->e_type != ET_EXEC && align > qemu_real_host_page_size()) {
+        align_size += align - 1;
+    }
+
+    load_addr = target_mmap(load_addr, align_size, PROT_NONE,
                             MAP_PRIVATE | MAP_ANON | MAP_NORESERVE |
                             (ehdr->e_type == ET_EXEC ? MAP_FIXED_NOREPLACE : 0),
                             -1, 0);
     if (load_addr == -1) {
         goto exit_mmap;
     }
+
+    if (align_size != reserve_size) {
+        abi_ulong align_addr = ROUND_UP(load_addr, align);
+        abi_ulong align_end = align_addr + reserve_size;
+        abi_ulong load_end = load_addr + align_size;
+
+        if (align_addr != load_addr) {
+            target_munmap(load_addr, align_addr - load_addr);
+        }
+        if (align_end != load_end) {
+            target_munmap(align_end, load_end - align_end);
+        }
+        load_addr = align_addr;
+    }
+
     load_bias = load_addr - loaddr;
 
     if (elf_is_fdpic(ehdr)) {
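For reference, the same over-reserve-and-trim idea can be sketched outside of
QEMU with plain mmap()/munmap(): reserve size + align - 1 bytes of PROT_NONE
address space, round the resulting base up to the requested alignment, and
unmap the unaligned head and tail.  This is only an illustrative sketch; the
helper names (reserve_aligned, round_up) and the demo sizes are made up here
and are not part of the patch, which works on the guest address space via
target_mmap(), target_munmap() and ROUND_UP() instead.

/*
 * Standalone sketch of the reservation trick used in the last hunk above:
 * when a mapping must be aligned more strictly than mmap() guarantees
 * (host page alignment), over-reserve by (align - 1) bytes and trim the
 * unaligned slack on both sides with munmap().
 */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

/* Round addr up to the next multiple of align (align must be a power of 2). */
static uintptr_t round_up(uintptr_t addr, uintptr_t align)
{
    return (addr + align - 1) & ~(align - 1);
}

/* Reserve 'size' bytes of PROT_NONE address space aligned to 'align'. */
static void *reserve_aligned(size_t size, size_t align)
{
    size_t align_size = size + align - 1;   /* extra slack for alignment */
    uint8_t *base = mmap(NULL, align_size, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED) {
        return NULL;
    }

    uint8_t *aligned = (uint8_t *)round_up((uintptr_t)base, align);
    uint8_t *aligned_end = aligned + size;
    uint8_t *base_end = base + align_size;

    /* Give back the head and tail, keeping only [aligned, aligned + size). */
    if (aligned != base) {
        munmap(base, aligned - base);
    }
    if (aligned_end != base_end) {
        munmap(aligned_end, base_end - aligned_end);
    }
    return aligned;
}

int main(void)
{
    size_t align = 64 * 1024;    /* e.g. a 64 KiB p_align request */
    size_t size = 3 * 4096;      /* arbitrary image size for the demo */
    void *p = reserve_aligned(size, align);

    if (!p) {
        perror("mmap");
        return 1;
    }
    printf("reserved %zu bytes at %p (aligned to %zu)\n", size, p, align);
    munmap(p, size);
    return 0;
}

As in the hunk above, the extra slack is only needed when the requested
alignment exceeds what mmap() already guarantees, which is why the patch
gates it on align > qemu_real_host_page_size() for the ET_DYN case.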