From patchwork Tue May 13 16:34:22 2025
From: Fuad Tabba <tabba@google.com>
Date: Tue, 13 May 2025 17:34:22 +0100
Message-ID: <20250513163438.3942405-2-tabba@google.com>
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Subject: [PATCH v9 01/17] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

The option KVM_PRIVATE_MEM enables guest_memfd in general. Subsequent
patches add shared memory support to guest_memfd. Therefore, rename it
to KVM_GMEM to make its purpose clearer.
Reviewed-by: Ira Weiny
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 include/linux/kvm_host.h        | 10 +++++-----
 virt/kvm/Kconfig                |  8 ++++----
 virt/kvm/Makefile.kvm           |  2 +-
 virt/kvm/kvm_main.c             |  4 ++--
 virt/kvm/kvm_mm.h               |  4 ++--
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7bc174a1f1cb..52f6f6d08558 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2253,7 +2253,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 		       int tdp_max_root_level, int tdp_huge_page_level);
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 #define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
 #else
 #define kvm_arch_has_private_mem(kvm) false
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 291d49b9bf05..d6900995725d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -601,7 +601,7 @@ struct kvm_memory_slot {
 	short id;
 	u16 as_id;
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	struct {
 		/*
 		 * Writes protected by kvm->slots_lock. Acquiring a
@@ -722,7 +722,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
  * Arch code must define kvm_arch_has_private_mem if support for private memory
  * is enabled.
  */
-#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_PRIVATE_MEM)
+#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
 static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
 {
 	return false;
@@ -2504,7 +2504,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
-	return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&
+	return IS_ENABLED(CONFIG_KVM_GMEM) &&
 	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
@@ -2514,7 +2514,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 }
 #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
 		     int *max_order);
@@ -2527,7 +2527,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 727b542074e7..49df4e32bff7 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -112,19 +112,19 @@ config KVM_GENERIC_MEMORY_ATTRIBUTES
 	depends on KVM_GENERIC_MMU_NOTIFIER
 	bool
 
-config KVM_PRIVATE_MEM
+config KVM_GMEM
 	select XARRAY_MULTI
 	bool
 
 config KVM_GENERIC_PRIVATE_MEM
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
-	select KVM_PRIVATE_MEM
+	select KVM_GMEM
 	bool
 
 config HAVE_KVM_ARCH_GMEM_PREPARE
 	bool
-	depends on KVM_PRIVATE_MEM
+	depends on KVM_GMEM
 
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
-	depends on KVM_PRIVATE_MEM
+	depends on KVM_GMEM
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 724c89af78af..8d00918d4c8b 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -12,4 +12,4 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
-kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_memfd.o
+kvm-$(CONFIG_KVM_GMEM) += $(KVM)/guest_memfd.o
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e85b33a92624..4996cac41a8f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4842,7 +4842,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_MEMORY_ATTRIBUTES:
 		return kvm_supported_mem_attributes(kvm);
 #endif
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_has_private_mem(kvm);
 #endif
@@ -5276,7 +5276,7 @@ static long kvm_vm_ioctl(struct file *filp,
 	case KVM_GET_STATS_FD:
 		r = kvm_vm_ioctl_get_stats_fd(kvm);
 		break;
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	case KVM_CREATE_GUEST_MEMFD: {
 		struct kvm_create_guest_memfd guest_memfd;
 
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index acef3f5c582a..ec311c0d6718 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -67,7 +67,7 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 }
 #endif /* HAVE_KVM_PFNCACHE */
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 void kvm_gmem_init(struct module *module);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
@@ -91,6 +91,6 @@ static inline void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 {
 	WARN_ON_ONCE(1);
 }
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */
 
 #endif /* __KVM_MM_H__ */

From patchwork Tue May 13 16:34:23 2025
From: Fuad Tabba <tabba@google.com>
Date: Tue, 13 May 2025 17:34:23 +0100
Message-ID: <20250513163438.3942405-3-tabba@google.com>
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Subject: [PATCH v9 02/17] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
clearer.

Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
Reviewed-by: Gavin Shan
---
 arch/x86/kvm/Kconfig     | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/Kconfig         | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fe8ea8c097de..b37258253543 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -46,7 +46,7 @@ config KVM_X86
 	select HAVE_KVM_PM_NOTIFIER if PM
 	select KVM_GENERIC_HARDWARE_ENABLING
 	select KVM_GENERIC_PRE_FAULT_MEMORY
-	select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
+	select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR
 
 config KVM
@@ -145,7 +145,7 @@ config KVM_AMD_SEV
 	depends on KVM_AMD && X86_64
 	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
 	select ARCH_HAS_CC_PLATFORM
-	select KVM_GENERIC_PRIVATE_MEM
+	select KVM_GENERIC_GMEM_POPULATE
 	select HAVE_KVM_ARCH_GMEM_PREPARE
 	select HAVE_KVM_ARCH_GMEM_INVALIDATE
 	help
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d6900995725d..7ca23837fa52 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2533,7 +2533,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
 #endif
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 /**
  * kvm_gmem_populate() - Populate/prepare a GPA range with guest data
  *
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 49df4e32bff7..559c93ad90be 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -116,7 +116,7 @@ config KVM_GMEM
 	select XARRAY_MULTI
 	bool
 
-config KVM_GENERIC_PRIVATE_MEM
+config KVM_GENERIC_GMEM_POPULATE
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_GMEM
 	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..befea51bbc75 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -638,7 +638,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
 {
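For context, kvm_gmem_populate() is the helper that this renamed option
gates. A minimal sketch of how an architecture backend might drive it is
shown below; the callback signature follows the kvm_gmem_populate_cb
typedef visible in the kvm_gmem_populate() prototype above, while
arch_encrypt_page() is a made-up stand-in for an arch-specific
preparation hook (it is not a real kernel function):

	/* Hedged sketch: per-page preparation callback for kvm_gmem_populate(). */
	static int example_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
					 void __user *src, int order, void *opaque)
	{
		/* Arch-specific step, e.g. measure or encrypt the copied page. */
		return arch_encrypt_page(kvm, gfn, pfn, order);
	}

	static long example_populate_image(struct kvm *kvm, gfn_t start_gfn,
					   void __user *src, long npages)
	{
		/* Copies npages from src and runs the callback on each page. */
		return kvm_gmem_populate(kvm, start_gfn, src, npages,
					 example_post_populate, NULL);
	}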
From patchwork Tue May 13 16:34:24 2025
From: Fuad Tabba <tabba@google.com>
Date: Tue, 13 May 2025 17:34:24 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-4-tabba@google.com>
Subject: [PATCH v9 03/17] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem()
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

The function kvm_arch_has_private_mem() is used to indicate whether
guest_memfd is supported by the architecture, which until now implies
that it is private. To decouple guest_memfd support from whether the
memory is private, rename this function to kvm_arch_supports_gmem().

Reviewed-by: Ira Weiny
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 8 ++++----
 arch/x86/kvm/mmu/mmu.c          | 8 ++++----
 include/linux/kvm_host.h        | 6 +++---
 virt/kvm/kvm_main.c             | 6 +++---
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 52f6f6d08558..4a83fbae7056 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2254,9 +2254,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 		       int tdp_max_root_level, int tdp_huge_page_level);
 
 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
 #else
-#define kvm_arch_has_private_mem(kvm) false
+#define kvm_arch_supports_gmem(kvm) false
 #endif
 
 #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
@@ -2309,8 +2309,8 @@ enum {
 #define HF_SMM_INSIDE_NMI_MASK	(1 << 2)
 
 # define KVM_MAX_NR_ADDRESS_SPACES	2
-/* SMM is currently unsupported for guests with private memory. */
-# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_has_private_mem(kvm) ? 1 : 2)
+/* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */
+# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2)
 # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
 #else
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8d1b632e33d2..b66f1bf24e06 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4917,7 +4917,7 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
 	if (r)
 		return r;
 
-	if (kvm_arch_has_private_mem(vcpu->kvm) &&
+	if (kvm_arch_supports_gmem(vcpu->kvm) &&
 	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
 		error_code |= PFERR_PRIVATE_ACCESS;
 
@@ -7705,7 +7705,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 	 * Zapping SPTEs in this case ensures KVM will reassess whether or not
 	 * a hugepage can be used for affected ranges.
 	 */
-	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+	if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
 		return false;
 
 	if (WARN_ON_ONCE(range->end <= range->start))
@@ -7784,7 +7784,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 	 * a range that has PRIVATE GFNs, and conversely converting a range to
 	 * SHARED may now allow hugepages.
 	 */
-	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+	if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
 		return false;
 
 	/*
@@ -7840,7 +7840,7 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
 {
 	int level;
 
-	if (!kvm_arch_has_private_mem(kvm))
+	if (!kvm_arch_supports_gmem(kvm))
 		return;
 
 	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7ca23837fa52..6ca7279520cf 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -719,11 +719,11 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
 #endif
 
 /*
- * Arch code must define kvm_arch_has_private_mem if support for private memory
+ * Arch code must define kvm_arch_supports_gmem if support for guest_memfd
  * is enabled.
  */
-#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
-static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
+#if !defined(kvm_arch_supports_gmem) && !IS_ENABLED(CONFIG_KVM_GMEM)
+static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
 {
 	return false;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4996cac41a8f..2468d50a9ed4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1531,7 +1531,7 @@ static int check_memory_region_flags(struct kvm *kvm,
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
 
-	if (kvm_arch_has_private_mem(kvm))
+	if (kvm_arch_supports_gmem(kvm))
 		valid_flags |= KVM_MEM_GUEST_MEMFD;
 
 	/* Dirty logging private memory is not currently supported. */
@@ -2362,7 +2362,7 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
 static u64 kvm_supported_mem_attributes(struct kvm *kvm)
 {
-	if (!kvm || kvm_arch_has_private_mem(kvm))
+	if (!kvm || kvm_arch_supports_gmem(kvm))
 		return KVM_MEMORY_ATTRIBUTE_PRIVATE;
 
 	return 0;
@@ -4844,7 +4844,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #endif
 #ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
-		return !kvm || kvm_arch_has_private_mem(kvm);
+		return !kvm || kvm_arch_supports_gmem(kvm);
 #endif
 	default:
 		break;

From patchwork Tue May 13 16:34:25 2025
From: Fuad Tabba <tabba@google.com>
Date: Tue, 13 May 2025 17:34:25 +0100
Message-ID: <20250513163438.3942405-5-tabba@google.com>
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Subject: [PATCH v9 04/17] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

The bool has_private_mem is used to indicate whether guest_memfd is
supported. Rename it to supports_gmem to make its meaning clearer and
to decouple memory being private from guest_memfd.
Reviewed-by: Ira Weiny
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
Reviewed-by: Gavin Shan
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/svm/svm.c          | 4 ++--
 arch/x86/kvm/x86.c              | 3 +--
 4 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a83fbae7056..709cc2a7ba66 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1331,7 +1331,7 @@ struct kvm_arch {
 	unsigned int indirect_shadow_pages;
 	u8 mmu_valid_gen;
 	u8 vm_type;
-	bool has_private_mem;
+	bool supports_gmem;
 	bool has_protected_state;
 	bool pre_fault_allowed;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
@@ -2254,7 +2254,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 		       int tdp_max_root_level, int tdp_huge_page_level);
 
 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
 #else
 #define kvm_arch_supports_gmem(kvm) false
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b66f1bf24e06..69bf2ef22ed0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3486,7 +3486,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 	 * on RET_PF_SPURIOUS until the update completes, or an actual spurious
 	 * case might go down the slow path. Either case will resolve itself.
 	 */
-	if (kvm->arch.has_private_mem &&
+	if (kvm->arch.supports_gmem &&
 	    fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
 		return false;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a89c271a1951..a05b7dc7b717 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5110,8 +5110,8 @@ static int svm_vm_init(struct kvm *kvm)
 			(type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
 		to_kvm_sev_info(kvm)->need_init = true;
 
-		kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
-		kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+		kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
+		kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
 	}
 
 	if (!pause_filter_count || !pause_filter_thresh)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9896fd574bfc..12433b1e755b 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12716,8 +12716,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return -EINVAL;
 
 	kvm->arch.vm_type = type;
-	kvm->arch.has_private_mem =
-		(type == KVM_X86_SW_PROTECTED_VM);
+	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
 	/* Decided by the vendor code for other VM types. */
 	kvm->arch.pre_fault_allowed =
 		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;

From patchwork Tue May 13 16:34:26 2025
From: Fuad Tabba <tabba@google.com>
Date: Tue, 13 May 2025 17:34:26 +0100
Message-ID: <20250513163438.3942405-6-tabba@google.com>
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Subject: [PATCH v9 05/17] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem()
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

The function kvm_slot_can_be_private() is used to check whether a
memory slot is backed by guest_memfd. Rename it to kvm_slot_has_gmem()
to make that clearer and to decouple memory being private from
guest_memfd.
Reviewed-by: Ira Weiny
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/mmu/mmu.c   | 4 ++--
 arch/x86/kvm/svm/sev.c   | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 69bf2ef22ed0..2b6376986f96 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3283,7 +3283,7 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_can_be_private(slot) &&
+	bool is_private = kvm_slot_has_gmem(slot) &&
 			  kvm_mem_is_private(kvm, gfn);
 
 	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
@@ -4496,7 +4496,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 {
 	int max_order, r;
 
-	if (!kvm_slot_can_be_private(fault->slot)) {
+	if (!kvm_slot_has_gmem(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return -EFAULT;
 	}
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a7a7dc507336..27759ca6d2f2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2378,7 +2378,7 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	mutex_lock(&kvm->slots_lock);
 
 	memslot = gfn_to_memslot(kvm, params.gfn_start);
-	if (!kvm_slot_can_be_private(memslot)) {
+	if (!kvm_slot_has_gmem(memslot)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -4688,7 +4688,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
 	}
 
 	slot = gfn_to_memslot(kvm, gfn);
-	if (!kvm_slot_can_be_private(slot)) {
+	if (!kvm_slot_has_gmem(slot)) {
 		pr_warn_ratelimited("SEV: Unexpected RMP fault, non-private slot for GPA 0x%llx\n",
 				    gpa);
 		return;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6ca7279520cf..d9616ee6acc7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -614,7 +614,7 @@ struct kvm_memory_slot {
 #endif
 };
 
-static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
+static inline bool kvm_slot_has_gmem(const struct kvm_memory_slot *slot)
 {
 	return slot && (slot->flags & KVM_MEM_GUEST_MEMFD);
 }
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index befea51bbc75..6db515833f61 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -654,7 +654,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 		return -EINVAL;
 
 	slot = gfn_to_memslot(kvm, start_gfn);
-	if (!kvm_slot_can_be_private(slot))
+	if (!kvm_slot_has_gmem(slot))
 		return -EINVAL;
 
 	file = kvm_gmem_get_file(slot);

From patchwork Tue May 13 16:34:27 2025
From: Fuad Tabba <tabba@google.com>
Date: Tue, 13 May 2025 17:34:27 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
References: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-7-tabba@google.com>
Subject: [PATCH v9 06/17] KVM: Fix comments that refer to slots_lock
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Fix comments so that they refer to slots_lock instead of slots_locks
(remove trailing s).

Reviewed-by: David Hildenbrand
Reviewed-by: Ira Weiny
Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 2 +-
 virt/kvm/kvm_main.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9616ee6acc7..ae70e4e19700 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -859,7 +859,7 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
-	/* Protected by slots_locks (for writes) and RCU (for reads) */
+	/* Protected by slots_lock (for writes) and RCU (for reads) */
 	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468d50a9ed4..6289ea1685dd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -333,7 +333,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * All current use cases for flushing the TLBs for a specific memslot
 	 * are related to dirty logging, and many do the TLB flush out of
 	 * mmu_lock. The interaction between the various operations on memslot
-	 * must be serialized by slots_locks to ensure the TLB flush from one
+	 * must be serialized by slots_lock to ensure the TLB flush from one
 	 * operation is observed by any other operation on the same memslot.
 	 */
 	lockdep_assert_held(&kvm->slots_lock);

From patchwork Tue May 13 16:34:28 2025
From: Fuad Tabba <tabba@google.com>
From patchwork Tue May 13 16:34:28 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 890334
Date: Tue, 13 May 2025 17:34:28 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-8-tabba@google.com>
Subject: [PATCH v9 07/17] KVM: guest_memfd: Allow host to map guest_memfd() pages
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

This patch enables support for shared memory in guest_memfd, including
mapping that memory from host userspace. This support is gated by the
configuration option KVM_GMEM_SHARED_MEM, and toggled by the guest_memfd
flag GUEST_MEMFD_FLAG_SUPPORT_SHARED, which can be set when creating a
guest_memfd instance.
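For illustration, a minimal userspace sketch of the flow this enables (an
editor's sketch, not part of the patch; error handling is elided, and the VM
is assumed to be of a type for which the architecture permits shared
guest_memfd):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>

	static void *map_shared_gmem(int vm_fd, size_t size)
	{
		struct kvm_create_guest_memfd gmem = {
			.size  = size,
			.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
		};
		int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

		/*
		 * kvm_gmem_mmap() requires a shared mapping: without the
		 * flag, mmap() fails with ENODEV, and MAP_PRIVATE fails
		 * with EINVAL.
		 */
		return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			    gmem_fd, 0);
	}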
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 10 ++++
 include/linux/kvm_host.h        | 13 +++++
 include/uapi/linux/kvm.h        |  1 +
 virt/kvm/Kconfig                |  5 ++
 virt/kvm/guest_memfd.c          | 88 +++++++++++++++++++++++++++++++++
 5 files changed, 117 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 709cc2a7ba66..f72722949cae 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_KVM_GMEM
 #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
+
+/*
+ * CoCo VMs with hardware support that use guest_memfd only for backing private
+ * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
+ */
+#define kvm_arch_vm_supports_gmem_shared_mem(kvm)		\
+	(IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&		\
+	 ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM ||	\
+	  (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
 #else
 #define kvm_arch_supports_gmem(kvm) false
+#define kvm_arch_vm_supports_gmem_shared_mem(kvm) false
 #endif
 
 #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ae70e4e19700..2ec89c214978 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
 }
 #endif
 
+/*
+ * Returns true if this VM supports shared mem in guest_memfd.
+ *
+ * Arch code must define kvm_arch_vm_supports_gmem_shared_mem if support for
+ * guest_memfd is enabled.
+ */
+#if !defined(kvm_arch_vm_supports_gmem_shared_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
+static inline bool kvm_arch_vm_supports_gmem_shared_mem(struct kvm *kvm)
+{
+	return false;
+}
+#endif
+
 #ifndef kvm_arch_has_readonly_mem
 static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
 {
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index b6ae8ad8934b..9857022a0f0c 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
 #define KVM_MEMORY_ATTRIBUTE_PRIVATE           (1ULL << 3)
 
 #define KVM_CREATE_GUEST_MEMFD	_IOWR(KVMIO,  0xd4, struct kvm_create_guest_memfd)
+#define GUEST_MEMFD_FLAG_SUPPORT_SHARED	(1UL << 0)
 
 struct kvm_create_guest_memfd {
 	__u64 size;
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 559c93ad90be..f4e469a62a60 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -128,3 +128,8 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
 	depends on KVM_GMEM
+
+config KVM_GMEM_SHARED_MEM
+	select KVM_GMEM
+	bool
+	prompt "Enables in-place shared memory for guest_memfd"
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 6db515833f61..8e6d1866b55e 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -312,7 +312,88 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
 	return gfn - slot->base_gfn + slot->gmem.pgoff;
 }
 
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+
+static bool kvm_gmem_supports_shared(struct inode *inode)
+{
+	uint64_t flags = (uint64_t)inode->i_private;
+
+	return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+}
+
+static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
+{
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct folio *folio;
+	vm_fault_t ret = VM_FAULT_LOCKED;
+
+	filemap_invalidate_lock_shared(inode->i_mapping);
+
+	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
+	if (IS_ERR(folio)) {
+		int err = PTR_ERR(folio);
+
+		if (err == -EAGAIN)
+			ret = VM_FAULT_RETRY;
+		else
+			ret = vmf_error(err);
+
+		goto out_filemap;
+	}
+
+	if (folio_test_hwpoison(folio)) {
+		ret = VM_FAULT_HWPOISON;
+		goto out_folio;
+	}
+
+	if (WARN_ON_ONCE(folio_test_large(folio))) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_folio;
+	}
+
+	if (!folio_test_uptodate(folio)) {
+		clear_highpage(folio_page(folio, 0));
+		kvm_gmem_mark_prepared(folio);
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+
+out_folio:
+	if (ret != VM_FAULT_LOCKED) {
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+out_filemap:
+	filemap_invalidate_unlock_shared(inode->i_mapping);
+
+	return ret;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault_shared,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	if (!kvm_gmem_supports_shared(file_inode(file)))
+		return -ENODEV;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+#else
+#define kvm_gmem_mmap NULL
+#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap		= kvm_gmem_mmap,
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
 	.fallocate	= kvm_gmem_fallocate,
@@ -463,6 +544,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	u64 flags = args->flags;
 	u64 valid_flags = 0;
 
+	if (kvm_arch_vm_supports_gmem_shared_mem(kvm))
+		valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
 	if (flags & ~valid_flags)
 		return -EINVAL;
 
@@ -501,6 +585,10 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 	    offset + size > i_size_read(inode))
 		goto err;
 
+	if (kvm_gmem_supports_shared(inode) &&
+	    !kvm_arch_vm_supports_gmem_shared_mem(kvm))
+		goto err;
+
 	filemap_invalidate_lock(inode->i_mapping);
 
 	start = offset >> PAGE_SHIFT;
From patchwork Tue May 13 16:34:29 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889649
Date: Tue, 13 May 2025 17:34:29 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-9-tabba@google.com>
Subject: [PATCH v9 08/17] KVM: guest_memfd: Check that userspace_addr and fd+offset refer to same range
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
From: Ackerley Tng

On binding of a guest_memfd with a memslot, check that the slot's
userspace_addr and the requested fd and offset refer to the same memory
range.

This check is best-effort: nothing prevents userspace from later mapping
other memory at the address provided in slot->userspace_addr and
breaking guest operation.

Suggested-by: David Hildenbrand
Suggested-by: Sean Christopherson
Suggested-by: Yan Zhao
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 virt/kvm/guest_memfd.c | 37 ++++++++++++++++++++++++++++++++++---
 1 file changed, 34 insertions(+), 3 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 8e6d1866b55e..2f499021df66 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -556,6 +556,32 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	return __kvm_gmem_create(kvm, size, flags);
 }
 
+static bool kvm_gmem_is_same_range(struct kvm *kvm,
+				   struct kvm_memory_slot *slot,
+				   struct file *file, loff_t offset)
+{
+	struct mm_struct *mm = kvm->mm;
+	loff_t userspace_addr_offset;
+	struct vm_area_struct *vma;
+	bool ret = false;
+
+	mmap_read_lock(mm);
+
+	vma = vma_lookup(mm, slot->userspace_addr);
+	if (!vma)
+		goto out;
+
+	if (vma->vm_file != file)
+		goto out;
+
+	userspace_addr_offset = slot->userspace_addr - vma->vm_start;
+	ret = userspace_addr_offset + (vma->vm_pgoff << PAGE_SHIFT) == offset;
+out:
+	mmap_read_unlock(mm);
+
+	return ret;
+}
+
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset)
 {
@@ -585,9 +611,14 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 	    offset + size > i_size_read(inode))
 		goto err;
 
-	if (kvm_gmem_supports_shared(inode) &&
-	    !kvm_arch_vm_supports_gmem_shared_mem(kvm))
-		goto err;
+	if (kvm_gmem_supports_shared(inode)) {
+		if (!kvm_arch_vm_supports_gmem_shared_mem(kvm))
+			goto err;
+
+		if (slot->userspace_addr &&
+		    !kvm_gmem_is_same_range(kvm, slot, file, offset))
+			goto err;
+	}
 
 	filemap_invalidate_lock(inode->i_mapping);
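For orientation, an editor's sketch of a memslot layout that passes this
best-effort check, because userspace_addr is an mmap() of the same
guest_memfd at the same offset the slot is bound to (not from the patch;
error handling elided):

	#include <linux/kvm.h>
	#include <sys/ioctl.h>
	#include <sys/mman.h>

	static int bind_slot(int vm_fd, int gmem_fd, __u64 gpa, __u64 size,
			     __u64 off)
	{
		void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
				 MAP_SHARED, gmem_fd, off);

		struct kvm_userspace_memory_region2 region = {
			.slot			= 0,
			.flags			= KVM_MEM_GUEST_MEMFD,
			.guest_phys_addr	= gpa,
			.memory_size		= size,
			.userspace_addr		= (__u64)hva, /* same range as fd+off */
			.guest_memfd		= gmem_fd,
			.guest_memfd_offset	= off,
		};

		return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
	}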
From patchwork Tue May 13 16:34:30 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 890333
Date: Tue, 13 May 2025 17:34:30 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-10-tabba@google.com>
Subject: [PATCH v9 09/17] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

From: Ackerley Tng

For memslots backed by guest_memfd with shared memory support, the KVM
MMU always faults in pages from guest_memfd, and not from the
userspace_addr.

Towards this end, this patch also introduces a new guest_memfd flag,
GUEST_MEMFD_FLAG_SUPPORT_SHARED, which indicates that the guest_memfd
instance supports in-place shared memory. This flag is only supported if
the VM creating the guest_memfd instance belongs to certain types
determined by the architecture. Only non-CoCo VMs are permitted to use
guest_memfd with shared memory, for now.

Function names have also been updated for accuracy: kvm_mem_is_private()
returns true only when the current private/shared state (in the CoCo
sense) of the memory is private, and returns false if the current state
is shared explicitly or implicitly, e.g., if it belongs to a non-CoCo
VM. kvm_mmu_faultin_pfn_gmem() is updated to indicate that it can be
used to fault in not just private memory, but more generally, memory
from guest_memfd.
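In short, the pfn source for a stage-2 fault now follows this rule; the
sketch below is an editor's paraphrase of the patch's routing logic in
__kvm_mmu_faultin_pfn(), not additional kernel code:

	static bool pfn_comes_from_gmem(struct kvm_page_fault *fault)
	{
		/* Private memory always lives in guest_memfd. */
		if (fault->is_private)
			return true;

		/*
		 * Shared memory also comes from guest_memfd when the slot's
		 * guest_memfd was created with GUEST_MEMFD_FLAG_SUPPORT_SHARED;
		 * otherwise it is faulted in via the slot's userspace_addr.
		 */
		return kvm_slot_has_gmem(fault->slot) &&
		       kvm_gmem_memslot_supports_shared(fault->slot);
	}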
Co-developed-by: Fuad Tabba
Signed-off-by: Fuad Tabba
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Ackerley Tng
---
 arch/x86/kvm/mmu/mmu.c   | 33 ++++++++++++++++++---------------
 include/linux/kvm_host.h | 33 +++++++++++++++++++++++++++++++--
 virt/kvm/guest_memfd.c   | 17 +++++++++++++++++
 3 files changed, 66 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2b6376986f96..cfbb471f7c70 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4465,21 +4465,25 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
-					u8 max_level, int gmem_order)
+static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
+					    struct kvm_page_fault *fault,
+					    int order)
 {
-	u8 req_max_level;
+	u8 max_level = fault->max_level;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	max_level = min(kvm_max_level_for_order(gmem_order), max_level);
+	max_level = min(kvm_max_level_for_order(order), max_level);
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
-	if (req_max_level)
-		max_level = min(max_level, req_max_level);
+	if (fault->is_private) {
+		u8 level = kvm_x86_call(private_max_mapping_level)(kvm, fault->pfn);
+
+		if (level)
+			max_level = min(max_level, level);
+	}
 
 	return max_level;
 }
@@ -4491,10 +4495,10 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 			      r == RET_PF_RETRY, fault->map_writable);
 }
 
-static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
-				       struct kvm_page_fault *fault)
+static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
+				    struct kvm_page_fault *fault)
 {
-	int max_order, r;
+	int gmem_order, r;
 
 	if (!kvm_slot_has_gmem(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
@@ -4502,15 +4506,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
-			     &fault->refcounted_page, &max_order);
+			     &fault->refcounted_page, &gmem_order);
 	if (r) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return r;
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level = kvm_max_level_for_fault_and_order(vcpu->kvm, fault, gmem_order);
 
 	return RET_PF_CONTINUE;
 }
@@ -4520,8 +4523,8 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 {
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
-	if (fault->is_private)
-		return kvm_mmu_faultin_pfn_private(vcpu, fault);
+	if (fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot))
+		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
 	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 2ec89c214978..de7b46ee1762 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2502,6 +2502,15 @@ static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
 		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
 }
 
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+bool kvm_gmem_memslot_supports_shared(const struct kvm_memory_slot *slot);
+#else
+static inline bool kvm_gmem_memslot_supports_shared(const struct kvm_memory_slot *slot)
+{
+	return false;
+}
+#endif /* CONFIG_KVM_GMEM_SHARED_MEM */
+
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
 static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
 {
@@ -2515,10 +2524,30 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 					 struct kvm_gfn_range *range);
 
+/*
+ * Returns true if the given gfn's private/shared status (in the CoCo sense) is
+ * private.
+ *
+ * A return value of false indicates that the gfn is explicitly or implicitly
+ * shared (i.e., non-CoCo VMs).
+ */
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
-	return IS_ENABLED(CONFIG_KVM_GMEM) &&
-	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
+	struct kvm_memory_slot *slot;
+
+	if (!IS_ENABLED(CONFIG_KVM_GMEM))
+		return false;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
+		/*
+		 * For now, memslots only support in-place shared memory if the
+		 * host is allowed to mmap memory (i.e., non-CoCo VMs).
+		 */
+		return false;
+	}
+
+	return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 2f499021df66..fe0245335c96 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -388,6 +388,23 @@ static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
 
 	return 0;
 }
+
+bool kvm_gmem_memslot_supports_shared(const struct kvm_memory_slot *slot)
+{
+	struct file *file;
+	bool ret;
+
+	file = kvm_gmem_get_file((struct kvm_memory_slot *)slot);
+	if (!file)
+		return false;
+
+	ret = kvm_gmem_supports_shared(file_inode(file));
+
+	fput(file);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_memslot_supports_shared);
+
 #else
 #define kvm_gmem_mmap NULL
 #endif /* CONFIG_KVM_GMEM_SHARED_MEM */

From patchwork Tue May 13 16:34:31 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889648
Date: Tue, 13 May 2025 17:34:31 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-11-tabba@google.com>
Subject: [PATCH v9 10/17] KVM: x86: Compute max_mapping_level with input from guest_memfd
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

From: Ackerley Tng

This patch adds kvm_gmem_max_mapping_level(), which always returns
PG_LEVEL_4K since guest_memfd only supports 4K pages for now.
When guest_memfd supports shared memory, max_mapping_level (especially
when recovering huge pages; see the call to __kvm_mmu_max_mapping_level()
from recover_huge_pages_range()) should take input from guest_memfd.

Input from guest_memfd should be taken in these cases:

+ if the memslot supports shared memory (guest_memfd is used for shared
  memory, or in future both shared and private memory), or
+ if the memslot is only used for private memory and that gfn is private.

If the memslot doesn't use guest_memfd, figure out the max_mapping_level
using the host page tables like before.

This patch also refactors and inlines the other call to
__kvm_mmu_max_mapping_level(). In kvm_mmu_hugepage_adjust(),
guest_memfd's input is already provided (if applicable) in
fault->max_level. Hence, there is no need to query guest_memfd.
lpage_info is queried like before, and then if the fault is not from
guest_memfd, adjust fault->req_level based on input from the host page
tables.

Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
Signed-off-by: Shivank Garg
---
 arch/x86/kvm/mmu/mmu.c   | 92 ++++++++++++++++++++++++++--------------
 include/linux/kvm_host.h |  7 +++
 virt/kvm/guest_memfd.c   | 12 ++++++
 3 files changed, 79 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index cfbb471f7c70..9e0bc8114859 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3256,12 +3256,11 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	return level;
 }
 
-static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+static int kvm_lpage_info_max_mapping_level(struct kvm *kvm,
+					    const struct kvm_memory_slot *slot,
+					    gfn_t gfn, int max_level)
 {
 	struct kvm_lpage_info *linfo;
-	int host_level;
 
 	max_level = min(max_level, max_huge_page_level);
 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3270,23 +3269,61 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
-		return max_level;
+	return max_level;
+}
+
+static inline u8 kvm_max_level_for_order(int order)
+{
+	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+		return PG_LEVEL_1G;
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+		return PG_LEVEL_2M;
+
+	return PG_LEVEL_4K;
+}
+
+static inline int kvm_gmem_max_mapping_level(const struct kvm_memory_slot *slot,
+					     gfn_t gfn, int max_level)
+{
+	int max_order;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, slot);
-	return min(host_level, max_level);
+	max_order = kvm_gmem_mapping_order(slot, gfn);
+	return min(max_level, kvm_max_level_for_order(max_order));
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_has_gmem(slot) &&
-			  kvm_mem_is_private(kvm, gfn);
+	int max_level;
+
+	max_level = kvm_lpage_info_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM);
+	if (max_level == PG_LEVEL_4K)
+		return PG_LEVEL_4K;
 
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
+	if (kvm_slot_has_gmem(slot) &&
+	    (kvm_gmem_memslot_supports_shared(slot) ||
+	     kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
+		return kvm_gmem_max_mapping_level(slot, gfn, max_level);
+	}
+
+	return min(max_level, host_pfn_mapping_level(kvm, gfn, slot));
+}
+
+static inline bool fault_from_gmem(struct kvm_page_fault *fault)
+{
+	return fault->is_private ||
+	       (kvm_slot_has_gmem(fault->slot) &&
+		kvm_gmem_memslot_supports_shared(fault->slot));
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -3309,12 +3346,20 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						       fault->gfn, fault->max_level,
-						       fault->is_private);
+	fault->req_level = kvm_lpage_info_max_mapping_level(vcpu->kvm, slot,
+							    fault->gfn, fault->max_level);
 	if (fault->req_level == PG_LEVEL_4K || fault->huge_page_disallowed)
 		return;
 
+	if (!fault_from_gmem(fault)) {
+		int host_level;
+
+		host_level = host_pfn_mapping_level(vcpu->kvm, fault->gfn, slot);
+		fault->req_level = min(fault->req_level, host_level);
+		if (fault->req_level == PG_LEVEL_4K)
+			return;
+	}
+
 	/*
 	 * mmu_invalidate_retry() was successful and mmu_lock is held, so
 	 * the pmd can't be split from under us.
@@ -4448,23 +4493,6 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	vcpu->stat.pf_fixed++;
 }
 
-static inline u8 kvm_max_level_for_order(int order)
-{
-	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
-
-	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
-		return PG_LEVEL_1G;
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
-		return PG_LEVEL_2M;
-
-	return PG_LEVEL_4K;
-}
-
 static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
 					    struct kvm_page_fault *fault,
 					    int order)
@@ -4523,7 +4551,7 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 {
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
-	if (fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot))
+	if (fault_from_gmem(fault))
 		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index de7b46ee1762..f9bb025327c3 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2560,6 +2560,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
 		     int *max_order);
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2569,6 +2570,12 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
+static inline int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot,
+					 gfn_t gfn)
+{
+	BUG();
+	return 0;
+}
 #endif /* CONFIG_KVM_GMEM */
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index fe0245335c96..b8e247063b20 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -774,6 +774,18 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
+/**
+ * Returns the mapping order for this @gfn in @slot.
+ *
+ * This is equal to max_order that would be returned if kvm_gmem_get_pfn() were
+ * called now.
+ */
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_mapping_order);
+
 #ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
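To make the order-to-level arithmetic concrete, here is an editor's worked
example (assuming x86-64 with 4KiB base pages; the numbers follow directly
from the 2MiB and 1GiB page sizes):

	/*
	 * KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) == 9  (2MiB = 2^9  4KiB pages)
	 * KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) == 18 (1GiB = 2^18 4KiB pages)
	 *
	 *   kvm_max_level_for_order(0)  -> PG_LEVEL_4K
	 *   kvm_max_level_for_order(9)  -> PG_LEVEL_2M
	 *   kvm_max_level_for_order(18) -> PG_LEVEL_1G
	 *
	 * With kvm_gmem_mapping_order() hardwired to 0 for now, the min() in
	 * kvm_gmem_max_mapping_level() always collapses to PG_LEVEL_4K.
	 */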
From patchwork Tue May 13 16:34:32 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 890332
Date: Tue, 13 May 2025 17:34:32 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-12-tabba@google.com>
Subject: [PATCH v9 11/17] KVM: arm64: Refactor user_mem_abort() calculation of force_pte
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

To simplify the code and to make the assumptions clearer, refactor
user_mem_abort() by immediately setting force_pte to true if the
conditions are met. Also, remove the comment about logging_active being
guaranteed to never be true for VM_PFNMAP memslots, since it's not
actually correct.

No functional change intended.
Reviewed-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index eeda92330ade..9865ada04a81 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1472,7 +1472,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  bool fault_is_perm)
 {
 	int ret = 0;
-	bool write_fault, writable, force_pte = false;
+	bool write_fault, writable;
 	bool exec_fault, mte_allowed;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
@@ -1484,6 +1484,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
+	bool force_pte = logging_active || is_protected_kvm_enabled();
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
@@ -1536,16 +1537,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}
 
-	/*
-	 * logging_active is guaranteed to never be true for VM_PFNMAP
-	 * memslots.
-	 */
-	if (logging_active || is_protected_kvm_enabled()) {
-		force_pte = true;
+	if (force_pte)
 		vma_shift = PAGE_SHIFT;
-	} else {
+	else
 		vma_shift = get_vma_page_shift(vma, hva);
-	}
 
 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
From patchwork Tue May 13 16:34:33 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 889647
Date: Tue, 13 May 2025 17:34:33 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-13-tabba@google.com>
Subject: [PATCH v9 12/17] KVM: arm64: Rename variables in user_mem_abort()
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Guest memory can be backed by guest_memfd or by anonymous memory. Rename
vma_shift to page_shift and vma_pagesize to page_size to ease
readability in subsequent patches.

Suggested-by: James Houghton
Signed-off-by: Fuad Tabba
---
 arch/arm64/kvm/mmu.c | 54 ++++++++++++++++++++++----------------------
 1 file changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 9865ada04a81..d756c2b5913f 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1479,13 +1479,13 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
 	struct vm_area_struct *vma;
-	short vma_shift;
+	short page_shift;
 	void *memcache;
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
 	bool force_pte = logging_active || is_protected_kvm_enabled();
-	long vma_pagesize, fault_granule;
+	long page_size, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
@@ -1538,11 +1538,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	}
 
 	if (force_pte)
-		vma_shift = PAGE_SHIFT;
+		page_shift = PAGE_SHIFT;
 	else
-		vma_shift = get_vma_page_shift(vma, hva);
+		page_shift = get_vma_page_shift(vma, hva);
 
-	switch (vma_shift) {
+	switch (page_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SHIFT:
 		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
@@ -1550,23 +1550,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		fallthrough;
 #endif
 	case CONT_PMD_SHIFT:
-		vma_shift = PMD_SHIFT;
+		page_shift = PMD_SHIFT;
 		fallthrough;
 	case PMD_SHIFT:
 		if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
 			break;
 		fallthrough;
 	case CONT_PTE_SHIFT:
-		vma_shift = PAGE_SHIFT;
+		page_shift = PAGE_SHIFT;
 		force_pte = true;
 		fallthrough;
 	case PAGE_SHIFT:
 		break;
 	default:
-		WARN_ONCE(1, "Unknown vma_shift %d", vma_shift);
+		WARN_ONCE(1, "Unknown page_shift %d", page_shift);
 	}
 
-	vma_pagesize = 1UL << vma_shift;
+	page_size = 1UL << page_shift;
 
 	if (nested) {
 		unsigned long max_map_size;
@@ -1592,7 +1592,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			max_map_size = PAGE_SIZE;
 
 		force_pte = (max_map_size == PAGE_SIZE);
-		vma_pagesize = min(vma_pagesize, (long)max_map_size);
+		page_size = min_t(long, page_size, max_map_size);
 	}
 
 	/*
@@ -1600,9 +1600,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * ensure we find the right PFN and lay down the mapping in the right
 	 * place.
 	 */
-	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE) {
-		fault_ipa &= ~(vma_pagesize - 1);
-		ipa &= ~(vma_pagesize - 1);
+	if (page_size == PMD_SIZE || page_size == PUD_SIZE) {
+		fault_ipa &= ~(page_size - 1);
+		ipa &= ~(page_size - 1);
 	}
 
 	gfn = ipa >> PAGE_SHIFT;
@@ -1627,7 +1627,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
@@ -1627,7 +1627,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
 				&writable, &page);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
-		kvm_send_hwpoison_signal(hva, vma_shift);
+		kvm_send_hwpoison_signal(hva, page_shift);
 		return 0;
 	}
 	if (is_error_noslot_pfn(pfn))
@@ -1636,9 +1636,9 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (kvm_is_device_pfn(pfn)) {
 		/*
 		 * If the page was identified as device early by looking at
-		 * the VMA flags, vma_pagesize is already representing the
+		 * the VMA flags, page_size is already representing the
 		 * largest quantity we can map.  If instead it was mapped
-		 * via __kvm_faultin_pfn(), vma_pagesize is set to PAGE_SIZE
+		 * via __kvm_faultin_pfn(), page_size is set to PAGE_SIZE
 		 * and must not be upgraded.
 		 *
 		 * In both cases, we don't let transparent_hugepage_adjust()
@@ -1686,16 +1686,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * If we are not forced to use page mapping, check if we are
 	 * backed by a THP and thus use block mapping if possible.
 	 */
-	if (vma_pagesize == PAGE_SIZE && !(force_pte || device)) {
+	if (page_size == PAGE_SIZE && !(force_pte || device)) {
 		if (fault_is_perm && fault_granule > PAGE_SIZE)
-			vma_pagesize = fault_granule;
+			page_size = fault_granule;
 		else
-			vma_pagesize = transparent_hugepage_adjust(kvm, memslot,
-								   hva, &pfn,
-								   &fault_ipa);
+			page_size = transparent_hugepage_adjust(kvm, memslot,
+								hva, &pfn,
+								&fault_ipa);
 
-		if (vma_pagesize < 0) {
-			ret = vma_pagesize;
+		if (page_size < 0) {
+			ret = page_size;
 			goto out_unlock;
 		}
 	}
@@ -1703,7 +1703,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (!fault_is_perm && !device && kvm_has_mte(kvm)) {
 		/* Check the VMM hasn't introduced a new disallowed VMA */
 		if (mte_allowed) {
-			sanitise_mte_tags(kvm, pfn, vma_pagesize);
+			sanitise_mte_tags(kvm, pfn, page_size);
 		} else {
 			ret = -EFAULT;
 			goto out_unlock;
@@ -1728,10 +1728,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 
 	/*
 	 * Under the premise of getting a FSC_PERM fault, we just need to relax
-	 * permissions only if vma_pagesize equals fault_granule. Otherwise,
+	 * permissions only if page_size equals fault_granule. Otherwise,
 	 * kvm_pgtable_stage2_map() should be called to change block size.
 	 */
-	if (fault_is_perm && vma_pagesize == fault_granule) {
+	if (fault_is_perm && page_size == fault_granule) {
 		/*
 		 * Drop the SW bits in favour of those stored in the
 		 * PTE, which will be preserved.
@@ -1739,7 +1739,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		 */
 		prot &= ~KVM_NV_GUEST_MAP_SZ;
 		ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
 	} else {
-		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, vma_pagesize,
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, page_size,
 							 __pfn_to_phys(pfn), prot,
 							 memcache, flags);
 	}
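One change in this patch is easy to miss amid the renames: the nested-virt
hunk replaces min() plus an explicit (long) cast with min_t(long, ...), which
casts both operands to the named type before comparing. A standalone
userspace sketch of the equivalent semantics (the min_t macro below is a
simplified stand-in for the kernel's, and the sizes are illustrative):

#include <stdio.h>

#define min_t(type, a, b) \
	({ type __a = (a); type __b = (b); __a < __b ? __a : __b; })

int main(void)
{
	long page_size = 1UL << 21;             /* PMD-sized mapping, 2 MiB */
	unsigned long max_map_size = 1UL << 12; /* nested limit: one page   */

	/* Both operands are compared as 'long', as min_t(long, ...) does. */
	long capped = min_t(long, page_size, max_map_size);

	printf("capped mapping size: %ld\n", capped); /* prints 4096 */
	return 0;
}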
From patchwork Tue May 13 16:34:34 2025

Date: Tue, 13 May 2025 17:34:34 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-14-tabba@google.com>
Subject: [PATCH v9 13/17] KVM: arm64: Handle guest_memfd()-backed guest page faults
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Add arm64 support for handling guest page faults on guest_memfd-backed
memslots. For now, the fault granule is restricted to PAGE_SIZE.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c     | 94 +++++++++++++++++++++++++---------------
 include/linux/kvm_host.h |  5 +++
 virt/kvm/kvm_main.c      |  5 ---
 3 files changed, 64 insertions(+), 40 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index d756c2b5913f..9a48ef08491d 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1466,6 +1466,30 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }
 
+static kvm_pfn_t faultin_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
+			     gfn_t gfn, bool write_fault, bool *writable,
+			     struct page **page, bool is_gmem)
+{
+	kvm_pfn_t pfn;
+	int ret;
+
+	if (!is_gmem)
+		return __kvm_faultin_pfn(slot, gfn, write_fault ? FOLL_WRITE : 0, writable, page);
+
+	*writable = false;
+
+	ret = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, page, NULL);
+	if (!ret) {
+		*writable = !memslot_is_readonly(slot);
+		return pfn;
+	}
+
+	if (ret == -EHWPOISON)
+		return KVM_PFN_ERR_HWPOISON;
+
+	return KVM_PFN_ERR_NOSLOT_MASK;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1473,19 +1497,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 {
 	int ret = 0;
 	bool write_fault, writable;
-	bool exec_fault, mte_allowed;
+	bool exec_fault, mte_allowed = false;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
 	phys_addr_t ipa = fault_ipa;
 	struct kvm *kvm = vcpu->kvm;
-	struct vm_area_struct *vma;
-	short page_shift;
+	struct vm_area_struct *vma = NULL;
+	short page_shift = PAGE_SHIFT;
 	void *memcache;
-	gfn_t gfn;
+	gfn_t gfn = ipa >> PAGE_SHIFT;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
-	bool force_pte = logging_active || is_protected_kvm_enabled();
-	long page_size, fault_granule;
+	bool is_gmem = kvm_slot_has_gmem(memslot);
+	bool force_pte = logging_active || is_gmem || is_protected_kvm_enabled();
+	long page_size, fault_granule = PAGE_SIZE;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
@@ -1529,17 +1554,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * Let's check if we will get back a huge page backed by hugetlbfs, or
 	 * get block mapping for device MMIO region.
 	 */
-	mmap_read_lock(current->mm);
-	vma = vma_lookup(current->mm, hva);
-	if (unlikely(!vma)) {
-		kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
-		mmap_read_unlock(current->mm);
-		return -EFAULT;
+	if (!is_gmem) {
+		mmap_read_lock(current->mm);
+		vma = vma_lookup(current->mm, hva);
+		if (unlikely(!vma)) {
+			kvm_err("Failed to find VMA for hva 0x%lx\n", hva);
+			mmap_read_unlock(current->mm);
+			return -EFAULT;
+		}
+
+		vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
+		mte_allowed = kvm_vma_mte_allowed(vma);
 	}
 
-	if (force_pte)
-		page_shift = PAGE_SHIFT;
-	else
+	if (!force_pte)
 		page_shift = get_vma_page_shift(vma, hva);
 
 	switch (page_shift) {
@@ -1605,27 +1633,23 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		ipa &= ~(page_size - 1);
 	}
 
-	gfn = ipa >> PAGE_SHIFT;
-	mte_allowed = kvm_vma_mte_allowed(vma);
-
-	vfio_allow_any_uc = vma->vm_flags & VM_ALLOW_ANY_UNCACHED;
-
-	/* Don't use the VMA after the unlock -- it may have vanished */
-	vma = NULL;
+	if (!is_gmem) {
+		/* Don't use the VMA after the unlock -- it may have vanished */
+		vma = NULL;
 
-	/*
-	 * Read mmu_invalidate_seq so that KVM can detect if the results of
-	 * vma_lookup() or __kvm_faultin_pfn() become stale prior to
-	 * acquiring kvm->mmu_lock.
-	 *
-	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
-	 * with the smp_wmb() in kvm_mmu_invalidate_end().
-	 */
-	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
-	mmap_read_unlock(current->mm);
+		/*
+		 * Read mmu_invalidate_seq so that KVM can detect if the results
+		 * of vma_lookup() or faultin_pfn() become stale prior to
+		 * acquiring kvm->mmu_lock.
+		 *
+		 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which
+		 * pairs with the smp_wmb() in kvm_mmu_invalidate_end().
+		 */
+		mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+		mmap_read_unlock(current->mm);
+	}
-	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
-				&writable, &page);
+	pfn = faultin_pfn(kvm, memslot, gfn, write_fault, &writable, &page, is_gmem);
 	if (pfn == KVM_PFN_ERR_HWPOISON) {
 		kvm_send_hwpoison_signal(hva, page_shift);
 		return 0;
@@ -1677,7 +1701,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	kvm_fault_lock(kvm);
 	pgt = vcpu->arch.hw_mmu->pgt;
-	if (mmu_invalidate_retry(kvm, mmu_seq)) {
+	if (!is_gmem && mmu_invalidate_retry(kvm, mmu_seq)) {
 		ret = -EAGAIN;
 		goto out_unlock;
 	}
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index f9bb025327c3..b317392453a5 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -1884,6 +1884,11 @@ static inline int memslot_id(struct kvm *kvm, gfn_t gfn)
 	return gfn_to_memslot(kvm, gfn)->id;
 }
 
+static inline bool memslot_is_readonly(const struct kvm_memory_slot *slot)
+{
+	return slot->flags & KVM_MEM_READONLY;
+}
+
 static inline gfn_t hva_to_gfn_memslot(unsigned long hva,
 				       struct kvm_memory_slot *slot)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6289ea1685dd..6261d8638cd2 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -2640,11 +2640,6 @@ unsigned long kvm_host_page_size(struct kvm_vcpu *vcpu, gfn_t gfn)
 	return size;
 }
 
-static bool memslot_is_readonly(const struct kvm_memory_slot *slot)
-{
-	return slot->flags & KVM_MEM_READONLY;
-}
-
 static unsigned long __gfn_to_hva_many(const struct kvm_memory_slot *slot, gfn_t gfn,
 				       gfn_t *nr_pages, bool write)
 {
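Taken together, the new helper gives user_mem_abort() a single faultin call
with two backing sources. A standalone restatement of its contract (stand-in
types, sentinels, and errno value; not kernel code) makes the three
guest_memfd outcomes explicit:

#include <stdbool.h>
#include <stdint.h>

typedef uint64_t kvm_pfn_t;

/* Stand-ins for the kernel's error sentinels and errno value. */
#define PFN_ERR_HWPOISON ((kvm_pfn_t)-1)
#define PFN_ERR_NOSLOT   ((kvm_pfn_t)-2)
#define EHWPOISON        133

/*
 * Mirrors faultin_pfn() above: the gmem path never walks the userspace page
 * tables; writability is derived purely from the memslot's read-only flag.
 */
static kvm_pfn_t faultin_contract(bool is_gmem, bool slot_readonly,
				  int gmem_ret, kvm_pfn_t gmem_pfn,
				  kvm_pfn_t gup_pfn, bool *writable)
{
	if (!is_gmem)
		return gup_pfn;             /* legacy path: GUP via the VMA */

	if (!gmem_ret) {
		*writable = !slot_readonly; /* success: pfn comes from gmem */
		return gmem_pfn;
	}

	if (gmem_ret == -EHWPOISON)         /* poisoned page: signal caller */
		return PFN_ERR_HWPOISON;

	return PFN_ERR_NOSLOT;              /* any other error: no mapping  */
}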
From patchwork Tue May 13 16:34:35 2025

Date: Tue, 13 May 2025 17:34:35 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-15-tabba@google.com>
Subject: [PATCH v9 14/17] KVM: arm64: Enable mapping guest_memfd in arm64
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Enable mapping guest_memfd in arm64. For now, this applies to all arm64 VMs
that use guest_memfd. In the future, new VM types can restrict this via
kvm_arch_vm_supports_gmem_shared_mem().

Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 10 ++++++++++
 arch/arm64/kvm/Kconfig            |  1 +
 2 files changed, 11 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 08ba91e6fb03..2514779f5131 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1593,4 +1593,14 @@ static inline bool kvm_arch_has_irq_bypass(void)
 	return true;
 }
 
+static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
+{
+	return IS_ENABLED(CONFIG_KVM_GMEM);
+}
+
+static inline bool kvm_arch_vm_supports_gmem_shared_mem(struct kvm *kvm)
+{
+	return IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM);
+}
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 096e45acadb2..8c1e1964b46a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_GMEM_SHARED_MEM
 	help
 	  Support hosting virtualized guest machines.
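The arm64 hooks above report support unconditionally, keyed only off the
Kconfig symbols. The commit message's point about future VM types can be made
concrete with a hypothetical sketch: an architecture with confidential VM
types would gate the shared-memory hook per VM. The hook name is real (from
the hunk above); kvm_is_coco_vm() is illustrative, not an existing helper:

static inline bool kvm_arch_vm_supports_gmem_shared_mem(struct kvm *kvm)
{
	/*
	 * Hypothetical: allow shared guest_memfd only for non-CoCo VM types;
	 * CoCo VM types would keep their guest_memfd private-only.
	 */
	return IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && !kvm_is_coco_vm(kvm);
}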
From patchwork Tue May 13 16:34:36 2025

Date: Tue, 13 May 2025 17:34:36 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-16-tabba@google.com>
Subject: [PATCH v9 15/17] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org
Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM, which indicates that
guest_memfd supports shared memory (when enabled by the
GUEST_MEMFD_FLAG_SUPPORT_SHARED flag). This support is limited to certain VM
types, determined per architecture.

Also update the KVM documentation with details on the new capability, the
flag, and other aspects of shared memory support in guest_memfd.

Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 Documentation/virt/kvm/api.rst | 18 ++++++++++++++++++
 include/uapi/linux/kvm.h       |  1 +
 virt/kvm/kvm_main.c            |  4 ++++
 3 files changed, 23 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 47c7c3f92314..86f74ce7f12a 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6390,6 +6390,24 @@ most one mapping per page, i.e. binding multiple memory regions to a single
 guest_memfd range is not allowed (any number of memory regions can be bound to
 a single guest_memfd file, but the bound ranges must not overlap).
 
+When the capability KVM_CAP_GMEM_SHARED_MEM is supported, the 'flags' field
+supports GUEST_MEMFD_FLAG_SUPPORT_SHARED. Setting this flag on guest_memfd
+creation enables mmap() and faulting of guest_memfd memory to host userspace.
+
+When the KVM MMU performs a PFN lookup to service a guest fault and the backing
+guest_memfd has the GUEST_MEMFD_FLAG_SUPPORT_SHARED set, then the fault will
+always be consumed from guest_memfd, regardless of whether it is a shared or a
+private fault.
+
+For these memslots, userspace_addr is checked to be the mmap()-ed view of the
+same range specified using gmem.pgoff. Other accesses by KVM, e.g., instruction
+emulation, go via slot->userspace_addr. The slot->userspace_addr field can be
+set to 0 to skip this check, which indicates that KVM would not access memory
+belonging to the slot via its userspace_addr.
+
+The use of GUEST_MEMFD_FLAG_SUPPORT_SHARED will not be allowed for CoCo VMs.
+This is validated when the guest_memfd instance is bound to the VM.
+
 See KVM_SET_USER_MEMORY_REGION2 for additional details.
 
 4.143 KVM_PRE_FAULT_MEMORY
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 9857022a0f0c..4cc824a3a7c9 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_GMEM_SHARED_MEM 240
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6261d8638cd2..6c75f933bfbe 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4840,6 +4840,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_supports_gmem(kvm);
+#endif
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+	case KVM_CAP_GMEM_SHARED_MEM:
+		return !kvm || kvm_arch_vm_supports_gmem_shared_mem(kvm);
 #endif
 	default:
 		break;
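From userspace, the capability and flag combine as follows. A sketch, assuming
the UAPI introduced in this series (KVM_CAP_GMEM_SHARED_MEM and
GUEST_MEMFD_FLAG_SUPPORT_SHARED are new here; the other ioctls and the
kvm_create_guest_memfd struct are existing KVM UAPI); error handling elided:

#include <linux/kvm.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

/* Probe the new capability, create a shared guest_memfd, and map it. */
int map_shared_gmem(int vm_fd, size_t size, void **mem_out)
{
	struct kvm_create_guest_memfd gmem = {
		.size = size,
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
	};
	int gmem_fd;

	/* New capability from this patch: shared guest_memfd support. */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GMEM_SHARED_MEM) <= 0)
		return -1;

	gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
	if (gmem_fd < 0)
		return -1;

	/* With the flag set, mmap() of guest_memfd succeeds for this VM. */
	*mem_out = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
			gmem_fd, 0);
	return *mem_out == MAP_FAILED ? -1 : gmem_fd;
}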
From patchwork Tue May 13 16:34:37 2025

Date: Tue, 13 May 2025 17:34:37 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-17-tabba@google.com>
Subject: [PATCH v9 16/17] KVM: selftests: guest_memfd mmap() test when mapping is allowed
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

Expand the guest_memfd selftests to cover mapping guest memory for VM types
that support it. Also, build the guest_memfd selftest for arm64.
Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 tools/testing/selftests/kvm/Makefile.kvm      |   1 +
 .../testing/selftests/kvm/guest_memfd_test.c  | 145 +++++++++++++++---
 2 files changed, 126 insertions(+), 20 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f62b0a5aba35..ccf95ed037c3 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -163,6 +163,7 @@ TEST_GEN_PROGS_arm64 += access_tracking_perf_test
 TEST_GEN_PROGS_arm64 += arch_timer
 TEST_GEN_PROGS_arm64 += coalesced_io_test
 TEST_GEN_PROGS_arm64 += dirty_log_perf_test
+TEST_GEN_PROGS_arm64 += guest_memfd_test
 TEST_GEN_PROGS_arm64 += get-reg-list
 TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
 TEST_GEN_PROGS_arm64 += memslot_perf_test
diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..443c49185543 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -34,12 +34,46 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_allowed(int fd, size_t page_size, size_t total_size)
+{
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmaping() guest memory should pass.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap should succeed");
+}
+
+static void test_mmap_denied(int fd, size_t page_size, size_t total_size)
 {
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -120,26 +154,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
 	}
 }
 
-static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
+static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
+						  uint64_t guest_memfd_flags,
+						  size_t page_size)
 {
-	size_t page_size = getpagesize();
-	uint64_t flag;
 	size_t size;
 	int fd;
 
 	for (size = 1; size < page_size; size++) {
-		fd = __vm_create_guest_memfd(vm, size, 0);
+		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
 		TEST_ASSERT(fd == -1 && errno == EINVAL,
 			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
 			    size);
 	}
-
-	for (flag = BIT(0); flag; flag <<= 1) {
-		fd = __vm_create_guest_memfd(vm, page_size, flag);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
-			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
-			    flag);
-	}
 }
 
 static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
@@ -170,30 +197,108 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
-int main(int argc, char *argv[])
+static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
+			   bool expect_mmap_allowed)
 {
-	size_t page_size;
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+	if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
+		return;
 
 	page_size = getpagesize();
 	total_size = page_size * 4;
 
-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(vm_type);
 
-	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
+	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
 
-	fd = vm_create_guest_memfd(vm, total_size, 0);
+	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (expect_mmap_allowed)
+		test_mmap_allowed(fd, page_size, total_size);
+	else
+		test_mmap_denied(fd, page_size, total_size);
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);
 
 	close(fd);
+	kvm_vm_release(vm);
+}
+
+static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
+					    uint64_t expected_valid_flags)
+{
+	size_t page_size = getpagesize();
+	struct kvm_vm *vm;
+	uint64_t flag = 0;
+	int fd;
+
+	if (!(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type)))
+		return;
+
+	vm = vm_create_barebones_type(vm_type);
+
+	for (flag = BIT(0); flag; flag <<= 1) {
+		fd = __vm_create_guest_memfd(vm, page_size, flag);
+
+		if (flag & expected_valid_flags) {
+			TEST_ASSERT(fd > 0,
+				    "guest_memfd() with flag '0x%lx' should be valid",
+				    flag);
+			close(fd);
+		} else {
+			TEST_ASSERT(fd == -1 && errno == EINVAL,
+				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
+				    flag);
+		}
+	}
+
+	kvm_vm_release(vm);
+}
+
+static void test_gmem_flag_validity(void)
+{
+	uint64_t non_coco_vm_valid_flags = 0;
+
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
+	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
+
+#ifdef __x86_64__
+	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+	test_gmem_flag_validity();
+
+	test_with_type(VM_TYPE_DEFAULT, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
			       true);
+	}
+
+#ifdef __x86_64__
+	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(KVM_X86_SW_PROTECTED_VM,
+			       GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
+	}
+#endif
+}
From patchwork Tue May 13 16:34:38 2025
Date: Tue, 13 May 2025 17:34:38 +0100
In-Reply-To: <20250513163438.3942405-1-tabba@google.com>
Message-ID: <20250513163438.3942405-18-tabba@google.com>
Subject: [PATCH v9 17/17] KVM: selftests: Test guest_memfd same-range validation
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org

From: Ackerley Tng <ackerleytng@google.com>

Add selftests for guest_memfd same-range validation, which check that the
slot's userspace_addr covers the same range as the memory in guest_memfd:

+ When slot->userspace_addr is set to 0, there should be no range match
  validation on guest_memfd binding.
+ guest_memfd binding should fail if
  + slot->userspace_addr is not from guest_memfd
  + slot->userspace_addr is mmap()ed from some other file
  + slot->userspace_addr is mmap()ed from some other guest_memfd
  + slot->userspace_addr is mmap()ed from a different range in the same
    guest_memfd
+ guest_memfd binding should succeed if slot->userspace_addr is mmap()ed
  from the same range in the same guest_memfd provided in
  slot->guest_memfd

Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 .../testing/selftests/kvm/guest_memfd_test.c | 168 ++++++++++++++++++
 1 file changed, 168 insertions(+)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 443c49185543..60aaba5808a5 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -197,6 +197,173 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
+#define GUEST_MEMFD_TEST_SLOT 10
+#define GUEST_MEMFD_TEST_GPA 0x100000000
+
+static void
+test_bind_guest_memfd_disabling_range_match_validation(struct kvm_vm *vm,
+						       int fd)
+{
+	size_t page_size = getpagesize();
+	int ret;
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, page_size, 0,
+					   fd, 0);
+	TEST_ASSERT(!ret,
+		    "setting slot->userspace_addr to 0 should disable validation");
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, 0, 0,
+					   fd, 0);
+	TEST_ASSERT(!ret, "Deleting memslot should work");
+}
+
+static void
+test_bind_guest_memfd_anon_memory_in_userspace_addr(struct kvm_vm *vm, int fd)
+{
+	size_t page_size = getpagesize();
+	void *userspace_addr;
+	int ret;
+
+	userspace_addr = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+			      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, page_size,
+					   userspace_addr, fd, 0);
+	TEST_ASSERT(ret == -1,
+		    "slot->userspace_addr is not from the guest_memfd and should fail");
+}
+
+static void test_bind_guest_memfd_shared_memory_other_file_in_userspace_addr(
+	struct kvm_vm *vm, int fd)
+{
+	size_t page_size = getpagesize();
+	void *userspace_addr;
+	int other_fd;
+	int ret;
+
+	other_fd = memfd_create("shared_memory_other_file", 0);
+	TEST_ASSERT(other_fd > 0, "Creating other file should succeed");
+
+	userspace_addr = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+			      MAP_SHARED, other_fd, 0);
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, page_size,
+					   userspace_addr, fd, 0);
+	TEST_ASSERT(ret == -1,
+		    "slot->userspace_addr is not from the guest_memfd and should fail");
+
+	TEST_ASSERT(!munmap(userspace_addr, page_size),
+		    "munmap() to cleanup should succeed");
+
+	close(other_fd);
+}
+
+static void
+test_bind_guest_memfd_other_guest_memfd_in_userspace_addr(struct kvm_vm *vm,
+							  int fd)
+{
+	size_t page_size = getpagesize();
+	void *userspace_addr;
+	int other_fd;
+	int ret;
+
+	other_fd = vm_create_guest_memfd(vm, page_size * 2,
+					 GUEST_MEMFD_FLAG_SUPPORT_SHARED);
+	TEST_ASSERT(other_fd > 0, "Creating other file should succeed");
+
+	userspace_addr = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+			      MAP_SHARED, other_fd, 0);
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, page_size,
+					   userspace_addr, fd, 0);
+	TEST_ASSERT(ret == -1,
+		    "slot->userspace_addr is not from the guest_memfd and should fail");
+
+	TEST_ASSERT(!munmap(userspace_addr, page_size),
+		    "munmap() to cleanup should succeed");
+
+	close(other_fd);
+}
+
+static void
+test_bind_guest_memfd_other_range_in_userspace_addr(struct kvm_vm *vm, int fd)
+{
+	size_t page_size = getpagesize();
+	void *userspace_addr;
+	int ret;
+
+	userspace_addr = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+			      MAP_SHARED, fd, page_size);
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, page_size,
+					   userspace_addr, fd, 0);
+	TEST_ASSERT(ret == -1,
+		    "slot->userspace_addr is not from the same range and should fail");
+
+	TEST_ASSERT(!munmap(userspace_addr, page_size),
+		    "munmap() to cleanup should succeed");
+}
+
+static void
+test_bind_guest_memfd_same_range_in_userspace_addr(struct kvm_vm *vm, int fd)
+{
+	size_t page_size = getpagesize();
+	void *userspace_addr;
+	int ret;
+
+	userspace_addr = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
+			      MAP_SHARED, fd, page_size);
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, page_size,
+					   userspace_addr, fd, page_size);
+	TEST_ASSERT(!ret,
+		    "slot->userspace_addr is the same range and should succeed");
+
+	TEST_ASSERT(!munmap(userspace_addr, page_size),
+		    "munmap() to cleanup should succeed");
+
+	ret = __vm_set_user_memory_region2(vm, GUEST_MEMFD_TEST_SLOT,
+					   KVM_MEM_GUEST_MEMFD,
+					   GUEST_MEMFD_TEST_GPA, 0, 0,
+					   fd, 0);
+	TEST_ASSERT(!ret, "Deleting memslot should work");
+}
+
+static void test_bind_guest_memfd_wrt_userspace_addr(struct kvm_vm *vm)
+{
+	size_t page_size = getpagesize();
+	int fd;
+
+	if (!vm_check_cap(vm, KVM_CAP_GUEST_MEMFD) ||
+	    !vm_check_cap(vm, KVM_CAP_GMEM_SHARED_MEM))
+		return;
+
+	fd = vm_create_guest_memfd(vm, page_size * 2,
+				   GUEST_MEMFD_FLAG_SUPPORT_SHARED);
+
+	test_bind_guest_memfd_disabling_range_match_validation(vm, fd);
+	test_bind_guest_memfd_anon_memory_in_userspace_addr(vm, fd);
+	test_bind_guest_memfd_shared_memory_other_file_in_userspace_addr(vm, fd);
+	test_bind_guest_memfd_other_guest_memfd_in_userspace_addr(vm, fd);
+	test_bind_guest_memfd_other_range_in_userspace_addr(vm, fd);
+	test_bind_guest_memfd_same_range_in_userspace_addr(vm, fd);
+
+	close(fd);
+}
+
 static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
 			   bool expect_mmap_allowed)
 {
@@ -214,6 +381,7 @@ static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
 	vm = vm_create_barebones_type(vm_type);
 
 	test_create_guest_memfd_multiple(vm);
+	test_bind_guest_memfd_wrt_userspace_addr(vm);
 	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
 
 	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
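The binding rule these tests exercise can also be seen from a plain VMM's
perspective. A sketch of a binding that passes the same-range validation,
using existing KVM UAPI (KVM_SET_USER_MEMORY_REGION2 and struct
kvm_userspace_memory_region2); the slot number and GPA mirror the test
constants above, and error handling is elided:

#include <linux/kvm.h>
#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int bind_same_range(int vm_fd, int gmem_fd, size_t page_size)
{
	/* Map the second page of the guest_memfd; the offset must match
	 * guest_memfd_offset below for the validation to pass. */
	void *uaddr = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
			   MAP_SHARED, gmem_fd, page_size);
	struct kvm_userspace_memory_region2 region = {
		.slot = 10,
		.flags = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = 0x100000000ULL,
		.memory_size = page_size,
		.userspace_addr = (unsigned long)uaddr,
		.guest_memfd_offset = page_size, /* same as the mmap offset */
		.guest_memfd = gmem_fd,
	};

	/* A different file or a mismatched offset would be rejected. */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}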