From patchwork Thu Jun 5 15:37:43 2025
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:43 +0100
Message-ID: <20250605153800.557144-2-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Subject: [PATCH v11 01/18] KVM: Rename CONFIG_KVM_PRIVATE_MEM to CONFIG_KVM_GMEM

The option KVM_PRIVATE_MEM enables guest_memfd in general. Subsequent
patches add shared memory support to guest_memfd. Therefore, rename it
to KVM_GMEM to make its purpose clearer.
Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h |  2 +-
 include/linux/kvm_host.h        | 10 +++++-----
 virt/kvm/Kconfig                |  8 ++++----
 virt/kvm/Makefile.kvm           |  2 +-
 virt/kvm/kvm_main.c             |  4 ++--
 virt/kvm/kvm_mm.h               |  4 ++--
 6 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 7bc174a1f1cb..52f6f6d08558 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2253,7 +2253,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
                        int tdp_max_root_level, int tdp_huge_page_level);
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 #define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
 #else
 #define kvm_arch_has_private_mem(kvm) false
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 291d49b9bf05..d6900995725d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -601,7 +601,7 @@ struct kvm_memory_slot {
 	short id;
 	u16 as_id;
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	struct {
 		/*
 		 * Writes protected by kvm->slots_lock. Acquiring a
@@ -722,7 +722,7 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
  * Arch code must define kvm_arch_has_private_mem if support for private memory
  * is enabled.
  */
-#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_PRIVATE_MEM)
+#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
 static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
 {
 	return false;
@@ -2504,7 +2504,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
-	return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&
+	return IS_ENABLED(CONFIG_KVM_GMEM) &&
 	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
@@ -2514,7 +2514,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 }
 #endif /* CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES */
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
 		     int *max_order);
@@ -2527,7 +2527,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 727b542074e7..49df4e32bff7 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -112,19 +112,19 @@ config KVM_GENERIC_MEMORY_ATTRIBUTES
        depends on KVM_GENERIC_MMU_NOTIFIER
        bool
 
-config KVM_PRIVATE_MEM
+config KVM_GMEM
        select XARRAY_MULTI
        bool
 
 config KVM_GENERIC_PRIVATE_MEM
        select KVM_GENERIC_MEMORY_ATTRIBUTES
-       select KVM_PRIVATE_MEM
+       select KVM_GMEM
        bool
 
 config HAVE_KVM_ARCH_GMEM_PREPARE
        bool
-       depends on KVM_PRIVATE_MEM
+       depends on KVM_GMEM
 
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
        bool
-       depends on KVM_PRIVATE_MEM
+       depends on KVM_GMEM
diff --git a/virt/kvm/Makefile.kvm b/virt/kvm/Makefile.kvm
index 724c89af78af..8d00918d4c8b 100644
--- a/virt/kvm/Makefile.kvm
+++ b/virt/kvm/Makefile.kvm
@@ -12,4 +12,4 @@ kvm-$(CONFIG_KVM_ASYNC_PF) += $(KVM)/async_pf.o
 kvm-$(CONFIG_HAVE_KVM_IRQ_ROUTING) += $(KVM)/irqchip.o
 kvm-$(CONFIG_HAVE_KVM_DIRTY_RING) += $(KVM)/dirty_ring.o
 kvm-$(CONFIG_HAVE_KVM_PFNCACHE) += $(KVM)/pfncache.o
-kvm-$(CONFIG_KVM_PRIVATE_MEM) += $(KVM)/guest_memfd.o
+kvm-$(CONFIG_KVM_GMEM) += $(KVM)/guest_memfd.o
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index e85b33a92624..4996cac41a8f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4842,7 +4842,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 	case KVM_CAP_MEMORY_ATTRIBUTES:
 		return kvm_supported_mem_attributes(kvm);
 #endif
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_has_private_mem(kvm);
 #endif
@@ -5276,7 +5276,7 @@ static long kvm_vm_ioctl(struct file *filp,
 	case KVM_GET_STATS_FD:
 		r = kvm_vm_ioctl_get_stats_fd(kvm);
 		break;
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 	case KVM_CREATE_GUEST_MEMFD: {
 		struct kvm_create_guest_memfd guest_memfd;
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index acef3f5c582a..ec311c0d6718 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -67,7 +67,7 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 }
 #endif /* HAVE_KVM_PFNCACHE */
 
-#ifdef CONFIG_KVM_PRIVATE_MEM
+#ifdef CONFIG_KVM_GMEM
 void kvm_gmem_init(struct module *module);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
@@ -91,6 +91,6 @@ static inline void kvm_gmem_unbind(struct kvm_memory_slot *slot)
 {
 	WARN_ON_ONCE(1);
 }
-#endif /* CONFIG_KVM_PRIVATE_MEM */
+#endif /* CONFIG_KVM_GMEM */
 
 #endif /* __KVM_MM_H__ */
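
As an aside on what the renamed option gates: with CONFIG_KVM_GMEM enabled,
callers can resolve a gfn to a guest_memfd-backed page via the
kvm_gmem_get_pfn() interface declared in the kvm_host.h hunk above. The
sketch below is illustrative only and is not part of the series;
demo_map_from_gmem() is hypothetical, and only the kvm_gmem_get_pfn()
signature comes from the patch.

        /* Hypothetical caller sketch (not from this series). */
        static int demo_map_from_gmem(struct kvm *kvm,
                                      struct kvm_memory_slot *slot, gfn_t gfn)
        {
                struct page *page;
                kvm_pfn_t pfn;
                int max_order, r;

                /*
                 * Resolve gfn to a guest_memfd-backed pfn; note that the
                 * !CONFIG_KVM_GMEM stub in the hunk above returns -EIO.
                 */
                r = kvm_gmem_get_pfn(kvm, slot, gfn, &pfn, &page, &max_order);
                if (r)
                        return r;

                /* ... install the stage-2/NPT mapping for pfn here ... */
                return 0;
        }
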
From patchwork Thu Jun 5 15:37:44 2025
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:44 +0100
Message-ID: <20250605153800.557144-3-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Subject: [PATCH v11 02/18] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to CONFIG_KVM_GENERIC_GMEM_POPULATE

The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
clearer.

Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/Kconfig     | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/Kconfig         | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index fe8ea8c097de..b37258253543 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -46,7 +46,7 @@ config KVM_X86
 	select HAVE_KVM_PM_NOTIFIER if PM
 	select KVM_GENERIC_HARDWARE_ENABLING
 	select KVM_GENERIC_PRE_FAULT_MEMORY
-	select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
+	select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR
 
 config KVM
@@ -145,7 +145,7 @@ config KVM_AMD_SEV
 	depends on KVM_AMD && X86_64
 	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
 	select ARCH_HAS_CC_PLATFORM
-	select KVM_GENERIC_PRIVATE_MEM
+	select KVM_GENERIC_GMEM_POPULATE
 	select HAVE_KVM_ARCH_GMEM_PREPARE
 	select HAVE_KVM_ARCH_GMEM_INVALIDATE
 	help
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d6900995725d..7ca23837fa52 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2533,7 +2533,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
 #endif
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 /**
  * kvm_gmem_populate() - Populate/prepare a GPA range with guest data
  *
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 49df4e32bff7..559c93ad90be 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -116,7 +116,7 @@ config KVM_GMEM
 	select XARRAY_MULTI
 	bool
 
-config KVM_GENERIC_PRIVATE_MEM
+config KVM_GENERIC_GMEM_POPULATE
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_GMEM
 	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..befea51bbc75 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -638,7 +638,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
 {
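
For context on what the renamed option gates: kvm_gmem_populate() walks
npages of guest_memfd-backed memory starting at start_gfn and invokes a
per-page callback; SEV-SNP's launch-update path is an in-tree user, per the
KVM_AMD_SEV Kconfig hunk above. The sketch below is illustrative only;
demo_post_populate() and demo_populate() are hypothetical, and the callback
prototype is assumed to follow the kernel's kvm_gmem_populate_cb typedef.

        /*
         * Hypothetical per-page callback: invoked for each populated page
         * with the target gfn/pfn and the userspace source, if any.
         */
        static int demo_post_populate(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn,
                                      void __user *src, int order, void *opaque)
        {
                /* e.g. copy in, then measure and/or encrypt the page here */
                return 0;
        }

        static long demo_populate(struct kvm *kvm, gfn_t start_gfn,
                                  void __user *src, long npages)
        {
                return kvm_gmem_populate(kvm, start_gfn, src, npages,
                                         demo_post_populate, NULL);
        }
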
From patchwork Thu Jun 5 15:37:45 2025
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:45 +0100
Message-ID: <20250605153800.557144-4-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Subject: [PATCH v11 03/18] KVM: Rename kvm_arch_has_private_mem() to kvm_arch_supports_gmem()

The function kvm_arch_has_private_mem() indicates whether an
architecture supports guest_memfd. Until now, this support implied the
memory was strictly private. To decouple guest_memfd support from
memory privacy, rename this function to kvm_arch_supports_gmem().

Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 8 ++++----
 arch/x86/kvm/mmu/mmu.c          | 8 ++++----
 include/linux/kvm_host.h        | 6 +++---
 virt/kvm/kvm_main.c             | 6 +++---
 4 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 52f6f6d08558..4a83fbae7056 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2254,9 +2254,9 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_has_private_mem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
 #else
-#define kvm_arch_has_private_mem(kvm) false
+#define kvm_arch_supports_gmem(kvm) false
 #endif
 
 #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
@@ -2309,8 +2309,8 @@ enum {
 #define HF_SMM_INSIDE_NMI_MASK	(1 << 2)
 
 # define KVM_MAX_NR_ADDRESS_SPACES	2
-/* SMM is currently unsupported for guests with private memory. */
-# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_has_private_mem(kvm) ? 1 : 2)
+/* SMM is currently unsupported for guests with guest_memfd (esp private) memory. */
+# define kvm_arch_nr_memslot_as_ids(kvm) (kvm_arch_supports_gmem(kvm) ? 1 : 2)
 # define kvm_arch_vcpu_memslots_id(vcpu) ((vcpu)->arch.hflags & HF_SMM_MASK ? 1 : 0)
 # define kvm_memslots_for_spte_role(kvm, role) __kvm_memslots(kvm, (role).smm)
 #else
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 8d1b632e33d2..b66f1bf24e06 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4917,7 +4917,7 @@ long kvm_arch_vcpu_pre_fault_memory(struct kvm_vcpu *vcpu,
 	if (r)
 		return r;
 
-	if (kvm_arch_has_private_mem(vcpu->kvm) &&
+	if (kvm_arch_supports_gmem(vcpu->kvm) &&
 	    kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(range->gpa)))
 		error_code |= PFERR_PRIVATE_ACCESS;
 
@@ -7705,7 +7705,7 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 	 * Zapping SPTEs in this case ensures KVM will reassess whether or not
 	 * a hugepage can be used for affected ranges.
 	 */
-	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+	if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
 		return false;
 
 	if (WARN_ON_ONCE(range->end <= range->start))
@@ -7784,7 +7784,7 @@ bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 	 * a range that has PRIVATE GFNs, and conversely converting a range to
 	 * SHARED may now allow hugepages.
 	 */
-	if (WARN_ON_ONCE(!kvm_arch_has_private_mem(kvm)))
+	if (WARN_ON_ONCE(!kvm_arch_supports_gmem(kvm)))
 		return false;
 
 	/*
@@ -7840,7 +7840,7 @@ void kvm_mmu_init_memslot_memory_attributes(struct kvm *kvm,
 {
 	int level;
 
-	if (!kvm_arch_has_private_mem(kvm))
+	if (!kvm_arch_supports_gmem(kvm))
 		return;
 
 	for (level = PG_LEVEL_2M; level <= KVM_MAX_HUGEPAGE_LEVEL; level++) {
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 7ca23837fa52..6ca7279520cf 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -719,11 +719,11 @@ static inline int kvm_arch_vcpu_memslots_id(struct kvm_vcpu *vcpu)
 #endif
 
 /*
- * Arch code must define kvm_arch_has_private_mem if support for private memory
+ * Arch code must define kvm_arch_supports_gmem if support for guest_memfd
  * is enabled.
  */
-#if !defined(kvm_arch_has_private_mem) && !IS_ENABLED(CONFIG_KVM_GMEM)
-static inline bool kvm_arch_has_private_mem(struct kvm *kvm)
+#if !defined(kvm_arch_supports_gmem) && !IS_ENABLED(CONFIG_KVM_GMEM)
+static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
 {
 	return false;
 }
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 4996cac41a8f..2468d50a9ed4 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -1531,7 +1531,7 @@ static int check_memory_region_flags(struct kvm *kvm,
 {
 	u32 valid_flags = KVM_MEM_LOG_DIRTY_PAGES;
 
-	if (kvm_arch_has_private_mem(kvm))
+	if (kvm_arch_supports_gmem(kvm))
 		valid_flags |= KVM_MEM_GUEST_MEMFD;
 
 	/* Dirty logging private memory is not currently supported. */
@@ -2362,7 +2362,7 @@ static int kvm_vm_ioctl_clear_dirty_log(struct kvm *kvm,
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
 static u64 kvm_supported_mem_attributes(struct kvm *kvm)
 {
-	if (!kvm || kvm_arch_has_private_mem(kvm))
+	if (!kvm || kvm_arch_supports_gmem(kvm))
 		return KVM_MEMORY_ATTRIBUTE_PRIVATE;
 
 	return 0;
@@ -4844,7 +4844,7 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #endif
 #ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
-		return !kvm || kvm_arch_has_private_mem(kvm);
+		return !kvm || kvm_arch_supports_gmem(kvm);
 #endif
 	default:
 		break;
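
The renamed predicate makes two questions separable at call sites: whether
the VM supports guest_memfd at all, and whether a given gfn is currently
private. A minimal sketch of that distinction, using only the helpers
visible in the hunks above; demo_gfn_is_private() itself is hypothetical
and not part of the series.

        /* Illustration only: gmem support and per-gfn privacy are distinct. */
        static bool demo_gfn_is_private(struct kvm *kvm, gfn_t gfn)
        {
                /* The VM must support guest_memfd at all... */
                if (!kvm_arch_supports_gmem(kvm))
                        return false;

                /* ...and the gfn must currently carry the PRIVATE attribute. */
                return kvm_mem_is_private(kvm, gfn);
        }
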
From patchwork Thu Jun 5 15:37:46 2025
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:46 +0100
Message-ID: <20250605153800.557144-5-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Subject: [PATCH v11 04/18] KVM: x86: Rename kvm->arch.has_private_mem to kvm->arch.supports_gmem

The bool has_private_mem is used to indicate whether guest_memfd is
supported. Rename it to supports_gmem to make its meaning clearer and
to decouple memory being private from guest_memfd.
Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/svm/svm.c          | 4 ++--
 arch/x86/kvm/x86.c              | 3 +--
 4 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4a83fbae7056..709cc2a7ba66 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1331,7 +1331,7 @@ struct kvm_arch {
 	unsigned int indirect_shadow_pages;
 	u8 mmu_valid_gen;
 	u8 vm_type;
-	bool has_private_mem;
+	bool supports_gmem;
 	bool has_protected_state;
 	bool pre_fault_allowed;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
@@ -2254,7 +2254,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
 #else
 #define kvm_arch_supports_gmem(kvm) false
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index b66f1bf24e06..69bf2ef22ed0 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3486,7 +3486,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault
 	 * on RET_PF_SPURIOUS until the update completes, or an actual spurious
 	 * case might go down the slow path. Either case will resolve itself.
 	 */
-	if (kvm->arch.has_private_mem &&
+	if (kvm->arch.supports_gmem &&
 	    fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
 		return false;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index a89c271a1951..a05b7dc7b717 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5110,8 +5110,8 @@ static int svm_vm_init(struct kvm *kvm)
 			(type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
 		to_kvm_sev_info(kvm)->need_init = true;
 
-		kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
-		kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+		kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
+		kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
 	}
 
 	if (!pause_filter_count || !pause_filter_thresh)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index be7bb6d20129..035ced06b2dd 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12718,8 +12718,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return -EINVAL;
 
 	kvm->arch.vm_type = type;
-	kvm->arch.has_private_mem =
-		(type == KVM_X86_SW_PROTECTED_VM);
+	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
 	/* Decided by the vendor code for other VM types. */
 	kvm->arch.pre_fault_allowed =
 		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
From patchwork Thu Jun 5 15:37:47 2025
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:47 +0100
Message-ID: <20250605153800.557144-6-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Subject: [PATCH v11 05/18] KVM: Rename kvm_slot_can_be_private() to kvm_slot_has_gmem()

The function kvm_slot_can_be_private() is used to check whether a
memory slot is backed by guest_memfd. Rename it to kvm_slot_has_gmem()
to make that clearer and to decouple memory being private from
guest_memfd.
Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/mmu/mmu.c   | 4 ++--
 arch/x86/kvm/svm/sev.c   | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 69bf2ef22ed0..2b6376986f96 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3283,7 +3283,7 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_can_be_private(slot) &&
+	bool is_private = kvm_slot_has_gmem(slot) &&
 			  kvm_mem_is_private(kvm, gfn);
 
 	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
@@ -4496,7 +4496,7 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 {
 	int max_order, r;
 
-	if (!kvm_slot_can_be_private(fault->slot)) {
+	if (!kvm_slot_has_gmem(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return -EFAULT;
 	}
diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
index a7a7dc507336..27759ca6d2f2 100644
--- a/arch/x86/kvm/svm/sev.c
+++ b/arch/x86/kvm/svm/sev.c
@@ -2378,7 +2378,7 @@ static int snp_launch_update(struct kvm *kvm, struct kvm_sev_cmd *argp)
 	mutex_lock(&kvm->slots_lock);
 
 	memslot = gfn_to_memslot(kvm, params.gfn_start);
-	if (!kvm_slot_can_be_private(memslot)) {
+	if (!kvm_slot_has_gmem(memslot)) {
 		ret = -EINVAL;
 		goto out;
 	}
@@ -4688,7 +4688,7 @@ void sev_handle_rmp_fault(struct kvm_vcpu *vcpu, gpa_t gpa, u64 error_code)
 	}
 
 	slot = gfn_to_memslot(kvm, gfn);
-	if (!kvm_slot_can_be_private(slot)) {
+	if (!kvm_slot_has_gmem(slot)) {
 		pr_warn_ratelimited("SEV: Unexpected RMP fault, non-private slot for GPA 0x%llx\n",
 				    gpa);
 		return;
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6ca7279520cf..d9616ee6acc7 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -614,7 +614,7 @@ struct kvm_memory_slot {
 #endif
 };
 
-static inline bool kvm_slot_can_be_private(const struct kvm_memory_slot *slot)
+static inline bool kvm_slot_has_gmem(const struct kvm_memory_slot *slot)
 {
 	return slot && (slot->flags & KVM_MEM_GUEST_MEMFD);
 }
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index befea51bbc75..6db515833f61 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -654,7 +654,7 @@ long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long
 		return -EINVAL;
 
 	slot = gfn_to_memslot(kvm, start_gfn);
-	if (!kvm_slot_can_be_private(slot))
+	if (!kvm_slot_has_gmem(slot))
 		return -EINVAL;
 
 	file = kvm_gmem_get_file(slot);
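
After the rename, slot-level and gfn-level checks compose as in the
kvm_mmu_max_mapping_level() hunk above: a slot having a guest_memfd binding
is necessary but not sufficient for treating an access as private. A
sketch, illustrative only; demo_is_private_fault() is hypothetical.

        /* Illustration only: combine the slot-level and gfn-level checks. */
        static bool demo_is_private_fault(struct kvm *kvm,
                                          const struct kvm_memory_slot *slot,
                                          gfn_t gfn)
        {
                /* Slot is bound to a guest_memfd and the gfn is private. */
                return kvm_slot_has_gmem(slot) && kvm_mem_is_private(kvm, gfn);
        }
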
From patchwork Thu Jun 5 15:37:48 2025
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
    kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:48 +0100
Message-ID: <20250605153800.557144-7-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Subject: [PATCH v11 06/18] KVM: Fix comments that refer to slots_lock

Fix comments so that they refer to slots_lock instead of slots_locks
(remove trailing s).

Reviewed-by: David Hildenbrand
Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 2 +-
 virt/kvm/kvm_main.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index d9616ee6acc7..ae70e4e19700 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -859,7 +859,7 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
-	/* Protected by slots_locks (for writes) and RCU (for reads) */
+	/* Protected by slots_lock (for writes) and RCU (for reads) */
 	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 2468d50a9ed4..6289ea1685dd 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -333,7 +333,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * All current use cases for flushing the TLBs for a specific memslot
 	 * are related to dirty logging, and many do the TLB flush out of
 	 * mmu_lock. The interaction between the various operations on memslot
-	 * must be serialized by slots_locks to ensure the TLB flush from one
+	 * must be serialized by slots_lock to ensure the TLB flush from one
 	 * operation is observed by any other operation on the same memslot.
 	 */
 	lockdep_assert_held(&kvm->slots_lock);
Date: Thu, 5 Jun 2025 16:37:49 +0100
Message-ID: <20250605153800.557144-8-tabba@google.com>
Subject: [PATCH v11 07/18] KVM: Fix comment that refers to kvm uapi header path
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev

The comment that points to the path where the user-visible memslot flags
are defined refers to an outdated path and has a typo. Update the comment
to refer to the correct path.

Reviewed-by: David Hildenbrand
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index ae70e4e19700..80371475818f 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -52,7 +52,7 @@
 /*
  * The bit 16 ~ bit 31 of kvm_userspace_memory_region::flags are internally
  * used in kvm, other bits are visible for userspace which are defined in
- * include/linux/kvm_h.
+ * include/uapi/linux/kvm.h.
  */
 #define KVM_MEMSLOT_INVALID	(1UL << 16)

From patchwork Thu Jun 5 15:37:50 2025
Date: Thu, 5 Jun 2025 16:37:50 +0100
Message-ID: <20250605153800.557144-9-tabba@google.com>
Subject: [PATCH v11 08/18] KVM: guest_memfd: Allow host to map guest_memfd pages
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev

This patch enables support for shared memory in guest_memfd, including
mapping that memory from host userspace. This functionality is gated by
the KVM_GMEM_SHARED_MEM Kconfig option, and enabled for a given
guest_memfd instance by the GUEST_MEMFD_FLAG_SUPPORT_SHARED flag at
creation time.

Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 13 +++++++
 include/uapi/linux/kvm.h |  1 +
 virt/kvm/Kconfig         |  4 +++
 virt/kvm/guest_memfd.c   | 76 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 94 insertions(+)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 80371475818f..640ce714cfb2 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm)
 }
 #endif
 
+/*
+ * Returns true if this VM supports shared mem in guest_memfd.
+ *
+ * Arch code must define kvm_arch_supports_gmem_shared_mem if support for
+ * guest_memfd is enabled.
+ */
+#if !defined(kvm_arch_supports_gmem_shared_mem)
+static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm)
+{
+	return false;
+}
+#endif
+
 #ifndef kvm_arch_has_readonly_mem
 static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm)
 {
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index b6ae8ad8934b..c2714c9d1a0e 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1566,6 +1566,7 @@ struct kvm_memory_attributes {
 #define KVM_MEMORY_ATTRIBUTE_PRIVATE	(1ULL << 3)
 
 #define KVM_CREATE_GUEST_MEMFD	_IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd)
+#define GUEST_MEMFD_FLAG_SUPPORT_SHARED	(1ULL << 0)
 
 struct kvm_create_guest_memfd {
 	__u64 size;
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 559c93ad90be..e90884f74404 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -128,3 +128,7 @@ config HAVE_KVM_ARCH_GMEM_PREPARE
 config HAVE_KVM_ARCH_GMEM_INVALIDATE
 	bool
 	depends on KVM_GMEM
+
+config KVM_GMEM_SHARED_MEM
+	select KVM_GMEM
+	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 6db515833f61..7a158789d1df 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -312,7 +312,79 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn)
 	return gfn - slot->base_gfn + slot->gmem.pgoff;
 }
 
+static bool kvm_gmem_supports_shared(struct inode *inode)
+{
+	u64 flags;
+
+	if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
+		return false;
+
+	flags = (u64)inode->i_private;
+
+	return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+}
+
+static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf)
+{
+	struct inode *inode = file_inode(vmf->vma->vm_file);
+	struct folio *folio;
+	vm_fault_t ret = VM_FAULT_LOCKED;
+
+	if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode))
+		return VM_FAULT_SIGBUS;
+
+	folio = kvm_gmem_get_folio(inode, vmf->pgoff);
+	if (IS_ERR(folio)) {
+		int err = PTR_ERR(folio);
+
+		if (err == -EAGAIN)
+			return VM_FAULT_RETRY;
+
+		return vmf_error(err);
+	}
+
+	if (WARN_ON_ONCE(folio_test_large(folio))) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_folio;
+	}
+
+	if (!folio_test_uptodate(folio)) {
+		clear_highpage(folio_page(folio, 0));
+		kvm_gmem_mark_prepared(folio);
+	}
+
+	vmf->page = folio_file_page(folio, vmf->pgoff);
+
+out_folio:
+	if (ret != VM_FAULT_LOCKED) {
+		folio_unlock(folio);
+		folio_put(folio);
+	}
+
+	return ret;
+}
+
+static const struct vm_operations_struct kvm_gmem_vm_ops = {
+	.fault = kvm_gmem_fault_shared,
+};
+
+static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma)
+{
+	if (!kvm_gmem_supports_shared(file_inode(file)))
+		return -ENODEV;
+
+	if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) !=
+	    (VM_SHARED | VM_MAYSHARE)) {
+		return -EINVAL;
+	}
+
+	vma->vm_ops = &kvm_gmem_vm_ops;
+
+	return 0;
+}
+
 static struct file_operations kvm_gmem_fops = {
+	.mmap		= kvm_gmem_mmap,
 	.open		= generic_file_open,
 	.release	= kvm_gmem_release,
 	.fallocate	= kvm_gmem_fallocate,
@@ -428,6 +500,7 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	}
 
 	file->f_flags |= O_LARGEFILE;
+	allow_write_access(file);
 
 	inode = file->f_inode;
 	WARN_ON(file->f_mapping != inode->i_mapping);
@@ -463,6 +536,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	u64 flags = args->flags;
 	u64 valid_flags = 0;
 
+	if (kvm_arch_supports_gmem_shared_mem(kvm))
+		valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
 	if (flags & ~valid_flags)
 		return -EINVAL;
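As a brief aside, here is a minimal userspace sketch of how a VMM could
exercise the new uAPI: create a guest_memfd that supports shared memory,
then map it with mmap(). It is illustrative only; map_gmem_shared() and
its error handling are my own assumptions, and it presumes a kernel with
this series applied and a VM type whose arch code opts in.

/*
 * Illustrative sketch, not part of this series: create a shared-capable
 * guest_memfd for an existing VM fd and map it into the host.
 */
#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

static void *map_gmem_shared(int vm_fd, uint64_t size)
{
	struct kvm_create_guest_memfd gmem = {
		.size	= size,
		.flags	= GUEST_MEMFD_FLAG_SUPPORT_SHARED,
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	if (gmem_fd < 0)
		return NULL;

	/* MAP_SHARED is mandatory: kvm_gmem_mmap() rejects private mappings. */
	return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
}

Note that faults on such a mapping go through kvm_gmem_fault_shared()
above, so only small (order-0) folios are expected for now.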
From patchwork Thu Jun 5 15:37:51 2025
Date: Thu, 5 Jun 2025 16:37:51 +0100
Message-ID: <20250605153800.557144-10-tabba@google.com>
Subject: [PATCH v11 09/18] KVM: guest_memfd: Track shared memory support in memslot
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev

Add a new flag in the top half of memslot->flags, the half reserved for
internal use in KVM, to track when a guest_memfd-backed slot supports
shared memory. This avoids repeatedly checking the underlying guest_memfd
file for shared memory support, which requires taking a reference on the
file.

Suggested-by: David Hildenbrand
Signed-off-by: Fuad Tabba
Acked-by: David Hildenbrand
Reviewed-by: Gavin Shan
---
 include/linux/kvm_host.h | 11 ++++++++++-
 virt/kvm/guest_memfd.c   |  2 ++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 640ce714cfb2..6326d1ad8225 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -54,7 +54,8 @@
  * used in kvm, other bits are visible for userspace which are defined in
  * include/uapi/linux/kvm.h.
  */
-#define KVM_MEMSLOT_INVALID	(1UL << 16)
+#define KVM_MEMSLOT_INVALID			(1UL << 16)
+#define KVM_MEMSLOT_SUPPORTS_GMEM_SHARED	(1UL << 17)
 
 /*
  * Bit 63 of the memslot generation number is an "update in-progress flag",
@@ -2502,6 +2503,14 @@ static inline void kvm_prepare_memory_fault_exit(struct kvm_vcpu *vcpu,
 		vcpu->run->memory_fault.flags |= KVM_MEMORY_EXIT_FLAG_PRIVATE;
 }
 
+static inline bool kvm_gmem_memslot_supports_shared(const struct kvm_memory_slot *slot)
+{
+	if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM))
+		return false;
+
+	return slot->flags & KVM_MEMSLOT_SUPPORTS_GMEM_SHARED;
+}
+
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
 static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
 {
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 7a158789d1df..e0fa49699e05 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -595,6 +595,8 @@ int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 	 */
 	WRITE_ONCE(slot->gmem.file, file);
 	slot->gmem.pgoff = start;
+	if (kvm_gmem_supports_shared(inode))
+		slot->flags |= KVM_MEMSLOT_SUPPORTS_GMEM_SHARED;
 
 	xa_store_range(&gmem->bindings, start, end - 1, slot, GFP_KERNEL);
 	filemap_invalidate_unlock(inode->i_mapping);
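As a side note on the design choice: without the cached flag, every check
would have to look up and pin the backing file. The following hypothetical
slow path (my own sketch for contrast, assuming a helper that takes a
reference on the slot's guest_memfd file) shows what caching avoids:

/*
 * Hypothetical slow path, for contrast only: querying the backing file
 * directly means taking and dropping a file reference on every check.
 */
static bool slot_supports_shared_slow(struct kvm_memory_slot *slot)
{
	struct file *file = kvm_gmem_get_file(slot);	/* assumed helper; takes a ref */
	bool ret;

	if (!file)
		return false;

	ret = kvm_gmem_supports_shared(file_inode(file));
	fput(file);
	return ret;
}

Caching the result in slot->flags at bind time turns this into a single
bit test via kvm_gmem_memslot_supports_shared().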
From patchwork Thu Jun 5 15:37:52 2025

Date: Thu, 5 Jun 2025 16:37:52 +0100
Message-ID: <20250605153800.557144-11-tabba@google.com>
Subject: [PATCH v11 10/18] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
From: Ackerley Tng

For memslots backed by guest_memfd with shared memory support, the KVM
MMU must always fault in pages from guest_memfd, and not from the host
userspace_addr. Update the fault handler to do so.

This patch also refactors related function names for accuracy:

kvm_mem_is_private() returns true only when the current private/shared
state (in the CoCo sense) of the memory is private, and returns false if
the current state is shared explicitly or implicitly, e.g., if it belongs
to a non-CoCo VM.

kvm_mmu_faultin_pfn_gmem() is updated to indicate that it can be used to
fault in not just private memory, but more generally, from guest_memfd.

Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Ackerley Tng
Co-developed-by: Fuad Tabba
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/mmu/mmu.c   | 38 +++++++++++++++++++++++---------------
 include/linux/kvm_host.h | 25 +++++++++++++++++++++++--
 2 files changed, 46 insertions(+), 17 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 2b6376986f96..5b7df2905aa9 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3289,6 +3289,11 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm,
 	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
 }
 
+static inline bool fault_from_gmem(struct kvm_page_fault *fault)
+{
+	return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot);
+}
+
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 {
 	struct kvm_memory_slot *slot = fault->slot;
@@ -4465,21 +4470,25 @@ static inline u8 kvm_max_level_for_order(int order)
 	return PG_LEVEL_4K;
 }
 
-static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn,
-					u8 max_level, int gmem_order)
+static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
+					    struct kvm_page_fault *fault,
+					    int order)
 {
-	u8 req_max_level;
+	u8 max_level = fault->max_level;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	max_level = min(kvm_max_level_for_order(gmem_order), max_level);
+	max_level = min(kvm_max_level_for_order(order), max_level);
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn);
-	if (req_max_level)
-		max_level = min(max_level, req_max_level);
+	if (fault->is_private) {
+		u8 level = kvm_x86_call(private_max_mapping_level)(kvm, fault->pfn);
+
+		if (level)
+			max_level = min(max_level, level);
+	}
 
 	return max_level;
 }
@@ -4491,10 +4500,10 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu,
 				      r == RET_PF_RETRY, fault->map_writable);
 }
 
-static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
-				       struct kvm_page_fault *fault)
+static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu,
+				    struct kvm_page_fault *fault)
 {
-	int max_order, r;
+	int gmem_order, r;
 
 	if (!kvm_slot_has_gmem(fault->slot)) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
@@ -4502,15 +4511,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu,
 	}
 
 	r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn,
-			     &fault->refcounted_page, &max_order);
+			     &fault->refcounted_page, &gmem_order);
 	if (r) {
 		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
 		return r;
 	}
 
 	fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY);
-	fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn,
-							 fault->max_level, max_order);
+	fault->max_level =
+		kvm_max_level_for_fault_and_order(vcpu->kvm, fault, gmem_order);
 
 	return RET_PF_CONTINUE;
 }
@@ -4520,8 +4528,8 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu,
 {
 	unsigned int foll = fault->write ? FOLL_WRITE : 0;
 
-	if (fault->is_private)
-		return kvm_mmu_faultin_pfn_private(vcpu, fault);
+	if (fault_from_gmem(fault))
+		return kvm_mmu_faultin_pfn_gmem(vcpu, fault);
 
 	foll |= FOLL_NOWAIT;
 	fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll,
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 6326d1ad8225..c1c76794b25a 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2524,10 +2524,31 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm,
 bool kvm_arch_post_set_memory_attributes(struct kvm *kvm,
 					 struct kvm_gfn_range *range);
 
+/*
+ * Returns true if the given gfn's private/shared status (in the CoCo sense) is
+ * private.
+ *
+ * A return value of false indicates that the gfn is explicitly or implicitly
+ * shared (i.e., non-CoCo VMs).
+ */
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
-	return IS_ENABLED(CONFIG_KVM_GMEM) &&
-	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
+	struct kvm_memory_slot *slot;
+
+	if (!IS_ENABLED(CONFIG_KVM_GMEM))
+		return false;
+
+	slot = gfn_to_memslot(kvm, gfn);
+	if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) {
+		/*
+		 * Without in-place conversion support, if a guest_memfd memslot
+		 * supports shared memory, then all the slot's memory is
+		 * considered not private, i.e., implicitly shared.
+		 */
+		return false;
+	}
+
+	return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
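To summarize the new semantics in one place, here is a condensed decision
sketch (an illustrative pseudo-helper of my own, not code from the patch)
of what kvm_mem_is_private() now computes:

/*
 * Illustrative condensation of the new kvm_mem_is_private() logic. The
 * parameters are stand-ins for the real queries used in the hunk above.
 */
static bool mem_is_private_sketch(bool gmem_enabled, bool slot_has_gmem,
				  bool slot_supports_shared, bool attr_private)
{
	if (!gmem_enabled)
		return false;		/* no guest_memfd support at all */

	if (slot_has_gmem && slot_supports_shared)
		return false;		/* implicitly shared: no in-place conversion yet */

	return attr_private;		/* otherwise, fall back to memory attributes */
}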
From patchwork Thu Jun 5 15:37:53 2025

Date: Thu, 5 Jun 2025 16:37:53 +0100
Message-ID: <20250605153800.557144-12-tabba@google.com>
Subject: [PATCH v11 11/18] KVM: x86: Consult guest_memfd when computing max_mapping_level
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
From: Ackerley Tng

This patch adds kvm_gmem_max_mapping_level(), which always returns
PG_LEVEL_4K since guest_memfd only supports 4K pages for now.

When guest_memfd supports shared memory, max_mapping_level (especially
when recovering huge pages - see the call to
__kvm_mmu_max_mapping_level() from recover_huge_pages_range()) should
take input from guest_memfd.

Input from guest_memfd should be taken in these cases:

+ if the memslot supports shared memory (guest_memfd is used for shared
  memory, or in the future for both shared and private memory), or
+ if the memslot is only used for private memory and that gfn is private.

If the memslot doesn't use guest_memfd, figure out the max_mapping_level
using the host page tables like before.

This patch also refactors and inlines the other call to
__kvm_mmu_max_mapping_level(). In kvm_mmu_hugepage_adjust(),
guest_memfd's input is already provided (if applicable) in
fault->max_level. Hence, there is no need to query guest_memfd.
lpage_info is queried like before, and then, if the fault is not from
guest_memfd, fault->req_level is adjusted based on input from the host
page tables.

Signed-off-by: Ackerley Tng
Co-developed-by: Fuad Tabba
Signed-off-by: Fuad Tabba
Acked-by: David Hildenbrand
---
 arch/x86/kvm/mmu/mmu.c   | 87 +++++++++++++++++++++++++---------------
 include/linux/kvm_host.h | 11 +++++
 virt/kvm/guest_memfd.c   | 12 ++++++
 3 files changed, 78 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 5b7df2905aa9..9e0bc8114859 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3256,12 +3256,11 @@ static int host_pfn_mapping_level(struct kvm *kvm, gfn_t gfn,
 	return level;
 }
 
-static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
-				       const struct kvm_memory_slot *slot,
-				       gfn_t gfn, int max_level, bool is_private)
+static int kvm_lpage_info_max_mapping_level(struct kvm *kvm,
+					    const struct kvm_memory_slot *slot,
+					    gfn_t gfn, int max_level)
 {
 	struct kvm_lpage_info *linfo;
-	int host_level;
 
 	max_level = min(max_level, max_huge_page_level);
 	for ( ; max_level > PG_LEVEL_4K; max_level--) {
@@ -3270,28 +3269,61 @@ static int __kvm_mmu_max_mapping_level(struct kvm *kvm,
 			break;
 	}
 
-	if (is_private)
-		return max_level;
+	return max_level;
+}
+
+static inline u8 kvm_max_level_for_order(int order)
+{
+	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
+
+	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
+			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
+		return PG_LEVEL_1G;
+
+	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
+		return PG_LEVEL_2M;
+
+	return PG_LEVEL_4K;
+}
+
+static inline int kvm_gmem_max_mapping_level(const struct kvm_memory_slot *slot,
+					     gfn_t gfn, int max_level)
+{
+	int max_order;
 
 	if (max_level == PG_LEVEL_4K)
 		return PG_LEVEL_4K;
 
-	host_level = host_pfn_mapping_level(kvm, gfn, slot);
-	return min(host_level, max_level);
+	max_order = kvm_gmem_mapping_order(slot, gfn);
+	return min(max_level, kvm_max_level_for_order(max_order));
 }
 
 int kvm_mmu_max_mapping_level(struct kvm *kvm,
 			      const struct kvm_memory_slot *slot, gfn_t gfn)
 {
-	bool is_private = kvm_slot_has_gmem(slot) &&
-			  kvm_mem_is_private(kvm, gfn);
+	int max_level;
 
-	return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private);
+	max_level = kvm_lpage_info_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM);
+	if (max_level == PG_LEVEL_4K)
+		return PG_LEVEL_4K;
+
+	if (kvm_slot_has_gmem(slot) &&
+	    (kvm_gmem_memslot_supports_shared(slot) ||
+	     kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE)) {
+		return kvm_gmem_max_mapping_level(slot, gfn, max_level);
+	}
+
+	return min(max_level, host_pfn_mapping_level(kvm, gfn, slot));
 }
 
 static inline bool fault_from_gmem(struct kvm_page_fault *fault)
 {
-	return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot);
+	return fault->is_private ||
+	       (kvm_slot_has_gmem(fault->slot) &&
+		kvm_gmem_memslot_supports_shared(fault->slot));
 }
 
 void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
@@ -3314,12 +3346,20 @@ void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
 	 * Enforce the iTLB multihit workaround after capturing the requested
 	 * level, which will be used to do precise, accurate accounting.
 	 */
-	fault->req_level = __kvm_mmu_max_mapping_level(vcpu->kvm, slot,
-						       fault->gfn, fault->max_level,
-						       fault->is_private);
+	fault->req_level = kvm_lpage_info_max_mapping_level(vcpu->kvm, slot,
+							    fault->gfn, fault->max_level);
 	if (fault->req_level == PG_LEVEL_4K ||
 	    fault->huge_page_disallowed)
 		return;
 
+	if (!fault_from_gmem(fault)) {
+		int host_level;
+
+		host_level = host_pfn_mapping_level(vcpu->kvm, fault->gfn, slot);
+		fault->req_level = min(fault->req_level, host_level);
+		if (fault->req_level == PG_LEVEL_4K)
+			return;
+	}
+
 	/*
 	 * mmu_invalidate_retry() was successful and mmu_lock is held, so
 	 * the pmd can't be split from under us.
@@ -4453,23 +4493,6 @@ void kvm_arch_async_page_ready(struct kvm_vcpu *vcpu, struct kvm_async_pf *work)
 	vcpu->stat.pf_fixed++;
 }
 
-static inline u8 kvm_max_level_for_order(int order)
-{
-	BUILD_BUG_ON(KVM_MAX_HUGEPAGE_LEVEL > PG_LEVEL_1G);
-
-	KVM_MMU_WARN_ON(order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M) &&
-			order != KVM_HPAGE_GFN_SHIFT(PG_LEVEL_4K));
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_1G))
-		return PG_LEVEL_1G;
-
-	if (order >= KVM_HPAGE_GFN_SHIFT(PG_LEVEL_2M))
-		return PG_LEVEL_2M;
-
-	return PG_LEVEL_4K;
-}
-
 static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm,
 					    struct kvm_page_fault *fault,
 					    int order)
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index c1c76794b25a..d55d870b354d 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2551,6 +2551,10 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 	return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
 }
 #else
+static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
+{
+	return 0;
+}
 static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 {
 	return false;
@@ -2561,6 +2565,7 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
 int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 		     gfn_t gfn, kvm_pfn_t *pfn, struct page **page,
 		     int *max_order);
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn);
 #else
 static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 				   struct kvm_memory_slot *slot, gfn_t gfn,
@@ -2570,6 +2575,12 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 	KVM_BUG_ON(1, kvm);
 	return -EIO;
 }
+static inline int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot,
+					 gfn_t gfn)
+{
+	BUG();
+	return 0;
+}
 #endif /* CONFIG_KVM_GMEM */
 
 #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index e0fa49699e05..b07e38fd91f5 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -716,6 +716,18 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
+/*
+ * Returns the mapping order for this @gfn in @slot.
+ *
+ * This is equal to max_order that would be returned if kvm_gmem_get_pfn() were
+ * called now.
+ */
+int kvm_gmem_mapping_order(const struct kvm_memory_slot *slot, gfn_t gfn)
+{
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kvm_gmem_mapping_order);
+
 #ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
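To make the order-to-level translation concrete: with 4K base pages, each
x86 page-table level spans 9 more GFN bits, so KVM_HPAGE_GFN_SHIFT() is
0/9/18 for 4K/2M/1G. Here is a stand-alone sketch of the same mapping
(my own constants and names, mirroring kvm_max_level_for_order() above):

/*
 * Illustrative stand-alone version of the order-to-level mapping; the
 * shift constants assume x86 with 4K base pages.
 */
enum level_sketch { LVL_4K = 1, LVL_2M = 2, LVL_1G = 3 };

static enum level_sketch level_for_order(int order)
{
	if (order >= 18)	/* 2^18 pages * 4K = 1G */
		return LVL_1G;
	if (order >= 9)		/* 2^9 pages * 4K = 2M */
		return LVL_2M;
	return LVL_4K;		/* order 0: all guest_memfd folios today */
}

Since kvm_gmem_mapping_order() currently always returns 0,
kvm_gmem_max_mapping_level() resolves to PG_LEVEL_4K, matching the commit
message.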
From patchwork Thu Jun 5 15:37:54 2025

Date: Thu, 5 Jun 2025 16:37:54 +0100
Message-ID: <20250605153800.557144-13-tabba@google.com>
Subject: [PATCH v11 12/18] KVM: x86: Enable guest_memfd shared memory for SW-protected VMs
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev

Define the architecture-specific macro to enable shared memory support
in guest_memfd for relevant software-only VM types, specifically
KVM_X86_DEFAULT_VM and KVM_X86_SW_PROTECTED_VM.

Enable the KVM_GMEM_SHARED_MEM Kconfig option if KVM_SW_PROTECTED_VM
is enabled.
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 10 ++++++++++
 arch/x86/kvm/Kconfig            |  1 +
 arch/x86/kvm/x86.c              |  3 ++-
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 709cc2a7ba66..ce9ad4cd93c5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -2255,8 +2255,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_KVM_GMEM
 #define kvm_arch_supports_gmem(kvm)  ((kvm)->arch.supports_gmem)
+
+/*
+ * CoCo VMs with hardware support that use guest_memfd only for backing private
+ * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled.
+ */
+#define kvm_arch_supports_gmem_shared_mem(kvm)			\
+	(IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) &&		\
+	 ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM ||	\
+	  (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM))
 #else
 #define kvm_arch_supports_gmem(kvm) false
+#define kvm_arch_supports_gmem_shared_mem(kvm) false
 #endif
 
 #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state)
diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index b37258253543..fdf24b50af9d 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -47,6 +47,7 @@ config KVM_X86
 	select KVM_GENERIC_HARDWARE_ENABLING
 	select KVM_GENERIC_PRE_FAULT_MEMORY
 	select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
+	select KVM_GMEM_SHARED_MEM if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR
 
 config KVM
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 035ced06b2dd..2a02f2457c42 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12718,7 +12718,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return -EINVAL;
 
 	kvm->arch.vm_type = type;
-	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
+	kvm->arch.supports_gmem =
+		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
 	/* Decided by the vendor code for other VM types.
 	 */
 	kvm->arch.pre_fault_allowed =
 		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
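Putting patches 08 and 12 together, an x86 VMM could now do the following
end to end. This is a hypothetical sketch with error handling elided; it
assumes a kernel with this series and CONFIG_KVM_SW_PROTECTED_VM enabled,
and the function name is my own:

/*
 * Hypothetical end-to-end sketch: create a SW-protected VM, then a
 * shared-capable guest_memfd for it. Returns the guest_memfd fd.
 */
#include <fcntl.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>

static int sw_protected_vm_with_shared_gmem(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);
	int vm_fd = ioctl(kvm_fd, KVM_CREATE_VM, KVM_X86_SW_PROTECTED_VM);
	struct kvm_create_guest_memfd gmem = {
		.size	= 2UL << 20,	/* 2 MiB */
		.flags	= GUEST_MEMFD_FLAG_SUPPORT_SHARED,
	};

	/* kvm_gmem_create() rejects the flag for VM types that don't opt in. */
	return ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
}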
From patchwork Thu Jun 5 15:37:55 2025
Subject: [PATCH v11 13/18] KVM: arm64: Refactor user_mem_abort()
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:55 +0100
Message-ID: <20250605153800.557144-14-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>

To simplify the code and make the assumptions clearer, refactor
user_mem_abort() by immediately setting force_pte to true if the
conditions are met. Remove the comment about logging_active being
guaranteed to never be true for VM_PFNMAP memslots, since it is not
actually correct. Move code that will be reused in the following patch
into separate functions, and apply a few other small tidy-ups.

No functional change intended.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c | 100 ++++++++++++++++++++++++-------------------
 1 file changed, 55 insertions(+), 45 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index eeda92330ade..ce80be116a30 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1466,13 +1466,56 @@ static bool kvm_vma_mte_allowed(struct vm_area_struct *vma)
 	return vma->vm_flags & VM_MTE_ALLOWED;
 }

+static int prepare_mmu_memcache(struct kvm_vcpu *vcpu, bool topup_memcache,
+				void **memcache)
+{
+	int min_pages;
+
+	if (!is_protected_kvm_enabled())
+		*memcache = &vcpu->arch.mmu_page_cache;
+	else
+		*memcache = &vcpu->arch.pkvm_memcache;
+
+	if (!topup_memcache)
+		return 0;
+
+	min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
+
+	if (!is_protected_kvm_enabled())
+		return kvm_mmu_topup_memory_cache(*memcache, min_pages);
+
+	return topup_hyp_memcache(*memcache, min_pages);
+}
+
+/*
+ * Potentially reduce shadow S2 permissions to match the guest's own S2. For
+ * exec faults, we'd only reach this point if the guest actually allowed it (see
+ * kvm_s2_handle_perm_fault).
+ *
+ * Also encode the level of the original translation in the SW bits of the leaf
+ * entry as a proxy for the span of that translation. This will be retrieved on
+ * TLB invalidation from the guest and used to limit the invalidation scope if a
+ * TTL hint or a range isn't provided.
+ */
+static void adjust_nested_fault_perms(struct kvm_s2_trans *nested,
+				      enum kvm_pgtable_prot *prot,
+				      bool *writable)
+{
+	*writable &= kvm_s2_trans_writable(nested);
+	if (!kvm_s2_trans_readable(nested))
+		*prot &= ~KVM_PGTABLE_PROT_R;
+
+	*prot |= kvm_encode_nested_level(nested);
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
 			  bool fault_is_perm)
 {
 	int ret = 0;
-	bool write_fault, writable, force_pte = false;
+	bool topup_memcache;
+	bool write_fault, writable;
 	bool exec_fault, mte_allowed;
 	bool device = false, vfio_allow_any_uc = false;
 	unsigned long mmu_seq;
@@ -1484,6 +1527,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	gfn_t gfn;
 	kvm_pfn_t pfn;
 	bool logging_active = memslot_is_logging(memslot);
+	bool force_pte = logging_active || is_protected_kvm_enabled();
 	long vma_pagesize, fault_granule;
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
@@ -1501,28 +1545,16 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}

-	if (!is_protected_kvm_enabled())
-		memcache = &vcpu->arch.mmu_page_cache;
-	else
-		memcache = &vcpu->arch.pkvm_memcache;
-
 	/*
 	 * Permission faults just need to update the existing leaf entry,
 	 * and so normally don't require allocations from the memcache. The
 	 * only exception to this is when dirty logging is enabled at runtime
 	 * and a write fault needs to collapse a block entry into a table.
 	 */
-	if (!fault_is_perm || (logging_active && write_fault)) {
-		int min_pages = kvm_mmu_cache_min_pages(vcpu->arch.hw_mmu);
-
-		if (!is_protected_kvm_enabled())
-			ret = kvm_mmu_topup_memory_cache(memcache, min_pages);
-		else
-			ret = topup_hyp_memcache(memcache, min_pages);
-
-		if (ret)
-			return ret;
-	}
+	topup_memcache = !fault_is_perm || (logging_active && write_fault);
+	ret = prepare_mmu_memcache(vcpu, topup_memcache, &memcache);
+	if (ret)
+		return ret;

 	/*
 	 * Let's check if we will get back a huge page backed by hugetlbfs, or
@@ -1536,16 +1568,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		return -EFAULT;
 	}

-	/*
-	 * logging_active is guaranteed to never be true for VM_PFNMAP
-	 * memslots.
-	 */
-	if (logging_active || is_protected_kvm_enabled()) {
-		force_pte = true;
+	if (force_pte)
 		vma_shift = PAGE_SHIFT;
-	} else {
+	else
 		vma_shift = get_vma_page_shift(vma, hva);
-	}

 	switch (vma_shift) {
 #ifndef __PAGETABLE_PMD_FOLDED
@@ -1597,7 +1623,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			max_map_size = PAGE_SIZE;

 		force_pte = (max_map_size == PAGE_SIZE);
-		vma_pagesize = min(vma_pagesize, (long)max_map_size);
+		vma_pagesize = min_t(long, vma_pagesize, max_map_size);
 	}

 	/*
@@ -1626,7 +1652,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	 * Rely on mmap_read_unlock() for an implicit smp_rmb(), which pairs
 	 * with the smp_wmb() in kvm_mmu_invalidate_end().
 	 */
-	mmu_seq = vcpu->kvm->mmu_invalidate_seq;
+	mmu_seq = kvm->mmu_invalidate_seq;
 	mmap_read_unlock(current->mm);

 	pfn = __kvm_faultin_pfn(memslot, gfn, write_fault ? FOLL_WRITE : 0,
@@ -1661,24 +1687,8 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	if (exec_fault && device)
 		return -ENOEXEC;

-	/*
-	 * Potentially reduce shadow S2 permissions to match the guest's own
-	 * S2. For exec faults, we'd only reach this point if the guest
-	 * actually allowed it (see kvm_s2_handle_perm_fault).
-	 *
-	 * Also encode the level of the original translation in the SW bits
-	 * of the leaf entry as a proxy for the span of that translation.
-	 * This will be retrieved on TLB invalidation from the guest and
-	 * used to limit the invalidation scope if a TTL hint or a range
-	 * isn't provided.
-	 */
-	if (nested) {
-		writable &= kvm_s2_trans_writable(nested);
-		if (!kvm_s2_trans_readable(nested))
-			prot &= ~KVM_PGTABLE_PROT_R;
-
-		prot |= kvm_encode_nested_level(nested);
-	}
+	if (nested)
+		adjust_nested_fault_perms(nested, &prot, &writable);

 	kvm_fault_lock(kvm);
 	pgt = vcpu->arch.hw_mmu->pgt;
From patchwork Thu Jun 5 15:37:56 2025
Subject: [PATCH v11 14/18] KVM: arm64: Handle guest_memfd-backed guest page faults
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:56 +0100
Message-ID: <20250605153800.557144-15-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>

Add arm64 support for handling guest page faults on guest_memfd-backed
memslots. Until guest_memfd supports huge pages, the fault granule is
restricted to PAGE_SIZE.
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 arch/arm64/kvm/mmu.c | 93 ++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 90 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index ce80be116a30..f14925fe6144 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1508,6 +1508,89 @@ static void adjust_nested_fault_perms(struct kvm_s2_trans *nested,
 	*prot |= kvm_encode_nested_level(nested);
 }

+#define KVM_PGTABLE_WALK_MEMABORT_FLAGS (KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED)
+
+static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
+		      struct kvm_s2_trans *nested,
+		      struct kvm_memory_slot *memslot, bool is_perm)
+{
+	bool logging, write_fault, exec_fault, writable;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;
+	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
+	struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt;
+	struct page *page;
+	struct kvm *kvm = vcpu->kvm;
+	void *memcache;
+	kvm_pfn_t pfn;
+	gfn_t gfn;
+	int ret;
+
+	ret = prepare_mmu_memcache(vcpu, !is_perm, &memcache);
+	if (ret)
+		return ret;
+
+	if (nested)
+		gfn = kvm_s2_trans_output(nested) >> PAGE_SHIFT;
+	else
+		gfn = fault_ipa >> PAGE_SHIFT;
+
+	logging = memslot_is_logging(memslot);
+	write_fault = kvm_is_write_fault(vcpu);
+	exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu);
+
+	if (write_fault && exec_fault) {
+		kvm_err("Simultaneous write and execution fault\n");
+		return -EFAULT;
+	}
+
+	if (is_perm && !write_fault && !exec_fault) {
+		kvm_err("Unexpected L2 read permission error\n");
+		return -EFAULT;
+	}
+
+	ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL);
+	if (ret) {
+		kvm_prepare_memory_fault_exit(vcpu, fault_ipa, PAGE_SIZE,
+					      write_fault, exec_fault, false);
+		return ret;
+	}
+
+	writable = !(memslot->flags & KVM_MEM_READONLY) &&
+		   (!logging || write_fault);
+
+	if (nested)
+		adjust_nested_fault_perms(nested, &prot, &writable);
+
+	if (writable)
+		prot |= KVM_PGTABLE_PROT_W;
+
+	if (exec_fault ||
+	    (cpus_have_final_cap(ARM64_HAS_CACHE_DIC) &&
+	     (!nested || kvm_s2_trans_executable(nested))))
+		prot |= KVM_PGTABLE_PROT_X;
+
+	kvm_fault_lock(kvm);
+	if (is_perm) {
+		/*
+		 * Drop the SW bits in favour of those stored in the
+		 * PTE, which will be preserved.
+		 */
+		prot &= ~KVM_NV_GUEST_MAP_SZ;
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_relax_perms)(pgt, fault_ipa, prot, flags);
+	} else {
+		ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, PAGE_SIZE,
+							 __pfn_to_phys(pfn), prot,
+							 memcache, flags);
+	}
+	kvm_release_faultin_page(kvm, page, !!ret, writable);
+	kvm_fault_unlock(kvm);
+
+	if (writable && !ret)
+		mark_page_dirty_in_slot(kvm, memslot, gfn);
+
+	return ret != -EAGAIN ? ret : 0;
+}
+
 static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 			  struct kvm_s2_trans *nested,
 			  struct kvm_memory_slot *memslot, unsigned long hva,
@@ -1532,7 +1615,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 	enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R;
 	struct kvm_pgtable *pgt;
 	struct page *page;
-	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED;
+	enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS;

 	if (fault_is_perm)
 		fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu);
@@ -1959,8 +2042,12 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu)
 		goto out_unlock;
 	}

-	ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
-			     esr_fsc_is_permission_fault(esr));
+	if (kvm_slot_has_gmem(memslot))
+		ret = gmem_abort(vcpu, fault_ipa, nested, memslot,
+				 esr_fsc_is_permission_fault(esr));
+	else
+		ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva,
+				     esr_fsc_is_permission_fault(esr));
 	if (ret == 0)
 		ret = 1;
 out:
From patchwork Thu Jun 5 15:37:57 2025
Subject: [PATCH v11 15/18] KVM: arm64: Enable host mapping of shared guest_memfd memory
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:57 +0100
Message-ID: <20250605153800.557144-16-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>

Enable the host mapping of guest_memfd-backed memory on arm64.
This applies to all current arm64 VM types that support guest_memfd.
Future VM types can restrict this behavior via the
kvm_arch_supports_gmem_shared_mem() hook if needed.

Acked-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: James Houghton <jthoughton@google.com>
---
 arch/arm64/include/asm/kvm_host.h | 5 +++++
 arch/arm64/kvm/Kconfig            | 1 +
 arch/arm64/kvm/mmu.c              | 7 +++++++
 3 files changed, 13 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 08ba91e6fb03..8add94929711 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -1593,4 +1593,9 @@ static inline bool kvm_arch_has_irq_bypass(void)
 	return true;
 }

+#ifdef CONFIG_KVM_GMEM
+#define kvm_arch_supports_gmem(kvm) true
+#define kvm_arch_supports_gmem_shared_mem(kvm) IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)
+#endif
+
 #endif /* __ARM64_KVM_HOST_H__ */

diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index 096e45acadb2..8c1e1964b46a 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -38,6 +38,7 @@ menuconfig KVM
 	select HAVE_KVM_VCPU_RUN_PID_CHANGE
 	select SCHED_INFO
 	select GUEST_PERF_EVENTS if PERF_EVENTS
+	select KVM_GMEM_SHARED_MEM
 	help
 	  Support hosting virtualized guest machines.

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index f14925fe6144..19aca1442bbf 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -2281,6 +2281,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
 	if ((new->base_gfn + new->npages) > (kvm_phys_size(&kvm->arch.mmu) >> PAGE_SHIFT))
 		return -EFAULT;

+	/*
+	 * Only support guest_memfd backed memslots with shared memory, since
+	 * there aren't any CoCo VMs that support only private memory on arm64.
+	 */
+	if (kvm_slot_has_gmem(new) && !kvm_gmem_memslot_supports_shared(new))
+		return -EINVAL;
+
 	hva = new->userspace_addr;
 	reg_end = hva + (new->npages << PAGE_SHIFT);
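To illustrate the new kvm_arch_prepare_memory_region() check, the sketch below shows the userspace-visible failure mode. It is not part of this series: it assumes the GUEST_MEMFD_FLAG_SUPPORT_SHARED flag introduced later in the series, takes vm_fd to be an open KVM VM fd, and elides error handling.

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Sketch: bind a non-shareable guest_memfd to an arm64 memslot. */
static int bind_private_gmem(int vm_fd)
{
	struct kvm_create_guest_memfd gmem = {
		.size  = 0x10000,
		.flags = 0,	/* GUEST_MEMFD_FLAG_SUPPORT_SHARED not set */
	};
	int gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);

	struct kvm_userspace_memory_region2 region = {
		.slot            = 0,
		.flags           = KVM_MEM_GUEST_MEMFD,
		.guest_phys_addr = 0,
		.memory_size     = 0x10000,
		.guest_memfd     = gmem_fd,
	};

	/* With this patch, arm64 is expected to fail this with EINVAL. */
	return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION2, &region);
}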
From patchwork Thu Jun 5 15:37:58 2025
Subject: [PATCH v11 16/18] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:58 +0100
Message-ID: <20250605153800.557144-17-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM, which indicates
that guest_memfd supports shared memory (when enabled by the
GUEST_MEMFD_FLAG_SUPPORT_SHARED flag). This support is limited to
certain VM types, determined per architecture.

Also update the KVM documentation with details on the new capability,
the flag, and other information about shared memory support in
guest_memfd.

Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 Documentation/virt/kvm/api.rst | 9 +++++++++
 include/uapi/linux/kvm.h       | 1 +
 virt/kvm/kvm_main.c            | 4 ++++
 3 files changed, 14 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 47c7c3f92314..59f994a99481 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6390,6 +6390,15 @@ most one mapping per page, i.e. binding multiple memory regions to a single
 guest_memfd range is not allowed (any number of memory regions can be bound to
 a single guest_memfd file, but the bound ranges must not overlap).

+When the capability KVM_CAP_GMEM_SHARED_MEM is supported, the 'flags' field
+supports GUEST_MEMFD_FLAG_SUPPORT_SHARED. Setting this flag on guest_memfd
+creation enables mmap() and faulting of guest_memfd memory to host userspace.
+
+When the KVM MMU performs a PFN lookup to service a guest fault and the backing
+guest_memfd has the GUEST_MEMFD_FLAG_SUPPORT_SHARED set, then the fault will
+always be consumed from guest_memfd, regardless of whether it is a shared or a
+private fault.
+
 See KVM_SET_USER_MEMORY_REGION2 for additional details.

 4.143 KVM_PRE_FAULT_MEMORY

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c2714c9d1a0e..5aa85d34a29a 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -930,6 +930,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_X86_APIC_BUS_CYCLES_NS 237
 #define KVM_CAP_X86_GUEST_MODE 238
 #define KVM_CAP_ARM_WRITABLE_IMP_ID_REGS 239
+#define KVM_CAP_GMEM_SHARED_MEM 240

 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6289ea1685dd..64ed4da70d2f 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4845,6 +4845,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_supports_gmem(kvm);
+#endif
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+	case KVM_CAP_GMEM_SHARED_MEM:
+		return !kvm || kvm_arch_supports_gmem_shared_mem(kvm);
 #endif
 	default:
 		break;
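Putting the capability and the flag together, the intended userspace flow looks roughly as follows. This is a sketch under the assumptions of this series (capability and flag values as defined in the diff above), not a definitive VMM implementation, and it elides error handling.

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

/* Sketch: create a shareable guest_memfd and map it into the host. */
static void *map_shared_gmem(int vm_fd, size_t size)
{
	struct kvm_create_guest_memfd gmem = {
		.size  = size,
		.flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED,
	};
	int gmem_fd;
	void *mem;

	if (!ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_GMEM_SHARED_MEM))
		return NULL;	/* this VM type cannot share guest_memfd */

	gmem_fd = ioctl(vm_fd, KVM_CREATE_GUEST_MEMFD, &gmem);
	if (gmem_fd < 0)
		return NULL;

	/* With the flag set, mmap() of guest_memfd is expected to succeed. */
	mem = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, gmem_fd, 0);
	return mem == MAP_FAILED ? NULL : mem;
}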
From patchwork Thu Jun 5 15:37:59 2025
Subject: [PATCH v11 17/18] KVM: selftests: Don't use hardcoded page sizes in guest_memfd test
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:37:59 +0100
Message-ID: <20250605153800.557144-18-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>

Using hardcoded page-size values could cause the test to fail on
systems with larger pages, e.g., arm64 with 64kB pages. Use
getpagesize() instead.

Also, build the guest_memfd selftest for arm64.
Suggested-by: Gavin Shan <gshan@redhat.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 tools/testing/selftests/kvm/Makefile.kvm       |  1 +
 tools/testing/selftests/kvm/guest_memfd_test.c | 11 ++++++-----
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f62b0a5aba35..845fcaf8b6c9 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -164,6 +164,7 @@ TEST_GEN_PROGS_arm64 += arch_timer
 TEST_GEN_PROGS_arm64 += coalesced_io_test
 TEST_GEN_PROGS_arm64 += dirty_log_perf_test
 TEST_GEN_PROGS_arm64 += get-reg-list
+TEST_GEN_PROGS_arm64 += guest_memfd_test
 TEST_GEN_PROGS_arm64 += memslot_modification_stress_test
 TEST_GEN_PROGS_arm64 += memslot_perf_test
 TEST_GEN_PROGS_arm64 += mmu_stress_test

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..341ba616cf55 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -146,24 +146,25 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 {
 	int fd1, fd2, ret;
 	struct stat st1, st2;
+	size_t page_size = getpagesize();

-	fd1 = __vm_create_guest_memfd(vm, 4096, 0);
+	fd1 = __vm_create_guest_memfd(vm, page_size, 0);
 	TEST_ASSERT(fd1 != -1, "memfd creation should succeed");
 	ret = fstat(fd1, &st1);
 	TEST_ASSERT(ret != -1, "memfd fstat should succeed");
-	TEST_ASSERT(st1.st_size == 4096, "memfd st_size should match requested size");
+	TEST_ASSERT(st1.st_size == page_size, "memfd st_size should match requested size");

-	fd2 = __vm_create_guest_memfd(vm, 8192, 0);
+	fd2 = __vm_create_guest_memfd(vm, page_size * 2, 0);
 	TEST_ASSERT(fd2 != -1, "memfd creation should succeed");
 	ret = fstat(fd2, &st2);
 	TEST_ASSERT(ret != -1, "memfd fstat should succeed");
-	TEST_ASSERT(st2.st_size == 8192, "second memfd st_size should match requested size");
+	TEST_ASSERT(st2.st_size == page_size * 2, "second memfd st_size should match requested size");

 	ret = fstat(fd1, &st1);
 	TEST_ASSERT(ret != -1, "memfd fstat should succeed");
-	TEST_ASSERT(st1.st_size == 4096, "first memfd st_size should still match requested size");
+	TEST_ASSERT(st1.st_size == page_size, "first memfd st_size should still match requested size");
 	TEST_ASSERT(st1.st_ino != st2.st_ino,
 		    "different memfd should have different inode numbers");

 	close(fd2);
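For context on why the hardcoded values break: getpagesize() is 4096 on most x86 hosts but 65536 on a 64kB-page arm64 host, where a 4096-byte guest_memfd is not page-aligned and creation fails with EINVAL. A trivial sketch, not part of this patch:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* 4096 on most x86 hosts, 65536 on 64kB-page arm64 hosts. */
	int page_size = getpagesize();

	printf("one page = %d bytes, two pages = %d bytes\n",
	       page_size, 2 * page_size);
	return 0;
}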
From patchwork Thu Jun 5 15:38:00 2025
Subject: [PATCH v11 18/18] KVM: selftests: guest_memfd mmap() test when mapping is allowed
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Date: Thu, 5 Jun 2025 16:38:00 +0100
Message-ID: <20250605153800.557144-19-tabba@google.com>
In-Reply-To: <20250605153800.557144-1-tabba@google.com>
Expand the guest_memfd selftests to cover mapping guest memory into
host userspace for VM types that support it.

Co-developed-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Ackerley Tng <ackerleytng@google.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
---
 .../testing/selftests/kvm/guest_memfd_test.c | 201 ++++++++++++++++--
 1 file changed, 180 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 341ba616cf55..1612d3adcd0d 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -13,6 +13,8 @@
 #include
 #include
+#include <setjmp.h>
+#include <signal.h>
 #include
 #include
 #include
@@ -34,12 +36,83 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }

-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_supported(int fd, size_t page_size, size_t total_size)
+{
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+	TEST_ASSERT(mem == MAP_FAILED, "Copy-on-write not allowed by guest_memfd.");
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() for shared guest memory should succeed.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed.");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(mem[i], 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	memset(mem, val, page_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap() should succeed.");
+}
+
+static sigjmp_buf jmpbuf;
+void fault_sigbus_handler(int signum)
+{
+	siglongjmp(jmpbuf, 1);
+}
+
+static void test_fault_overflow(int fd, size_t page_size, size_t total_size)
+{
+	struct sigaction sa_old, sa_new = {
+		.sa_handler = fault_sigbus_handler,
+	};
+	size_t map_size = total_size * 4;
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() for shared guest memory should succeed.");
+
+	sigaction(SIGBUS, &sa_new, &sa_old);
+	if (sigsetjmp(jmpbuf, 1) == 0) {
+		memset(mem, 0xaa, map_size);
+		TEST_ASSERT(false, "memset() should have triggered SIGBUS.");
+	}
+	sigaction(SIGBUS, &sa_old, NULL);
+
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(mem[i], val);
+
+	ret = munmap(mem, map_size);
+	TEST_ASSERT(!ret, "munmap() should succeed.");
+}
+
+static void test_mmap_not_supported(int fd, size_t page_size, size_t total_size)
 {
 	char *mem;

 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }

 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -120,26 +193,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
 	}
 }

-static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
+static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
+						  uint64_t guest_memfd_flags,
+						  size_t page_size)
 {
-	size_t page_size = getpagesize();
-	uint64_t flag;
 	size_t size;
 	int fd;

 	for (size = 1; size < page_size; size++) {
-		fd = __vm_create_guest_memfd(vm, size, 0);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
+		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
+		TEST_ASSERT(fd < 0 && errno == EINVAL,
 			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
 			    size);
 	}
-
-	for (flag = BIT(0); flag; flag <<= 1) {
-		fd = __vm_create_guest_memfd(vm, page_size, flag);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
-			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
-			    flag);
-	}
 }

 static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
@@ -171,30 +237,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }

-int main(int argc, char *argv[])
+static bool check_vm_type(unsigned long vm_type)
 {
-	size_t page_size;
+	/*
+	 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
+	 * support guest_memfd have that support for the default VM type.
+	 */
+	if (vm_type == VM_TYPE_DEFAULT)
+		return true;
+
+	return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
+}
+
+static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
+			   bool expect_mmap_allowed)
+{
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;

-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+	if (!check_vm_type(vm_type))
+		return;

 	page_size = getpagesize();
 	total_size = page_size * 4;

-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(vm_type);

-	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
+	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);

-	fd = vm_create_guest_memfd(vm, total_size, 0);
+	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);

 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (expect_mmap_allowed) {
+		test_mmap_supported(fd, page_size, total_size);
+		test_fault_overflow(fd, page_size, total_size);
+	} else {
+		test_mmap_not_supported(fd, page_size, total_size);
+	}
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);

 	close(fd);
+	kvm_vm_release(vm);
+}
+
+static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
+					    uint64_t expected_valid_flags)
+{
+	size_t page_size = getpagesize();
+	struct kvm_vm *vm;
+	uint64_t flag = 0;
+	int fd;
+
+	if (!check_vm_type(vm_type))
+		return;
+
+	vm = vm_create_barebones_type(vm_type);
+
+	for (flag = BIT(0); flag; flag <<= 1) {
+		fd = __vm_create_guest_memfd(vm, page_size, flag);
+
+		if (flag & expected_valid_flags) {
+			TEST_ASSERT(fd >= 0,
+				    "guest_memfd() with flag '0x%lx' should be valid",
+				    flag);
+			close(fd);
+		} else {
+			TEST_ASSERT(fd < 0 && errno == EINVAL,
+				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
+				    flag);
+		}
+	}
+
+	kvm_vm_release(vm);
+}
+
+static void test_gmem_flag_validity(void)
+{
+	uint64_t non_coco_vm_valid_flags = 0;
+
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
+	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
+
+#ifdef __x86_64__
+	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+	test_gmem_flag_validity();
+
+	test_with_type(VM_TYPE_DEFAULT, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
+			       true);
+	}
+
+#ifdef __x86_64__
+	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(KVM_X86_SW_PROTECTED_VM,
+			       GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
+	}
+#endif
 }