From patchwork Wed Jun 11 13:33:14 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 896722
Date: Wed, 11 Jun 2025 14:33:14 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-3-tabba@google.com>
Subject: [PATCH v12 02/18] KVM: Rename CONFIG_KVM_GENERIC_PRIVATE_MEM to
 CONFIG_KVM_GENERIC_GMEM_POPULATE
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
 kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au,
 anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com,
 aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk,
 brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org,
 xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com,
 jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com,
 isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz,
 vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name,
 david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com,
 liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com,
 steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com,
 quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com,
 quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com,
 quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com,
 yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org,
 will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk,
 shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com,
 jhubbard@nvidia.com, fvdl@google.com, hughd@google.com,
 jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com,
 ira.weiny@intel.com, tabba@google.com

The option KVM_GENERIC_PRIVATE_MEM enables populating a GPA range with
guest data. Rename it to KVM_GENERIC_GMEM_POPULATE to make its purpose
clearer.

Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/kvm/Kconfig     | 4 ++--
 include/linux/kvm_host.h | 2 +-
 virt/kvm/Kconfig         | 2 +-
 virt/kvm/guest_memfd.c   | 2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
index 2eeffcec5382..9151cd82adab 100644
--- a/arch/x86/kvm/Kconfig
+++ b/arch/x86/kvm/Kconfig
@@ -46,7 +46,7 @@ config KVM_X86
 	select HAVE_KVM_PM_NOTIFIER if PM
 	select KVM_GENERIC_HARDWARE_ENABLING
 	select KVM_GENERIC_PRE_FAULT_MEMORY
-	select KVM_GENERIC_PRIVATE_MEM if KVM_SW_PROTECTED_VM
+	select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM
 	select KVM_WERROR if WERROR
 
 config KVM
@@ -157,7 +157,7 @@ config KVM_AMD_SEV
 	depends on KVM_AMD && X86_64
 	depends on CRYPTO_DEV_SP_PSP && !(KVM_AMD=y && CRYPTO_DEV_CCP_DD=m)
 	select ARCH_HAS_CC_PLATFORM
-	select KVM_GENERIC_PRIVATE_MEM
+	select KVM_GENERIC_GMEM_POPULATE
 	select HAVE_KVM_ARCH_GMEM_PREPARE
 	select HAVE_KVM_ARCH_GMEM_INVALIDATE
 	help
diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index b2c415e81e2e..7700efc06e35 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -2556,7 +2556,7 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm,
 int kvm_arch_gmem_prepare(struct kvm *kvm, gfn_t gfn, kvm_pfn_t pfn, int max_order);
 #endif
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 /**
  * kvm_gmem_populate() - Populate/prepare a GPA range with guest data
  *
diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
index 49df4e32bff7..559c93ad90be 100644
--- a/virt/kvm/Kconfig
+++ b/virt/kvm/Kconfig
@@ -116,7 +116,7 @@ config KVM_GMEM
 	select XARRAY_MULTI
 	bool
 
-config KVM_GENERIC_PRIVATE_MEM
+config KVM_GENERIC_GMEM_POPULATE
 	select KVM_GENERIC_MEMORY_ATTRIBUTES
 	select KVM_GMEM
 	bool
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..befea51bbc75 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -638,7 +638,7 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot,
 }
 EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn);
 
-#ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM
+#ifdef CONFIG_KVM_GENERIC_GMEM_POPULATE
 long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages,
 		       kvm_gmem_populate_cb post_populate, void *opaque)
 {
From patchwork Wed Jun 11 13:33:16 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 896721
Date: Wed, 11 Jun 2025 14:33:16 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-5-tabba@google.com>
Subject: [PATCH v12 04/18] KVM: x86: Rename kvm->arch.has_private_mem to
 kvm->arch.supports_gmem
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
 kvmarm@lists.linux.dev

The bool has_private_mem is used to indicate whether guest_memfd is
supported. Rename it to supports_gmem to make its meaning clearer and to
decouple memory being private from guest_memfd.

Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Co-developed-by: David Hildenbrand
Signed-off-by: David Hildenbrand
Signed-off-by: Fuad Tabba
---
 arch/x86/include/asm/kvm_host.h | 4 ++--
 arch/x86/kvm/mmu/mmu.c          | 2 +-
 arch/x86/kvm/svm/svm.c          | 4 ++--
 arch/x86/kvm/x86.c              | 3 +--
 4 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3d69da6d2d9e..4bc50c1e21bd 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -1341,7 +1341,7 @@ struct kvm_arch {
 	unsigned int indirect_shadow_pages;
 	u8 mmu_valid_gen;
 	u8 vm_type;
-	bool has_private_mem;
+	bool supports_gmem;
 	bool has_protected_state;
 	bool pre_fault_allowed;
 	struct hlist_head mmu_page_hash[KVM_NUM_MMU_PAGES];
@@ -2270,7 +2270,7 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level,
 
 #ifdef CONFIG_KVM_GMEM
-#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.has_private_mem)
+#define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem)
 #else
 #define kvm_arch_supports_gmem(kvm) false
 #endif
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index e7ecf089780a..c4e10797610c 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -3488,7 +3488,7 @@ static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
 	 * on RET_PF_SPURIOUS until the update completes, or an actual spurious
 	 * case might go down the slow path. Either case will resolve itself.
 	 */
-	if (kvm->arch.has_private_mem &&
+	if (kvm->arch.supports_gmem &&
 	    fault->is_private != kvm_mem_is_private(kvm, fault->gfn))
 		return false;
 
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index ab9b947dbf4f..67ab05fd3517 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -5180,8 +5180,8 @@ static int svm_vm_init(struct kvm *kvm)
 			(type == KVM_X86_SEV_ES_VM || type == KVM_X86_SNP_VM);
 		to_kvm_sev_info(kvm)->need_init = true;
 
-		kvm->arch.has_private_mem = (type == KVM_X86_SNP_VM);
-		kvm->arch.pre_fault_allowed = !kvm->arch.has_private_mem;
+		kvm->arch.supports_gmem = (type == KVM_X86_SNP_VM);
+		kvm->arch.pre_fault_allowed = !kvm->arch.supports_gmem;
 	}
 
 	if (!pause_filter_count || !pause_filter_thresh)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index b58a74c1722d..401256ee817f 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -12778,8 +12778,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 		return -EINVAL;
 
 	kvm->arch.vm_type = type;
-	kvm->arch.has_private_mem =
-		(type == KVM_X86_SW_PROTECTED_VM);
+	kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM);
 	/* Decided by the vendor code for other VM types. */
 	kvm->arch.pre_fault_allowed =
 		type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;
From patchwork Wed Jun 11 13:33:18 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 896720
Date: Wed, 11 Jun 2025 14:33:18 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-7-tabba@google.com>
Subject: [PATCH v12 06/18] KVM: Fix comments that refer to slots_lock
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
 kvmarm@lists.linux.dev

Fix comments so that they refer to slots_lock instead of slots_locks
(remove trailing s).

Reviewed-by: David Hildenbrand
Reviewed-by: Ira Weiny
Reviewed-by: Gavin Shan
Reviewed-by: Shivank Garg
Reviewed-by: Vlastimil Babka
Signed-off-by: Fuad Tabba
---
 include/linux/kvm_host.h | 2 +-
 virt/kvm/kvm_main.c      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
index 76b85099da99..aec8e4182a65 100644
--- a/include/linux/kvm_host.h
+++ b/include/linux/kvm_host.h
@@ -859,7 +859,7 @@ struct kvm {
 	struct notifier_block pm_notifier;
 #endif
 #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
-	/* Protected by slots_locks (for writes) and RCU (for reads) */
+	/* Protected by slots_lock (for writes) and RCU (for reads) */
 	struct xarray mem_attr_array;
 #endif
 	char stats_id[KVM_STATS_NAME_SIZE];
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 6efbea208fa6..d41bcc6a78b0 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -331,7 +331,7 @@ void kvm_flush_remote_tlbs_memslot(struct kvm *kvm,
 	 * All current use cases for flushing the TLBs for a specific memslot
 	 * are related to dirty logging, and many do the TLB flush out of
 	 * mmu_lock. The interaction between the various operations on memslot
-	 * must be serialized by slots_locks to ensure the TLB flush from one
+	 * must be serialized by slots_lock to ensure the TLB flush from one
 	 * operation is observed by any other operation on the same memslot.
 	 */
 	lockdep_assert_held(&kvm->slots_lock);
ffacd0b85a97d-3a4e9252ba0so3883949f8f.0 for ; Wed, 11 Jun 2025 06:33:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1749648829; x=1750253629; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=GhEdb4XSRH5QZbPC7u52jNBRkWh7Zb0cdMH+s2fESHE=; b=l32OR9nx2ju7A4diBgIx6AbLrV6iGOBAxEYVnLem5V76oWCrxSBh45M76GHbFQ9oaG XSMWkvZlcqeMAIbZvqHubJ8TOPqo0sXsl2xQ/lJ9fgfDHqezwpepdOgtD6ZYcgFwvUPQ LZwwE6xqQN4zTqHgpUsrKs5zFBC5+m/cZowBPp9Txf/04GRE7VrUxtWa4dXoAQbM5Fuu ZM/+L3ZBDRPk6eJMgv4FkqUpn8EEm+wr4QlihX++nkRmelGTFluLQqFo20KhFZOX3Ej7 uuf6Pmtto0RSPFHjHesXTTaBW+0Fmc9QbToyDhD0CTByZ7igI6p6PAZMLQ94af1pOzW2 M3bg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1749648829; x=1750253629; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=GhEdb4XSRH5QZbPC7u52jNBRkWh7Zb0cdMH+s2fESHE=; b=cfeJGPvvdTtNAi2b268jTR6urjs9cFm/rq3m9GImKJNnkYkipn/I1SsFGFhqJt/w4H U2R6iexemBiXzHzysLchi2JCLuj3ZyzBVcIsdFYJkOHQREwA5jY3V72OfmtJSMpAK2VR f/0CGOc9iZ8UCWITl1E/VF8zyskVMpNe8AzTx530R9gPHpHfv7xeBdvIxq6tCFBsfjne 7TL12C2AcYDeiy8U23oEU/4UMOrdbPYrXB41VM9Mo5ZKBT72NqV2pIEcRuIsg24gE7YV Fy6cTkuep3Bim8o7FFmtpjV1+Szhw8oMB/Xw8ulmNZI+b1WFwCETjovAWKurhjd9l1uy Bcgg== X-Forwarded-Encrypted: i=1; AJvYcCVjzierMTRYWp0hH5wiodeftLbFIvUg9ymqxYylZtbBXwYMP9P7E6k1vQ77NQCPcz0F12EnQ/moCcEVIAiM@vger.kernel.org X-Gm-Message-State: AOJu0Yy8l5BOougC2qJkfOTKwi+ym40caMXoegvUszntA2a7cB03ErM+ Kex2zOCAowpj/VWxPAm6vdqzQnqZpO9mtVOkH3jQqWeyN9CUQQXkIUrzhTYsEhSIyYCvx6Pl93V YIw== X-Google-Smtp-Source: AGHT+IHEd86idQu16yNMsd/8yL8nqu5HVRSk9EU79tULRTu402KZNrREDQmVRg+XHZj/Hl7vzp0Q2cBjwQ== X-Received: from wma7.prod.google.com ([2002:a05:600c:8907:b0:441:aaa8:fb65]) (user=tabba job=prod-delivery.src-stubby-dispatcher) by 2002:a05:6000:1a87:b0:3a5:1c3c:8d8d with SMTP id 
ffacd0b85a97d-3a558af983fmr2481697f8f.55.1749648828461; Wed, 11 Jun 2025 06:33:48 -0700 (PDT) Date: Wed, 11 Jun 2025 14:33:20 +0100 In-Reply-To: <20250611133330.1514028-1-tabba@google.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: <20250611133330.1514028-1-tabba@google.com> X-Mailer: git-send-email 2.50.0.rc0.642.g800a2b2222-goog Message-ID: <20250611133330.1514028-9-tabba@google.com> Subject: [PATCH v12 08/18] KVM: guest_memfd: Allow host to map guest_memfd pages From: Fuad Tabba To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, 
tabba@google.com This patch enables support for shared memory in guest_memfd, including mapping that memory from host userspace. This functionality is gated by the KVM_GMEM_SHARED_MEM Kconfig option, and enabled for a given instance by the GUEST_MEMFD_FLAG_SUPPORT_SHARED flag at creation time. Reviewed-by: Gavin Shan Acked-by: David Hildenbrand Co-developed-by: Ackerley Tng Signed-off-by: Ackerley Tng Signed-off-by: Fuad Tabba Reviewed-by: Shivank Garg --- include/linux/kvm_host.h | 13 +++++++ include/uapi/linux/kvm.h | 1 + virt/kvm/Kconfig | 4 +++ virt/kvm/guest_memfd.c | 73 ++++++++++++++++++++++++++++++++++++++++ 4 files changed, 91 insertions(+) diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 9a6712151a74..6b63556ca150 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -729,6 +729,19 @@ static inline bool kvm_arch_supports_gmem(struct kvm *kvm) } #endif +/* + * Returns true if this VM supports shared mem in guest_memfd. + * + * Arch code must define kvm_arch_supports_gmem_shared_mem if support for + * guest_memfd is enabled. 
+ */ +#if !defined(kvm_arch_supports_gmem_shared_mem) +static inline bool kvm_arch_supports_gmem_shared_mem(struct kvm *kvm) +{ + return false; +} +#endif + #ifndef kvm_arch_has_readonly_mem static inline bool kvm_arch_has_readonly_mem(struct kvm *kvm) { diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h index d00b85cb168c..cb19150fd595 100644 --- a/include/uapi/linux/kvm.h +++ b/include/uapi/linux/kvm.h @@ -1570,6 +1570,7 @@ struct kvm_memory_attributes { #define KVM_MEMORY_ATTRIBUTE_PRIVATE (1ULL << 3) #define KVM_CREATE_GUEST_MEMFD _IOWR(KVMIO, 0xd4, struct kvm_create_guest_memfd) +#define GUEST_MEMFD_FLAG_SUPPORT_SHARED (1ULL << 0) struct kvm_create_guest_memfd { __u64 size; diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig index 559c93ad90be..e90884f74404 100644 --- a/virt/kvm/Kconfig +++ b/virt/kvm/Kconfig @@ -128,3 +128,7 @@ config HAVE_KVM_ARCH_GMEM_PREPARE config HAVE_KVM_ARCH_GMEM_INVALIDATE bool depends on KVM_GMEM + +config KVM_GMEM_SHARED_MEM + select KVM_GMEM + bool diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index 6db515833f61..06616b6b493b 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -312,7 +312,77 @@ static pgoff_t kvm_gmem_get_index(struct kvm_memory_slot *slot, gfn_t gfn) return gfn - slot->base_gfn + slot->gmem.pgoff; } +static bool kvm_gmem_supports_shared(struct inode *inode) +{ + const u64 flags = (u64)inode->i_private; + + if (!IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM)) + return false; + + return flags & GUEST_MEMFD_FLAG_SUPPORT_SHARED; +} + +static vm_fault_t kvm_gmem_fault_shared(struct vm_fault *vmf) +{ + struct inode *inode = file_inode(vmf->vma->vm_file); + struct folio *folio; + vm_fault_t ret = VM_FAULT_LOCKED; + + if (((loff_t)vmf->pgoff << PAGE_SHIFT) >= i_size_read(inode)) + return VM_FAULT_SIGBUS; + + folio = kvm_gmem_get_folio(inode, vmf->pgoff); + if (IS_ERR(folio)) { + int err = PTR_ERR(folio); + + if (err == -EAGAIN) + return VM_FAULT_RETRY; + + return vmf_error(err); + } + + 
if (WARN_ON_ONCE(folio_test_large(folio))) { + ret = VM_FAULT_SIGBUS; + goto out_folio; + } + + if (!folio_test_uptodate(folio)) { + clear_highpage(folio_page(folio, 0)); + kvm_gmem_mark_prepared(folio); + } + + vmf->page = folio_file_page(folio, vmf->pgoff); + +out_folio: + if (ret != VM_FAULT_LOCKED) { + folio_unlock(folio); + folio_put(folio); + } + + return ret; +} + +static const struct vm_operations_struct kvm_gmem_vm_ops = { + .fault = kvm_gmem_fault_shared, +}; + +static int kvm_gmem_mmap(struct file *file, struct vm_area_struct *vma) +{ + if (!kvm_gmem_supports_shared(file_inode(file))) + return -ENODEV; + + if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) != + (VM_SHARED | VM_MAYSHARE)) { + return -EINVAL; + } + + vma->vm_ops = &kvm_gmem_vm_ops; + + return 0; +} + static struct file_operations kvm_gmem_fops = { + .mmap = kvm_gmem_mmap, .open = generic_file_open, .release = kvm_gmem_release, .fallocate = kvm_gmem_fallocate, @@ -463,6 +533,9 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args) u64 flags = args->flags; u64 valid_flags = 0; + if (kvm_arch_supports_gmem_shared_mem(kvm)) + valid_flags |= GUEST_MEMFD_FLAG_SUPPORT_SHARED; + if (flags & ~valid_flags) return -EINVAL;

From patchwork Wed Jun 11 13:33:22 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 896718
Date: Wed, 11 Jun 2025 14:33:22 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-11-tabba@google.com>
Subject: [PATCH v12 10/18] KVM: x86/mmu: Handle guest page faults for guest_memfd with shared memory
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org,
kvmarm@lists.linux.dev Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com From: Ackerley Tng For memslots backed by guest_memfd with shared memory support, the KVM MMU must always fault in pages from guest_memfd, and not from the host userspace_addr. Update the fault handler to do so. This patch also refactors related function names for accuracy: kvm_mem_is_private() returns true only when the current private/shared state (in the CoCo sense) of the memory is private, and returns false if the current state is shared explicitly or implicitly, e.g., when the memory belongs to a non-CoCo VM. kvm_mmu_faultin_pfn_gmem() is updated to indicate that it can be used to fault in not just private memory, but more generally, from guest_memfd.
Co-developed-by: David Hildenbrand Signed-off-by: David Hildenbrand Signed-off-by: Ackerley Tng Co-developed-by: Fuad Tabba Signed-off-by: Fuad Tabba --- arch/x86/kvm/mmu/mmu.c | 38 +++++++++++++++++++++++--------------- include/linux/kvm_host.h | 25 +++++++++++++++++++++++-- 2 files changed, 46 insertions(+), 17 deletions(-) diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c index 75b7b02cfcb7..2aab5a00caee 100644 --- a/arch/x86/kvm/mmu/mmu.c +++ b/arch/x86/kvm/mmu/mmu.c @@ -3291,6 +3291,11 @@ int kvm_mmu_max_mapping_level(struct kvm *kvm, return __kvm_mmu_max_mapping_level(kvm, slot, gfn, PG_LEVEL_NUM, is_private); } +static inline bool fault_from_gmem(struct kvm_page_fault *fault) +{ + return fault->is_private || kvm_gmem_memslot_supports_shared(fault->slot); +} + void kvm_mmu_hugepage_adjust(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault) { struct kvm_memory_slot *slot = fault->slot; @@ -4467,21 +4472,25 @@ static inline u8 kvm_max_level_for_order(int order) return PG_LEVEL_4K; } -static u8 kvm_max_private_mapping_level(struct kvm *kvm, kvm_pfn_t pfn, - u8 max_level, int gmem_order) +static u8 kvm_max_level_for_fault_and_order(struct kvm *kvm, + struct kvm_page_fault *fault, + int order) { - u8 req_max_level; + u8 max_level = fault->max_level; if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; - max_level = min(kvm_max_level_for_order(gmem_order), max_level); + max_level = min(kvm_max_level_for_order(order), max_level); if (max_level == PG_LEVEL_4K) return PG_LEVEL_4K; - req_max_level = kvm_x86_call(private_max_mapping_level)(kvm, pfn); - if (req_max_level) - max_level = min(max_level, req_max_level); + if (fault->is_private) { + u8 level = kvm_x86_call(private_max_mapping_level)(kvm, fault->pfn); + + if (level) + max_level = min(max_level, level); + } return max_level; } @@ -4493,10 +4502,10 @@ static void kvm_mmu_finish_page_fault(struct kvm_vcpu *vcpu, r == RET_PF_RETRY, fault->map_writable); } -static int kvm_mmu_faultin_pfn_private(struct 
kvm_vcpu *vcpu, - struct kvm_page_fault *fault) +static int kvm_mmu_faultin_pfn_gmem(struct kvm_vcpu *vcpu, + struct kvm_page_fault *fault) { - int max_order, r; + int gmem_order, r; if (!kvm_slot_has_gmem(fault->slot)) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); @@ -4504,15 +4513,14 @@ static int kvm_mmu_faultin_pfn_private(struct kvm_vcpu *vcpu, } r = kvm_gmem_get_pfn(vcpu->kvm, fault->slot, fault->gfn, &fault->pfn, - &fault->refcounted_page, &max_order); + &fault->refcounted_page, &gmem_order); if (r) { kvm_mmu_prepare_memory_fault_exit(vcpu, fault); return r; } fault->map_writable = !(fault->slot->flags & KVM_MEM_READONLY); - fault->max_level = kvm_max_private_mapping_level(vcpu->kvm, fault->pfn, - fault->max_level, max_order); + fault->max_level = kvm_max_level_for_fault_and_order(vcpu->kvm, fault, gmem_order); return RET_PF_CONTINUE; } @@ -4522,8 +4530,8 @@ static int __kvm_mmu_faultin_pfn(struct kvm_vcpu *vcpu, { unsigned int foll = fault->write ? FOLL_WRITE : 0; - if (fault->is_private) - return kvm_mmu_faultin_pfn_private(vcpu, fault); + if (fault_from_gmem(fault)) + return kvm_mmu_faultin_pfn_gmem(vcpu, fault); foll |= FOLL_NOWAIT; fault->pfn = __kvm_faultin_pfn(fault->slot, fault->gfn, foll, diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index bba7d2c14177..8f7069385189 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -2547,10 +2547,31 @@ bool kvm_arch_pre_set_memory_attributes(struct kvm *kvm, bool kvm_arch_post_set_memory_attributes(struct kvm *kvm, struct kvm_gfn_range *range); +/* + * Returns true if the given gfn's private/shared status (in the CoCo sense) is + * private. + * + * A return value of false indicates that the gfn is explicitly or implicitly + * shared (i.e., non-CoCo VMs). 
+ */ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) { - return IS_ENABLED(CONFIG_KVM_GMEM) && - kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE; + struct kvm_memory_slot *slot; + + if (!IS_ENABLED(CONFIG_KVM_GMEM)) + return false; + + slot = gfn_to_memslot(kvm, gfn); + if (kvm_slot_has_gmem(slot) && kvm_gmem_memslot_supports_shared(slot)) { + /* + * Without in-place conversion support, if a guest_memfd memslot + * supports shared memory, then all the slot's memory is + * considered not private, i.e., implicitly shared. + */ + return false; + } + + return kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE; } #else static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)

From patchwork Wed Jun 11 13:33:24 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 896717
Date: Wed, 11 Jun 2025 14:33:24 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-13-tabba@google.com>
Subject: [PATCH v12 12/18] KVM: x86: Enable guest_memfd shared memory for non-CoCo VMs
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net,
vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com Define the architecture-specific macro to enable shared memory support in guest_memfd for ordinary, i.e., non-CoCo, VM types, specifically KVM_X86_DEFAULT_VM and KVM_X86_SW_PROTECTED_VM. Enable the KVM_GMEM_SHARED_MEM Kconfig option if KVM_SW_PROTECTED_VM is enabled. Co-developed-by: Ackerley Tng Signed-off-by: Ackerley Tng Signed-off-by: Fuad Tabba --- arch/x86/include/asm/kvm_host.h | 10 ++++++++++ arch/x86/kvm/Kconfig | 1 + arch/x86/kvm/x86.c | 3 ++- 3 files changed, 13 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 4bc50c1e21bd..7b9ccdd99f32 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -2271,8 +2271,18 @@ void kvm_configure_mmu(bool enable_tdp, int tdp_forced_root_level, #ifdef CONFIG_KVM_GMEM #define kvm_arch_supports_gmem(kvm) ((kvm)->arch.supports_gmem) + +/* + * CoCo VMs with hardware support that use guest_memfd only for backing private + * memory, e.g., TDX, cannot use guest_memfd with userspace mapping enabled. 
+ */ +#define kvm_arch_supports_gmem_shared_mem(kvm) \ + (IS_ENABLED(CONFIG_KVM_GMEM_SHARED_MEM) && \ + ((kvm)->arch.vm_type == KVM_X86_SW_PROTECTED_VM || \ + (kvm)->arch.vm_type == KVM_X86_DEFAULT_VM)) #else #define kvm_arch_supports_gmem(kvm) false +#define kvm_arch_supports_gmem_shared_mem(kvm) false #endif #define kvm_arch_has_readonly_mem(kvm) (!(kvm)->arch.has_protected_state) diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig index 9151cd82adab..29845a286430 100644 --- a/arch/x86/kvm/Kconfig +++ b/arch/x86/kvm/Kconfig @@ -47,6 +47,7 @@ config KVM_X86 select KVM_GENERIC_HARDWARE_ENABLING select KVM_GENERIC_PRE_FAULT_MEMORY select KVM_GENERIC_GMEM_POPULATE if KVM_SW_PROTECTED_VM + select KVM_GMEM_SHARED_MEM if KVM_SW_PROTECTED_VM select KVM_WERROR if WERROR config KVM diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 401256ee817f..e21f5f2fe059 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -12778,7 +12778,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type) return -EINVAL; kvm->arch.vm_type = type; - kvm->arch.supports_gmem = (type == KVM_X86_SW_PROTECTED_VM); + kvm->arch.supports_gmem = + type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM; /* Decided by the vendor code for other VM types. 
*/ kvm->arch.pre_fault_allowed = type == KVM_X86_DEFAULT_VM || type == KVM_X86_SW_PROTECTED_VM;

From patchwork Wed Jun 11 13:33:26 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 896716
Date: Wed, 11 Jun 2025 14:33:26 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-15-tabba@google.com>
Subject: [PATCH v12 14/18] KVM: arm64: Handle guest_memfd-backed guest page faults
From: Fuad Tabba
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com,
pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com Add arm64 support for handling guest page faults on guest_memfd backed memslots. Until guest_memfd supports huge pages, the fault granule is restricted to PAGE_SIZE. Reviewed-by: Gavin Shan Signed-off-by: Fuad Tabba Reviewed-by: James Houghton --- arch/arm64/kvm/mmu.c | 82 ++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 79 insertions(+), 3 deletions(-) diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c index 58662e0ef13e..71f8b53683e7 100644 --- a/arch/arm64/kvm/mmu.c +++ b/arch/arm64/kvm/mmu.c @@ -1512,6 +1512,78 @@ static void adjust_nested_fault_perms(struct kvm_s2_trans *nested, *prot |= kvm_encode_nested_level(nested); } +#define KVM_PGTABLE_WALK_MEMABORT_FLAGS (KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED) + +static int gmem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, + struct kvm_s2_trans *nested, + struct kvm_memory_slot *memslot, bool is_perm) +{ + bool write_fault, exec_fault, writable; + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS; + enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R; + struct kvm_pgtable *pgt = vcpu->arch.hw_mmu->pgt; + struct page *page; + struct kvm *kvm = vcpu->kvm; + void *memcache; + kvm_pfn_t pfn; + gfn_t gfn; + int ret; + + ret = prepare_mmu_memcache(vcpu, true, &memcache); + if (ret) + return ret; + + if (nested) + gfn = kvm_s2_trans_output(nested) >> PAGE_SHIFT; + else + gfn = fault_ipa >> PAGE_SHIFT; + + write_fault = kvm_is_write_fault(vcpu); + exec_fault = kvm_vcpu_trap_is_exec_fault(vcpu); + + if (write_fault && exec_fault) { + kvm_err("Simultaneous write and execution fault\n"); + return -EFAULT; + } + + if (is_perm && !write_fault && !exec_fault) { + kvm_err("Unexpected L2 read permission error\n"); + return -EFAULT; + } + + ret = kvm_gmem_get_pfn(kvm, memslot, gfn, &pfn, &page, NULL); + if (ret) { + kvm_prepare_memory_fault_exit(vcpu, fault_ipa, PAGE_SIZE, + write_fault, exec_fault, false); + return ret; 
+ } + + writable = !(memslot->flags & KVM_MEM_READONLY); + + if (nested) + adjust_nested_fault_perms(nested, &prot, &writable); + + if (writable) + prot |= KVM_PGTABLE_PROT_W; + + if (exec_fault || + (cpus_have_final_cap(ARM64_HAS_CACHE_DIC) && + (!nested || kvm_s2_trans_executable(nested)))) + prot |= KVM_PGTABLE_PROT_X; + + kvm_fault_lock(kvm); + ret = KVM_PGT_FN(kvm_pgtable_stage2_map)(pgt, fault_ipa, PAGE_SIZE, + __pfn_to_phys(pfn), prot, + memcache, flags); + kvm_release_faultin_page(kvm, page, !!ret, writable); + kvm_fault_unlock(kvm); + + if (writable && !ret) + mark_page_dirty_in_slot(kvm, memslot, gfn); + + return ret != -EAGAIN ? ret : 0; +} + static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, struct kvm_s2_trans *nested, struct kvm_memory_slot *memslot, unsigned long hva, @@ -1536,7 +1608,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa, enum kvm_pgtable_prot prot = KVM_PGTABLE_PROT_R; struct kvm_pgtable *pgt; struct page *page; - enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_HANDLE_FAULT | KVM_PGTABLE_WALK_SHARED; + enum kvm_pgtable_walk_flags flags = KVM_PGTABLE_WALK_MEMABORT_FLAGS; if (fault_is_perm) fault_granule = kvm_vcpu_trap_get_perm_fault_granule(vcpu); @@ -1963,8 +2035,12 @@ int kvm_handle_guest_abort(struct kvm_vcpu *vcpu) goto out_unlock; } - ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva, - esr_fsc_is_permission_fault(esr)); + if (kvm_slot_has_gmem(memslot)) + ret = gmem_abort(vcpu, fault_ipa, nested, memslot, + esr_fsc_is_permission_fault(esr)); + else + ret = user_mem_abort(vcpu, fault_ipa, nested, memslot, hva, + esr_fsc_is_permission_fault(esr)); if (ret == 0) ret = 1; out: From patchwork Wed Jun 11 13:33:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Fuad Tabba X-Patchwork-Id: 896715 Received: from mail-wm1-f73.google.com (mail-wm1-f73.google.com [209.85.128.73]) (using TLSv1.2 with cipher 
ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 0CFF228B7C6 for ; Wed, 11 Jun 2025 13:34:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.128.73 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1749648850; cv=none; b=cwrZMNee0Oo26Xflu51Y6mtmJg5L268CkWbPzJSynQfX9BG5+0x9SucISbFd/HlRCgl1KqwlNG4SoZu9Qca4AH2TS2ZgBbYusZPwgmMkYZvuPqP5MDDXJ3W+N4u0ARs8WV3fR0J+Ln5Svi+EHjLguw7yNjnD6TkQX59J4SwbKPI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1749648850; c=relaxed/simple; bh=85h3na1gsYFMlWN8UpZMM/0gWDZKX7wBW0lX7RlGk8Y=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=NO4f0i6Mt6McHKhBc8j63lTwPUm9cO7DSHFiDluV3FjFmEPD3ExR7+I0iR9IzqTikwUAVMfcJ9eIsQy/CJDez7tjEe+vC+IoRymVEzpE7EtKUU2QqdCduiPXwhohc00L39/xBkyrR4HuqF8e9y+0XV31EQTU2OQO92TNjUhpVRc= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=CYmv5jbr; arc=none smtp.client-ip=209.85.128.73 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--tabba.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="CYmv5jbr" Received: by mail-wm1-f73.google.com with SMTP id 5b1f17b1804b1-450cb8ff0c6so36646115e9.3 for ; Wed, 11 Jun 2025 06:34:06 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1749648845; x=1750253645; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; 
Date: Wed, 11 Jun 2025 14:33:28 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-17-tabba@google.com>
Subject: [PATCH v12 16/18] KVM: Introduce the KVM capability KVM_CAP_GMEM_SHARED_MEM
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com, mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com

This patch introduces the KVM capability KVM_CAP_GMEM_SHARED_MEM, which
indicates that guest_memfd supports shared memory (when enabled by the
GUEST_MEMFD_FLAG_SUPPORT_SHARED flag). This support is limited to certain
VM types, determined per architecture.
This patch also updates the KVM documentation with details on the new
capability, flag, and other information about support for shared memory
in guest_memfd.

Reviewed-by: David Hildenbrand
Reviewed-by: Gavin Shan
Signed-off-by: Fuad Tabba <tabba@google.com>
---
 Documentation/virt/kvm/api.rst | 9 +++++++++
 include/uapi/linux/kvm.h       | 1 +
 virt/kvm/kvm_main.c            | 4 ++++
 3 files changed, 14 insertions(+)

diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
index 1bd2d42e6424..4ef3d8482000 100644
--- a/Documentation/virt/kvm/api.rst
+++ b/Documentation/virt/kvm/api.rst
@@ -6407,6 +6407,15 @@ most one mapping per page, i.e. binding multiple memory regions to a single
 guest_memfd range is not allowed (any number of memory regions can be bound
 to a single guest_memfd file, but the bound ranges must not overlap).
 
+When the capability KVM_CAP_GMEM_SHARED_MEM is supported, the 'flags' field
+supports GUEST_MEMFD_FLAG_SUPPORT_SHARED. Setting this flag on guest_memfd
+creation enables mmap() and faulting of guest_memfd memory to host userspace.
+
+When the KVM MMU performs a PFN lookup to service a guest fault and the backing
+guest_memfd has the GUEST_MEMFD_FLAG_SUPPORT_SHARED set, then the fault will
+always be consumed from guest_memfd, regardless of whether it is a shared or a
+private fault.
+
 See KVM_SET_USER_MEMORY_REGION2 for additional details.
4.143 KVM_PRE_FAULT_MEMORY

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index cb19150fd595..c74cf8f73337 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -934,6 +934,7 @@ struct kvm_enable_cap {
 #define KVM_CAP_ARM_EL2 240
 #define KVM_CAP_ARM_EL2_E2H0 241
 #define KVM_CAP_RISCV_MP_STATE_RESET 242
+#define KVM_CAP_GMEM_SHARED_MEM 243
 
 struct kvm_irq_routing_irqchip {
 	__u32 irqchip;

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index d41bcc6a78b0..441c9b53b876 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -4913,6 +4913,10 @@ static int kvm_vm_ioctl_check_extension_generic(struct kvm *kvm, long arg)
 #ifdef CONFIG_KVM_GMEM
 	case KVM_CAP_GUEST_MEMFD:
 		return !kvm || kvm_arch_supports_gmem(kvm);
+#endif
+#ifdef CONFIG_KVM_GMEM_SHARED_MEM
+	case KVM_CAP_GMEM_SHARED_MEM:
+		return !kvm || kvm_arch_supports_gmem_shared_mem(kvm);
 #endif
 	default:
 		break;

From patchwork Wed Jun 11 13:33:30 2025
X-Patchwork-Submitter: Fuad Tabba
X-Patchwork-Id: 896714
Date: Wed, 11 Jun 2025 14:33:30 +0100
In-Reply-To: <20250611133330.1514028-1-tabba@google.com>
References: <20250611133330.1514028-1-tabba@google.com>
Message-ID: <20250611133330.1514028-19-tabba@google.com>
Subject: [PATCH v12 18/18] KVM: selftests: guest_memfd mmap() test when mapping is allowed
From: Fuad Tabba <tabba@google.com>
To: kvm@vger.kernel.org, linux-arm-msm@vger.kernel.org, linux-mm@kvack.org, kvmarm@lists.linux.dev
Cc: pbonzini@redhat.com, chenhuacai@kernel.org, mpe@ellerman.id.au, anup@brainfault.org, paul.walmsley@sifive.com, palmer@dabbelt.com, aou@eecs.berkeley.edu, seanjc@google.com, viro@zeniv.linux.org.uk, brauner@kernel.org, willy@infradead.org, akpm@linux-foundation.org, xiaoyao.li@intel.com, yilun.xu@intel.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, amoorthy@google.com, dmatlack@google.com, isaku.yamahata@intel.com,
mic@digikod.net, vbabka@suse.cz, vannapurve@google.com, ackerleytng@google.com, mail@maciej.szmigiero.name, david@redhat.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, suzuki.poulose@arm.com, steven.price@arm.com, quic_eberman@quicinc.com, quic_mnalajal@quicinc.com, quic_tsoni@quicinc.com, quic_svaddagi@quicinc.com, quic_cvanscha@quicinc.com, quic_pderrin@quicinc.com, quic_pheragu@quicinc.com, catalin.marinas@arm.com, james.morse@arm.com, yuzenghui@huawei.com, oliver.upton@linux.dev, maz@kernel.org, will@kernel.org, qperret@google.com, keirf@google.com, roypat@amazon.co.uk, shuah@kernel.org, hch@infradead.org, jgg@nvidia.com, rientjes@google.com, jhubbard@nvidia.com, fvdl@google.com, hughd@google.com, jthoughton@google.com, peterx@redhat.com, pankaj.gupta@amd.com, ira.weiny@intel.com, tabba@google.com

Expand the guest_memfd selftests to cover mapping guest memory for VM
types that support it.
Reviewed-by: James Houghton
Reviewed-by: Gavin Shan
Co-developed-by: Ackerley Tng
Signed-off-by: Ackerley Tng
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Shivank Garg
---
 .../testing/selftests/kvm/guest_memfd_test.c | 201 ++++++++++++++++--
 1 file changed, 180 insertions(+), 21 deletions(-)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index 341ba616cf55..5da2ed6277ac 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -13,6 +13,8 @@
 #include
 #include
+#include <setjmp.h>
+#include <signal.h>
 #include
 #include
 #include
@@ -34,12 +36,83 @@ static void test_file_read_write(int fd)
 		    "pwrite on a guest_mem fd should fail");
 }
 
-static void test_mmap(int fd, size_t page_size)
+static void test_mmap_supported(int fd, size_t page_size, size_t total_size)
+{
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
+	TEST_ASSERT(mem == MAP_FAILED, "Copy-on-write not allowed by guest_memfd.");
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() for shared guest memory should succeed.");
+
+	memset(mem, val, total_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	ret = fallocate(fd, FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE, 0,
+			page_size);
+	TEST_ASSERT(!ret, "fallocate the first page should succeed.");
+
+	for (i = 0; i < page_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), 0x00);
+	for (; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	memset(mem, val, page_size);
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	ret = munmap(mem, total_size);
+	TEST_ASSERT(!ret, "munmap() should succeed.");
+}
+
+static sigjmp_buf jmpbuf;
+void fault_sigbus_handler(int signum)
+{
+	siglongjmp(jmpbuf, 1);
+}
+
+static void test_fault_overflow(int fd, size_t page_size, size_t total_size)
+{
+	struct sigaction sa_old, sa_new = {
+		.sa_handler = fault_sigbus_handler,
+	};
+	size_t map_size = total_size * 4;
+	const char val = 0xaa;
+	char *mem;
+	size_t i;
+	int ret;
+
+	mem = mmap(NULL, map_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT(mem != MAP_FAILED, "mmap() for shared guest memory should succeed.");
+
+	sigaction(SIGBUS, &sa_new, &sa_old);
+	if (sigsetjmp(jmpbuf, 1) == 0) {
+		memset(mem, 0xaa, map_size);
+		TEST_ASSERT(false, "memset() should have triggered SIGBUS.");
+	}
+	sigaction(SIGBUS, &sa_old, NULL);
+
+	for (i = 0; i < total_size; i++)
+		TEST_ASSERT_EQ(READ_ONCE(mem[i]), val);
+
+	ret = munmap(mem, map_size);
+	TEST_ASSERT(!ret, "munmap() should succeed.");
+}
+
+static void test_mmap_not_supported(int fd, size_t page_size, size_t total_size)
 {
 	char *mem;
 
 	mem = mmap(NULL, page_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
 	TEST_ASSERT_EQ(mem, MAP_FAILED);
+
+	mem = mmap(NULL, total_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	TEST_ASSERT_EQ(mem, MAP_FAILED);
 }
 
 static void test_file_size(int fd, size_t page_size, size_t total_size)
@@ -120,26 +193,19 @@ static void test_invalid_punch_hole(int fd, size_t page_size, size_t total_size)
 	}
 }
 
-static void test_create_guest_memfd_invalid(struct kvm_vm *vm)
+static void test_create_guest_memfd_invalid_sizes(struct kvm_vm *vm,
+						  uint64_t guest_memfd_flags,
+						  size_t page_size)
 {
-	size_t page_size = getpagesize();
-	uint64_t flag;
 	size_t size;
 	int fd;
 
 	for (size = 1; size < page_size; size++) {
-		fd = __vm_create_guest_memfd(vm, size, 0);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
+		fd = __vm_create_guest_memfd(vm, size, guest_memfd_flags);
+		TEST_ASSERT(fd < 0 && errno == EINVAL,
 			    "guest_memfd() with non-page-aligned page size '0x%lx' should fail with EINVAL",
 			    size);
 	}
-
-	for (flag = BIT(0); flag; flag <<= 1) {
-		fd = __vm_create_guest_memfd(vm, page_size, flag);
-		TEST_ASSERT(fd == -1 && errno == EINVAL,
-			    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
-			    flag);
-	}
 }
 
 static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
@@ -171,30 +237,123 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
-int main(int argc, char *argv[])
+static bool check_vm_type(unsigned long vm_type)
 {
-	size_t page_size;
+	/*
+	 * Not all architectures support KVM_CAP_VM_TYPES. However, those that
+	 * support guest_memfd have that support for the default VM type.
+	 */
+	if (vm_type == VM_TYPE_DEFAULT)
+		return true;
+
+	return kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(vm_type);
+}
+
+static void test_with_type(unsigned long vm_type, uint64_t guest_memfd_flags,
+			   bool expect_mmap_allowed)
+{
+	struct kvm_vm *vm;
 	size_t total_size;
+	size_t page_size;
 	int fd;
-	struct kvm_vm *vm;
 
-	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+	if (!check_vm_type(vm_type))
+		return;
 
 	page_size = getpagesize();
 	total_size = page_size * 4;
 
-	vm = vm_create_barebones();
+	vm = vm_create_barebones_type(vm_type);
 
-	test_create_guest_memfd_invalid(vm);
 	test_create_guest_memfd_multiple(vm);
+	test_create_guest_memfd_invalid_sizes(vm, guest_memfd_flags, page_size);
 
-	fd = vm_create_guest_memfd(vm, total_size, 0);
+	fd = vm_create_guest_memfd(vm, total_size, guest_memfd_flags);
 
 	test_file_read_write(fd);
-	test_mmap(fd, page_size);
+
+	if (expect_mmap_allowed) {
+		test_mmap_supported(fd, page_size, total_size);
+		test_fault_overflow(fd, page_size, total_size);
+
+	} else {
+		test_mmap_not_supported(fd, page_size, total_size);
+	}
+
 	test_file_size(fd, page_size, total_size);
 	test_fallocate(fd, page_size, total_size);
 	test_invalid_punch_hole(fd, page_size, total_size);
 
 	close(fd);
+	kvm_vm_free(vm);
+}
+
+static void test_vm_type_gmem_flag_validity(unsigned long vm_type,
+					    uint64_t expected_valid_flags)
+{
+	size_t page_size = getpagesize();
+	struct kvm_vm *vm;
+	uint64_t flag = 0;
+	int fd;
+
+	if (!check_vm_type(vm_type))
+		return;
+
+	vm = vm_create_barebones_type(vm_type);
+
+	for (flag = BIT(0); flag; flag <<= 1) {
+		fd = __vm_create_guest_memfd(vm, page_size, flag);
+
+		if (flag & expected_valid_flags) {
+			TEST_ASSERT(fd >= 0,
+				    "guest_memfd() with flag '0x%lx' should be valid",
+				    flag);
+			close(fd);
+		} else {
+			TEST_ASSERT(fd < 0 && errno == EINVAL,
+				    "guest_memfd() with flag '0x%lx' should fail with EINVAL",
+				    flag);
+		}
+	}
+
+	kvm_vm_free(vm);
+}
+
+static void test_gmem_flag_validity(void)
+{
+	uint64_t non_coco_vm_valid_flags = 0;
+
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM))
+		non_coco_vm_valid_flags = GUEST_MEMFD_FLAG_SUPPORT_SHARED;
+
+	test_vm_type_gmem_flag_validity(VM_TYPE_DEFAULT, non_coco_vm_valid_flags);
+
+#ifdef __x86_64__
+	test_vm_type_gmem_flag_validity(KVM_X86_SW_PROTECTED_VM, non_coco_vm_valid_flags);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SEV_ES_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_SNP_VM, 0);
+	test_vm_type_gmem_flag_validity(KVM_X86_TDX_VM, 0);
+#endif
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_has_cap(KVM_CAP_GUEST_MEMFD));
+
+	test_gmem_flag_validity();
+
+	test_with_type(VM_TYPE_DEFAULT, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(VM_TYPE_DEFAULT, GUEST_MEMFD_FLAG_SUPPORT_SHARED,
			       true);
+	}
+
+#ifdef __x86_64__
+	test_with_type(KVM_X86_SW_PROTECTED_VM, 0, false);
+	if (kvm_has_cap(KVM_CAP_GMEM_SHARED_MEM)) {
+		test_with_type(KVM_X86_SW_PROTECTED_VM,
			       GUEST_MEMFD_FLAG_SUPPORT_SHARED, true);
+	}
+#endif
 }