From patchwork Fri May 16 19:19:21 2025
Subject: [RFC PATCH v2 01/13] fs: Refactor to provide function that allocates a secure anonymous inode
From: Ryan Afranji
Date: Fri, 16 May 2025 19:19:21 +0000
Message-ID: <1f42c32fc18d973b8ec97c8be8b7cd921912d42a.1747368092.git.afranji@google.com>
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: David Hildenbrand

alloc_anon_secure_inode() returns an inode after running checks in
security_inode_init_security_anon(). Also refactor secretmem's file
creation process to use the new function.

Signed-off-by: David Hildenbrand
Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
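
For context, a minimal sketch of the calling pattern the new helper enables for a filesystem that keeps its own internal mount (illustrative only and not part of the patch; example_mnt and example_fops are placeholder names, but secretmem below follows the same pattern):

static struct file *example_file_create(void)
{
	struct inode *inode;
	struct file *file;

	/* LSM checks run inside the helper; the inode stays fs-internal (S_PRIVATE). */
	inode = alloc_anon_secure_inode(example_mnt->mnt_sb, "[example]");
	if (IS_ERR(inode))
		return ERR_CAST(inode);

	file = alloc_file_pseudo(inode, example_mnt, "example", O_RDWR,
				 &example_fops);
	if (IS_ERR(file))
		iput(inode);	/* drop the inode if file creation fails */

	return file;
}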
---
 fs/anon_inodes.c   | 23 ++++++++++++++++-------
 include/linux/fs.h | 13 +++++++------
 mm/secretmem.c     |  9 +--------
 3 files changed, 24 insertions(+), 21 deletions(-)

diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c
index 583ac81669c2..0ce28959c43a 100644
--- a/fs/anon_inodes.c
+++ b/fs/anon_inodes.c
@@ -55,17 +55,20 @@ static struct file_system_type anon_inode_fs_type = {
 	.kill_sb = kill_anon_super,
 };
 
-static struct inode *anon_inode_make_secure_inode(
-	const char *name,
-	const struct inode *context_inode)
+static struct inode *anon_inode_make_secure_inode(struct super_block *s,
+		const char *name, const struct inode *context_inode,
+		bool fs_internal)
 {
 	struct inode *inode;
 	int error;
 
-	inode = alloc_anon_inode(anon_inode_mnt->mnt_sb);
+	inode = alloc_anon_inode(s);
 	if (IS_ERR(inode))
 		return inode;
-	inode->i_flags &= ~S_PRIVATE;
+
+	if (!fs_internal)
+		inode->i_flags &= ~S_PRIVATE;
+
 	error = security_inode_init_security_anon(inode, &QSTR(name),
 						  context_inode);
 	if (error) {
@@ -75,6 +78,12 @@ static struct inode *anon_inode_make_secure_inode(
 	return inode;
 }
 
+struct inode *alloc_anon_secure_inode(struct super_block *s, const char *name)
+{
+	return anon_inode_make_secure_inode(s, name, NULL, true);
+}
+EXPORT_SYMBOL_GPL(alloc_anon_secure_inode);
+
 static struct file *__anon_inode_getfile(const char *name,
 					 const struct file_operations *fops,
 					 void *priv, int flags,
@@ -88,7 +97,8 @@ static struct file *__anon_inode_getfile(const char *name,
 		return ERR_PTR(-ENOENT);
 
 	if (make_inode) {
-		inode = anon_inode_make_secure_inode(name, context_inode);
+		inode = anon_inode_make_secure_inode(anon_inode_mnt->mnt_sb,
+						     name, context_inode, false);
 		if (IS_ERR(inode)) {
 			file = ERR_CAST(inode);
 			goto err;
@@ -318,4 +328,3 @@ static int __init anon_inode_init(void)
 }
 
 fs_initcall(anon_inode_init);
-
diff --git a/include/linux/fs.h b/include/linux/fs.h
index 016b0fe1536e..8eeef9a7fe07 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -309,7 +309,7 @@ struct iattr {
  */
 #define FILESYSTEM_MAX_STACK_DEPTH 2
 
-/**
+/**
  * enum positive_aop_returns - aop return codes with specific semantics
  *
  * @AOP_WRITEPAGE_ACTIVATE: Informs the caller that page writeback has
@@ -319,7 +319,7 @@ struct iattr {
  *			be a candidate for writeback again in the near
  *			future. Other callers must be careful to unlock
  *			the page if they get this return. Returned by
- *			writepage();
+ *			writepage();
  *
  * @AOP_TRUNCATED_PAGE: The AOP method that was handed a locked page has
  *			unlocked it and the page might have been truncated.
@@ -1141,8 +1141,8 @@ struct file *get_file_active(struct file **f);
 
 #define MAX_NON_LFS	((1UL<<31) - 1)
 
-/* Page cache limit. The filesystems should put that into their s_maxbytes
-   limits, otherwise bad things can happen in VM. */
+/* Page cache limit. The filesystems should put that into their s_maxbytes
+   limits, otherwise bad things can happen in VM. */
 #if BITS_PER_LONG==32
 #define MAX_LFS_FILESIZE	((loff_t)ULONG_MAX << PAGE_SHIFT)
 #elif BITS_PER_LONG==64
@@ -2607,7 +2607,7 @@ int sync_inode_metadata(struct inode *inode, int wait);
 struct file_system_type {
 	const char *name;
 	int fs_flags;
-#define FS_REQUIRES_DEV		1
+#define FS_REQUIRES_DEV		1
 #define FS_BINARY_MOUNTDATA	2
 #define FS_HAS_SUBTYPE		4
 #define FS_USERNS_MOUNT		8	/* Can be mounted by userns root */
@@ -3195,7 +3195,7 @@ ssize_t __kernel_read(struct file *file, void *buf, size_t count, loff_t *pos);
 extern ssize_t kernel_write(struct file *, const void *, size_t, loff_t *);
 extern ssize_t __kernel_write(struct file *, const void *, size_t, loff_t *);
 extern struct file * open_exec(const char *);
-
+
 /* fs/dcache.c -- generic fs support functions */
 extern bool is_subdir(struct dentry *, struct dentry *);
 extern bool path_is_under(const struct path *, const struct path *);
@@ -3550,6 +3550,7 @@ extern int simple_write_begin(struct file *file, struct address_space *mapping,
 extern const struct address_space_operations ram_aops;
 extern int always_delete_dentry(const struct dentry *);
 extern struct inode *alloc_anon_inode(struct super_block *);
+extern struct inode *alloc_anon_secure_inode(struct super_block *, const char *);
 extern int simple_nosetlease(struct file *, int, struct file_lease **, void **);
 extern const struct dentry_operations simple_dentry_operations;
diff --git a/mm/secretmem.c b/mm/secretmem.c
index 1b0a214ee558..c0e459e58cb6 100644
--- a/mm/secretmem.c
+++ b/mm/secretmem.c
@@ -195,18 +195,11 @@ static struct file *secretmem_file_create(unsigned long flags)
 	struct file *file;
 	struct inode *inode;
 	const char *anon_name = "[secretmem]";
-	int err;
 
-	inode = alloc_anon_inode(secretmem_mnt->mnt_sb);
+	inode = alloc_anon_secure_inode(secretmem_mnt->mnt_sb, anon_name);
 	if (IS_ERR(inode))
 		return ERR_CAST(inode);
 
-	err = security_inode_init_security_anon(inode, &QSTR(anon_name), NULL);
-	if (err) {
-		file = ERR_PTR(err);
-		goto err_free_inode;
-	}
-
 	file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem", O_RDWR,
 				 &secretmem_fops);
 	if (IS_ERR(file))

From patchwork Fri May 16 19:19:22 2025
Subject: [RFC PATCH v2 02/13] KVM: guest_memfd: Make guest mem use guest mem inodes instead of anonymous inodes
From: Ryan Afranji
Date: Fri, 16 May 2025 19:19:22 +0000
Message-ID: <754b4898c3362050071f6dd09deb24f3c92a41c3.1747368092.git.afranji@google.com>

From: Ackerley Tng

Using guest mem inodes allows us to store metadata for the backing
memory on the inode. Metadata will be added in a later patch to support
HugeTLB pages.

Metadata about backing memory should not be stored on the file, since
the file represents a guest_memfd's binding with a struct kvm, and
metadata about backing memory is not unique to a specific binding and
struct kvm.

Signed-off-by: Fuad Tabba
Signed-off-by: Ackerley Tng
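
In short, properties of the backing memory live on the inode, while the binding to one struct kvm lives on the file. A minimal sketch of the resulting accessor pattern (illustrative only; these helpers are hypothetical and not added by this patch):

/* Backing-memory properties: stored on the inode, so every file that
 * shares the inode sees the same flags. */
static u64 example_gmem_flags(struct file *file)
{
	return (u64)(unsigned long)file_inode(file)->i_private;
}

/* Per-binding state: stored on the file via struct kvm_gmem. */
static struct kvm *example_gmem_kvm(struct file *file)
{
	struct kvm_gmem *gmem = file->private_data;

	return gmem->kvm;
}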
---
 include/uapi/linux/magic.h |   1 +
 virt/kvm/guest_memfd.c     | 132 +++++++++++++++++++++++++++++++------
 virt/kvm/kvm_main.c        |   7 +-
 virt/kvm/kvm_mm.h          |   9 ++-
 4 files changed, 124 insertions(+), 25 deletions(-)

diff --git a/include/uapi/linux/magic.h b/include/uapi/linux/magic.h
index bb575f3ab45e..169dba2a6920 100644
--- a/include/uapi/linux/magic.h
+++ b/include/uapi/linux/magic.h
@@ -103,5 +103,6 @@
 #define DEVMEM_MAGIC		0x454d444d	/* "DMEM" */
 #define SECRETMEM_MAGIC		0x5345434d	/* "SECM" */
 #define PID_FS_MAGIC		0x50494446	/* "PIDF" */
+#define GUEST_MEMORY_MAGIC	0x474d454d	/* "GMEM" */
 
 #endif /* __LINUX_MAGIC_H__ */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index b2aa6bf24d3a..2ee26695dc31 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -1,12 +1,16 @@
 // SPDX-License-Identifier: GPL-2.0
+#include
 #include
 #include
 #include
+#include
 #include
 #include
 
 #include "kvm_mm.h"
 
+static struct vfsmount *kvm_gmem_mnt;
+
 struct kvm_gmem {
 	struct kvm *kvm;
 	struct xarray bindings;
@@ -318,9 +322,51 @@ static struct file_operations kvm_gmem_fops = {
 	.fallocate	= kvm_gmem_fallocate,
 };
 
-void kvm_gmem_init(struct module *module)
+static const struct super_operations kvm_gmem_super_operations = {
+	.statfs		= simple_statfs,
+};
+
+static int kvm_gmem_init_fs_context(struct fs_context *fc)
+{
+	struct pseudo_fs_context *ctx;
+
+	if (!init_pseudo(fc, GUEST_MEMORY_MAGIC))
+		return -ENOMEM;
+
+	ctx = fc->fs_private;
+	ctx->ops = &kvm_gmem_super_operations;
+
+	return 0;
+}
+
+static struct file_system_type kvm_gmem_fs = {
+	.name		= "kvm_guest_memory",
+	.init_fs_context = kvm_gmem_init_fs_context,
+	.kill_sb	= kill_anon_super,
+};
+
+static int kvm_gmem_init_mount(void)
+{
+	kvm_gmem_mnt = kern_mount(&kvm_gmem_fs);
+
+	if (WARN_ON_ONCE(IS_ERR(kvm_gmem_mnt)))
+		return PTR_ERR(kvm_gmem_mnt);
+
+	kvm_gmem_mnt->mnt_flags |= MNT_NOEXEC;
+	return 0;
+}
+
+int kvm_gmem_init(struct module *module)
 {
 	kvm_gmem_fops.owner = module;
+
+	return kvm_gmem_init_mount();
+}
+
+void kvm_gmem_exit(void)
+{
+	kern_unmount(kvm_gmem_mnt);
+	kvm_gmem_mnt = NULL;
 }
 
 static int kvm_gmem_migrate_folio(struct address_space *mapping,
@@ -402,11 +448,71 @@ static const struct inode_operations kvm_gmem_iops = {
 	.setattr	= kvm_gmem_setattr,
 };
 
+static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
+						      loff_t size, u64 flags)
+{
+	struct inode *inode;
+
+	inode = alloc_anon_secure_inode(kvm_gmem_mnt->mnt_sb, name);
+	if (IS_ERR(inode))
+		return inode;
+
+	inode->i_private = (void *)(unsigned long)flags;
+	inode->i_op = &kvm_gmem_iops;
+	inode->i_mapping->a_ops = &kvm_gmem_aops;
+	inode->i_mode |= S_IFREG;
+	inode->i_size = size;
+	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
+	mapping_set_inaccessible(inode->i_mapping);
+	/* Unmovable mappings are supposed to be marked unevictable as well. */
+	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
+
+	return inode;
+}
+
+static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
+						  u64 flags)
+{
+	static const char *name = "[kvm-gmem]";
+	struct inode *inode;
+	struct file *file;
+	int err;
+
+	err = -ENOENT;
+	if (!try_module_get(kvm_gmem_fops.owner))
+		goto err;
+
+	inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
+	if (IS_ERR(inode)) {
+		err = PTR_ERR(inode);
+		goto err_put_module;
+	}
+
+	file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
+				 &kvm_gmem_fops);
+	if (IS_ERR(file)) {
+		err = PTR_ERR(file);
+		goto err_put_inode;
+	}
+
+	file->f_flags |= O_LARGEFILE;
+	file->private_data = priv;
+
+out:
+	return file;
+
+err_put_inode:
+	iput(inode);
+err_put_module:
+	module_put(kvm_gmem_fops.owner);
+err:
+	file = ERR_PTR(err);
+	goto out;
+}
+
 static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 {
-	const char *anon_name = "[kvm-gmem]";
 	struct kvm_gmem *gmem;
-	struct inode *inode;
 	struct file *file;
 	int fd, err;
 
@@ -420,32 +526,16 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 		goto err_fd;
 	}
 
-	file = anon_inode_create_getfile(anon_name, &kvm_gmem_fops, gmem,
-					 O_RDWR, NULL);
+	file = kvm_gmem_inode_create_getfile(gmem, size, flags);
 	if (IS_ERR(file)) {
 		err = PTR_ERR(file);
 		goto err_gmem;
 	}
 
-	file->f_flags |= O_LARGEFILE;
-
-	inode = file->f_inode;
-	WARN_ON(file->f_mapping != inode->i_mapping);
-
-	inode->i_private = (void *)(unsigned long)flags;
-	inode->i_op = &kvm_gmem_iops;
-	inode->i_mapping->a_ops = &kvm_gmem_aops;
-	inode->i_mode |= S_IFREG;
-	inode->i_size = size;
-	mapping_set_gfp_mask(inode->i_mapping, GFP_HIGHUSER);
-	mapping_set_inaccessible(inode->i_mapping);
-	/* Unmovable mappings are supposed to be marked unevictable as well. */
-	WARN_ON_ONCE(!mapping_unevictable(inode->i_mapping));
-
 	kvm_get_kvm(kvm);
 	gmem->kvm = kvm;
 	xa_init(&gmem->bindings);
-	list_add(&gmem->entry, &inode->i_mapping->i_private_list);
+	list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
 
 	fd_install(fd, file);
 	return fd;
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 69782df3617f..1e3fd81868bc 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -6412,7 +6412,9 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
 	if (WARN_ON_ONCE(r))
 		goto err_vfio;
 
-	kvm_gmem_init(module);
+	r = kvm_gmem_init(module);
+	if (r)
+		goto err_gmem;
 
 	r = kvm_init_virtualization();
 	if (r)
@@ -6433,6 +6435,8 @@ int kvm_init(unsigned vcpu_size, unsigned vcpu_align, struct module *module)
 err_register:
 	kvm_uninit_virtualization();
 err_virt:
+	kvm_gmem_exit();
+err_gmem:
 	kvm_vfio_ops_exit();
 err_vfio:
 	kvm_async_pf_deinit();
@@ -6464,6 +6468,7 @@ void kvm_exit(void)
 	for_each_possible_cpu(cpu)
 		free_cpumask_var(per_cpu(cpu_kick_mask, cpu));
 	kmem_cache_destroy(kvm_vcpu_cache);
+	kvm_gmem_exit();
 	kvm_vfio_ops_exit();
 	kvm_async_pf_deinit();
 	kvm_irqfd_exit();
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index acef3f5c582a..dcacb76b8f00 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -68,17 +68,20 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 #endif /* HAVE_KVM_PFNCACHE */
 
 #ifdef CONFIG_KVM_PRIVATE_MEM
-void kvm_gmem_init(struct module *module);
+int kvm_gmem_init(struct module *module);
+void kvm_gmem_exit(void);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset);
 void kvm_gmem_unbind(struct kvm_memory_slot *slot);
 #else
-static inline void kvm_gmem_init(struct module *module)
+static inline int kvm_gmem_init(struct module *module)
 {
-
+	return 0;
 }
 
+static inline void kvm_gmem_exit(void) {};
+
 static inline int kvm_gmem_bind(struct kvm *kvm,
 					 struct kvm_memory_slot *slot,
 					 unsigned int fd, loff_t offset)

From patchwork Fri May 16 19:19:23 2025
Subject: [RFC PATCH v2 03/13] KVM: guest_mem: Refactor out kvm_gmem_alloc_view()
From: Ryan Afranji
Date: Fri, 16 May 2025 19:19:23 +0000
Message-ID: <1322a07a0eaad85938446a561d868cfcad7b4ecb.1747368092.git.afranji@google.com>

From: Ackerley Tng

kvm_gmem_alloc_view() will allocate and build a file out of an inode.
Will be reused later by kvm_gmem_link().

Signed-off-by: Ackerley Tng
Co-developed-by: Ryan Afranji
Signed-off-by: Ryan Afranji
---
 virt/kvm/guest_memfd.c | 61 +++++++++++++++++++-----------------------
 1 file changed, 27 insertions(+), 34 deletions(-)

diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index 2ee26695dc31..a3918d1695b9 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -470,49 +470,47 @@ static struct inode *kvm_gmem_inode_make_secure_inode(const char *name,
 	return inode;
 }
 
-static struct file *kvm_gmem_inode_create_getfile(void *priv, loff_t size,
-						  u64 flags)
+static struct file *kvm_gmem_alloc_view(struct kvm *kvm, struct inode *inode,
+					const char *name)
 {
-	static const char *name = "[kvm-gmem]";
-	struct inode *inode;
+	struct kvm_gmem *gmem;
 	struct file *file;
-	int err;
 
-	err = -ENOENT;
 	if (!try_module_get(kvm_gmem_fops.owner))
-		goto err;
+		return ERR_PTR(-ENOENT);
 
-	inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
-	if (IS_ERR(inode)) {
-		err = PTR_ERR(inode);
+	gmem = kzalloc(sizeof(*gmem), GFP_KERNEL);
+	if (!gmem) {
+		file = ERR_PTR(-ENOMEM);
 		goto err_put_module;
 	}
 
 	file = alloc_file_pseudo(inode, kvm_gmem_mnt, name, O_RDWR,
 				 &kvm_gmem_fops);
-	if (IS_ERR(file)) {
-		err = PTR_ERR(file);
-		goto err_put_inode;
-	}
+	if (IS_ERR(file))
+		goto err_gmem;
 
 	file->f_flags |= O_LARGEFILE;
-	file->private_data = priv;
+	file->private_data = gmem;
+
+	kvm_get_kvm(kvm);
+	gmem->kvm = kvm;
+	xa_init(&gmem->bindings);
+	list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
 
-out:
 	return file;
 
-err_put_inode:
-	iput(inode);
+err_gmem:
+	kfree(gmem);
 err_put_module:
 	module_put(kvm_gmem_fops.owner);
-err:
-	file = ERR_PTR(err);
-	goto out;
+	return file;
 }
 
 static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 {
-	struct kvm_gmem *gmem;
+	static const char *name = "[kvm-gmem]";
+	struct inode *inode;
 	struct file *file;
 	int fd, err;
 
@@ -520,28 +518,23 @@ static int __kvm_gmem_create(struct kvm *kvm, loff_t size, u64 flags)
 	if (fd < 0)
 		return fd;
 
-	gmem = kzalloc(sizeof(*gmem), GFP_KERNEL);
-	if (!gmem) {
-		err = -ENOMEM;
+	inode = kvm_gmem_inode_make_secure_inode(name, size, flags);
+	if (IS_ERR(inode)) {
+		err = PTR_ERR(inode);
 		goto err_fd;
 	}
 
-	file = kvm_gmem_inode_create_getfile(gmem, size, flags);
+	file = kvm_gmem_alloc_view(kvm, inode, name);
 	if (IS_ERR(file)) {
 		err = PTR_ERR(file);
-		goto err_gmem;
+		goto err_put_inode;
 	}
 
-	kvm_get_kvm(kvm);
-	gmem->kvm = kvm;
-	xa_init(&gmem->bindings);
-	list_add(&gmem->entry, &file_inode(file)->i_mapping->i_private_list);
-
 	fd_install(fd, file);
 	return fd;
 
-err_gmem:
-	kfree(gmem);
+err_put_inode:
+	iput(inode);
 err_fd:
 	put_unused_fd(fd);
 	return err;

From patchwork Fri May 16 19:19:24 2025
Subject: [RFC PATCH v2 04/13] KVM: guest_mem: Add ioctl KVM_LINK_GUEST_MEMFD
From: Ryan Afranji
Date: Fri, 16 May 2025 19:19:24 +0000

From: Ackerley Tng

KVM_LINK_GUEST_MEMFD will link a gmem fd's underlying inode to a new
file (and fd).

Signed-off-by: Ackerley Tng
Co-developed-by: Ryan Afranji
Signed-off-by: Ryan Afranji
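
From userspace, the new ioctl is issued on the destination VM's fd and returns a new guest_memfd backed by the source fd's inode. A minimal usage sketch (illustrative only; it assumes uapi headers from this series and mirrors the selftest helper added later in the series):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Returns the new gmem fd on success, or -1 with errno set
 * (EINVAL for unknown flags, non-gmem fds, or linking back into the
 * same VM the source fd already belongs to). */
static int link_guest_memfd(int dst_vm_fd, int src_gmem_fd)
{
	struct kvm_link_guest_memfd params = {
		.fd = src_gmem_fd,
		.flags = 0,	/* no flags are defined yet */
	};

	return ioctl(dst_vm_fd, KVM_LINK_GUEST_MEMFD, &params);
}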
---
 include/uapi/linux/kvm.h |  8 ++++++
 virt/kvm/guest_memfd.c   | 57 ++++++++++++++++++++++++++++++++++++++++
 virt/kvm/kvm_main.c      | 10 +++++++
 virt/kvm/kvm_mm.h        |  7 +++++
 4 files changed, 82 insertions(+)

diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index c6988e2c68d5..8f17f0b462aa 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -1583,4 +1583,12 @@ struct kvm_pre_fault_memory {
 	__u64 padding[5];
 };
 
+#define KVM_LINK_GUEST_MEMFD	_IOWR(KVMIO, 0xd6, struct kvm_link_guest_memfd)
+
+struct kvm_link_guest_memfd {
+	__u64 fd;
+	__u64 flags;
+	__u64 reserved[6];
+};
+
 #endif /* __LINUX_KVM_H */
diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
index a3918d1695b9..d76bd1119198 100644
--- a/virt/kvm/guest_memfd.c
+++ b/virt/kvm/guest_memfd.c
@@ -555,6 +555,63 @@ int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args)
 	return __kvm_gmem_create(kvm, size, flags);
 }
 
+int kvm_gmem_link(struct kvm *kvm, struct kvm_link_guest_memfd *args)
+{
+	static const char *name = "[kvm-gmem]";
+	u64 flags = args->flags;
+	u64 valid_flags = 0;
+	struct file *dst_file, *src_file;
+	struct kvm_gmem *gmem;
+	struct timespec64 ts;
+	struct inode *inode;
+	struct fd f;
+	int ret, fd;
+
+	if (flags & ~valid_flags)
+		return -EINVAL;
+
+	f = fdget(args->fd);
+	src_file = fd_file(f);
+	if (!src_file)
+		return -EINVAL;
+
+	ret = -EINVAL;
+	if (src_file->f_op != &kvm_gmem_fops)
+		goto out;
+
+	/* Cannot link a gmem file with the same vm again */
+	gmem = src_file->private_data;
+	if (gmem->kvm == kvm)
+		goto out;
+
+	ret = fd = get_unused_fd_flags(0);
+	if (ret < 0)
+		goto out;
+
+	inode = file_inode(src_file);
+	dst_file = kvm_gmem_alloc_view(kvm, inode, name);
+	if (IS_ERR(dst_file)) {
+		ret = PTR_ERR(dst_file);
+		goto out_fd;
+	}
+
+	ts = inode_set_ctime_current(inode);
+	inode_set_atime_to_ts(inode, ts);
+
+	inc_nlink(inode);
+	ihold(inode);
+
+	fd_install(fd, dst_file);
+	fdput(f);
+	return fd;
+
+out_fd:
+	put_unused_fd(fd);
+out:
+	fdput(f);
+	return ret;
+}
+
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset)
 {
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 1e3fd81868bc..a9b01841a243 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -5285,6 +5285,16 @@ static long kvm_vm_ioctl(struct file *filp,
 		r = kvm_gmem_create(kvm, &guest_memfd);
 		break;
 	}
+	case KVM_LINK_GUEST_MEMFD: {
+		struct kvm_link_guest_memfd params;
+
+		r = -EFAULT;
+		if (copy_from_user(&params, argp, sizeof(params)))
+			goto out;
+
+		r = kvm_gmem_link(kvm, &params);
+		break;
+	}
 #endif
 	default:
 		r = kvm_arch_vm_ioctl(filp, ioctl, arg);
diff --git a/virt/kvm/kvm_mm.h b/virt/kvm/kvm_mm.h
index dcacb76b8f00..85baf8a7e0de 100644
--- a/virt/kvm/kvm_mm.h
+++ b/virt/kvm/kvm_mm.h
@@ -71,6 +71,7 @@ static inline void gfn_to_pfn_cache_invalidate_start(struct kvm *kvm,
 int kvm_gmem_init(struct module *module);
 void kvm_gmem_exit(void);
 int kvm_gmem_create(struct kvm *kvm, struct kvm_create_guest_memfd *args);
+int kvm_gmem_link(struct kvm *kvm, struct kvm_link_guest_memfd *args);
 int kvm_gmem_bind(struct kvm *kvm, struct kvm_memory_slot *slot,
 		  unsigned int fd, loff_t offset);
 void kvm_gmem_unbind(struct kvm_memory_slot *slot);
@@ -82,6 +83,12 @@ static inline int kvm_gmem_init(struct module *module)
 
 static inline void kvm_gmem_exit(void) {};
 
+static inline int kvm_gmem_link(struct kvm *kvm,
+				struct kvm_link_guest_memfd *args)
+{
+	return -EOPNOTSUPP;
+}
+
 static inline int kvm_gmem_bind(struct kvm *kvm,
 					 struct kvm_memory_slot *slot,
 					 unsigned int fd, loff_t offset)

From patchwork Fri May 16 19:19:25 2025
Subject: [RFC PATCH v2 05/13] KVM: selftests: Add tests for KVM_LINK_GUEST_MEMFD ioctl
From: Ryan Afranji
Date: Fri, 16 May 2025 19:19:25 +0000
Message-ID: <1ade7750adbfe39ca5b8e074ad5edb37a7bc7e54.1747368092.git.afranji@google.com>

From: Ackerley Tng

Test that

+ Invalid inputs should be rejected with EINVAL
+ Successful inputs return a new (destination) fd
+ Destination and source fds have the same inode number
+ No crash on program exit

Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 .../testing/selftests/kvm/guest_memfd_test.c  | 43 +++++++++++++++++++
 .../testing/selftests/kvm/include/kvm_util.h  | 18 ++++++++
 2 files changed, 61 insertions(+)

diff --git a/tools/testing/selftests/kvm/guest_memfd_test.c b/tools/testing/selftests/kvm/guest_memfd_test.c
index ce687f8d248f..9b2a58cd9b64 100644
--- a/tools/testing/selftests/kvm/guest_memfd_test.c
+++ b/tools/testing/selftests/kvm/guest_memfd_test.c
@@ -170,6 +170,48 @@ static void test_create_guest_memfd_multiple(struct kvm_vm *vm)
 	close(fd1);
 }
 
+static void test_link(struct kvm_vm *src_vm, int src_fd, size_t total_size)
+{
+	int ret;
+	int dst_fd;
+	struct kvm_vm *dst_vm;
+	struct stat src_stat;
+	struct stat dst_stat;
+
+	dst_vm = vm_create_barebones();
+
+	/* Linking with a nonexistent fd */
+	dst_fd = __vm_link_guest_memfd(dst_vm, 99, 0);
+	TEST_ASSERT_EQ(dst_fd, -1);
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	/* Linking with a non-gmem fd */
+	dst_fd = __vm_link_guest_memfd(dst_vm, 0, 1);
+	TEST_ASSERT_EQ(dst_fd, -1);
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	/* Linking with invalid flags */
+	dst_fd = __vm_link_guest_memfd(dst_vm, src_fd, 1);
+	TEST_ASSERT_EQ(dst_fd, -1);
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	/* Linking with an already-associated vm */
+	dst_fd = __vm_link_guest_memfd(src_vm, src_fd, 1);
+	TEST_ASSERT_EQ(dst_fd, -1);
+	TEST_ASSERT_EQ(errno, EINVAL);
+
+	dst_fd = __vm_link_guest_memfd(dst_vm, src_fd, 0);
+	TEST_ASSERT(dst_vm > 0, "linking should succeed with valid inputs");
+	TEST_ASSERT(src_fd != dst_fd, "linking should return a different fd");
+
+	ret = fstat(src_fd, &src_stat);
+	TEST_ASSERT_EQ(ret, 0);
+	ret = fstat(dst_fd, &dst_stat);
+	TEST_ASSERT_EQ(ret, 0);
+	TEST_ASSERT(src_stat.st_ino == dst_stat.st_ino,
+		    "src and dst files should have the same inode number");
+}
+
 int main(int argc, char *argv[])
 {
 	size_t page_size;
@@ -194,6 +236,7 @@ int main(int argc, char *argv[])
 		test_file_size(fd, page_size, total_size);
 		test_fallocate(fd, page_size, total_size);
 		test_invalid_punch_hole(fd, page_size, total_size);
+		test_link(vm, fd, total_size);
 
 		close(fd);
 	}
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index 373912464fb4..68faa658b69e 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -571,6 +571,24 @@ static inline int vm_create_guest_memfd(struct kvm_vm *vm, uint64_t size,
 	return fd;
 }
 
+static inline int __vm_link_guest_memfd(struct kvm_vm *vm, int fd, uint64_t flags)
+{
+	struct kvm_link_guest_memfd params = {
+		.fd = fd,
+		.flags = flags,
+	};
+
+	return __vm_ioctl(vm, KVM_LINK_GUEST_MEMFD, &params);
+}
+
+static inline int vm_link_guest_memfd(struct kvm_vm *vm, int fd, uint64_t flags)
+{
+	int new_fd = __vm_link_guest_memfd(vm, fd, flags);
+
+	TEST_ASSERT(new_fd >= 0, KVM_IOCTL_ERROR(KVM_LINK_GUEST_MEMFD, new_fd));
+	return new_fd;
+}
+
 void vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,
 			       uint64_t gpa, uint64_t size, void *hva);
 int __vm_set_user_memory_region(struct kvm_vm *vm, uint32_t slot, uint32_t flags,

From patchwork Fri May 16 19:19:26 2025
Subject: [RFC PATCH v2 06/13] KVM: selftests: Test transferring private memory to another VM
From: Ryan Afranji
Date: Fri, 16 May 2025 19:19:26 +0000

From: Ackerley Tng

Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 .../kvm/x86/private_mem_migrate_tests.c | 87 +++++++++++++++++++
 1 file changed, 87 insertions(+)
 create mode 100644 tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c

diff --git a/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c b/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c
new file mode 100644
index 000000000000..4226de3ebd41
--- /dev/null
+++ b/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c
@@ -0,0 +1,87 @@
+// SPDX-License-Identifier: GPL-2.0
+#include "kvm_util_base.h"
+#include "test_util.h"
+#include "ucall_common.h"
+#include
+#include
+
+#define TRANSFER_PRIVATE_MEM_TEST_SLOT	10
+#define TRANSFER_PRIVATE_MEM_GPA	((uint64_t)(1ull << 32))
+#define TRANSFER_PRIVATE_MEM_GVA	TRANSFER_PRIVATE_MEM_GPA
+#define TRANSFER_PRIVATE_MEM_VALUE	0xdeadbeef
+
+static void transfer_private_mem_guest_code_src(void)
+{
+	uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+
+	*ptr = TRANSFER_PRIVATE_MEM_VALUE;
+
+	GUEST_SYNC1(*ptr);
+}
+
+static void transfer_private_mem_guest_code_dst(void)
+{
+	uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+
+	GUEST_SYNC1(*ptr);
+}
+
+static void test_transfer_private_mem(void)
+{
+	struct kvm_vm *src_vm, *dst_vm;
+	struct kvm_vcpu *src_vcpu, *dst_vcpu;
+	int src_memfd, dst_memfd;
+	struct ucall uc;
+
+	const struct vm_shape shape = {
+		.mode = VM_MODE_DEFAULT,
+		.type = KVM_X86_SW_PROTECTED_VM,
+	};
+
+	/* Build the source VM, use it to write to private memory */
+	src_vm = __vm_create_shape_with_one_vcpu(
+		shape, &src_vcpu, 0, transfer_private_mem_guest_code_src);
+	src_memfd = vm_create_guest_memfd(src_vm, SZ_4K, 0);
+
+	vm_mem_add(src_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
+		   TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+		   src_memfd, 0);
+
+	virt_map(src_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
+	vm_set_memory_attributes(src_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
+				 KVM_MEMORY_ATTRIBUTE_PRIVATE);
+
+	vcpu_run(src_vcpu);
+	TEST_ASSERT_KVM_EXIT_REASON(src_vcpu, KVM_EXIT_IO);
+	get_ucall(src_vcpu, &uc);
+	TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+		    "Source VM should be able to write to private memory");
+
+	/* Build the destination VM with linked fd */
+	dst_vm = __vm_create_shape_with_one_vcpu(
+		shape, &dst_vcpu, 0, transfer_private_mem_guest_code_dst);
+	dst_memfd = vm_link_guest_memfd(dst_vm, src_memfd, 0);
+
+	vm_mem_add(dst_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
+		   TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+		   dst_memfd, 0);
+
+	virt_map(dst_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
+	vm_set_memory_attributes(dst_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
+				 KVM_MEMORY_ATTRIBUTE_PRIVATE);
+
+	vcpu_run(dst_vcpu);
+	TEST_ASSERT_KVM_EXIT_REASON(dst_vcpu, KVM_EXIT_IO);
+	get_ucall(dst_vcpu, &uc);
+	TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+		    "Destination VM should be able to read value transferred");
+}
+
+int main(int argc, char *argv[])
+{
+	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) & BIT(KVM_X86_SW_PROTECTED_VM));
+
+	test_transfer_private_mem();
+
+	return 0;
+}

From patchwork Fri May 16 19:19:27 2025
bh=7Ey+qHNz3Id949hQmd8IqzRnzJEXEhx4K2zZ7tQ+zaA=; b=iUW6Zbzu8XHm32B7MuYfAFxETEYwdMZXUj6Fu0l1D8Q9pUbifQsjzS7NumGjrSewtu z0Fbu4ebAVDlAzH0dtPEbck1WhNCF6WFd/uu1RddO6O7H0GE6AYAWcAOCDcyPUNb77qG aNvkWKaNTBp8PDOfF73y9ZRrNqrQLGfyZzmiF63aO5F8j76AOomN168ibYL49Lr6I8kN T+mERtzLqe3vqVgyRoyKrY96kkVCtvlQRQutt10KQosfYDHp0YutwOxMgFx5zkiRzjtI lRQxWQUSjgoeLKTYUHasZs47qHL2ImRHshjBEncJbtJdc1VBmD7g7NRCDba4jM5gUxiM HQKA== X-Forwarded-Encrypted: i=1; AJvYcCVGy4gPqi8RMEKJFXhstQyl+Yi42ED0rds08Asg7X1SM/OX9C7I6VB4sCLMzapDln9jfYz0TzO+bS9tZJ8yXZI=@vger.kernel.org X-Gm-Message-State: AOJu0Yy66LldBhYW90mnm64pAR5+jU2EaAD7+ydrx7Ew/2Zp+lEgcns9 ZFxXsbkeV53JnssymNCGJYHGc7sWF2digHRXmn1f9OqjrFA+XqbD2Y8SIBaP4WhKkp76THbdwWX tmIZqd2Z78w== X-Google-Smtp-Source: AGHT+IF3tOGDs3uMTUXFN04Lh1XuUUSb8FRl25FYpCEolQddp0RRUtONR8vJwzDBqRDLQzwKKqJKODObKV73 X-Received: from pjz15.prod.google.com ([2002:a17:90b:56cf:b0:2ef:786a:1835]) (user=afranji job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90a:d407:b0:30e:823f:ef25 with SMTP id 98e67ed59e1d1-30e823fefedmr5901603a91.28.1747423198880; Fri, 16 May 2025 12:19:58 -0700 (PDT) Date: Fri, 16 May 2025 19:19:27 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.49.0.1101.gccaa498523-goog Message-ID: <6ef77cc986aa89c1799cf3709de30edd6e4e70ee.1747368093.git.afranji@google.com> Subject: [RFC PATCH v2 07/13] KVM: x86: Refactor sev's flag migration_in_progress to kvm struct From: Ryan Afranji To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com From: Ackerley Tng The migration_in_progress flag will also be needed for migration of non-sev VMs. Co-developed-by: Sagi Shahar Signed-off-by: Sagi Shahar Co-developed-by: Vishal Annapurve Signed-off-by: Vishal Annapurve Signed-off-by: Ackerley Tng Signed-off-by: Ryan Afranji --- arch/x86/kvm/svm/sev.c | 17 ++++++----------- arch/x86/kvm/svm/svm.h | 1 - include/linux/kvm_host.h | 1 + 3 files changed, 7 insertions(+), 12 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 0bc708ee2788..89c06cfcc200 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1838,8 +1838,6 @@ static bool is_cmd_allowed_from_mirror(u32 cmd_id) static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) { - struct kvm_sev_info *dst_sev = to_kvm_sev_info(dst_kvm); - struct kvm_sev_info *src_sev = to_kvm_sev_info(src_kvm); int r = -EBUSY; if (dst_kvm == src_kvm) @@ -1849,10 +1847,10 @@ static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) * Bail if these VMs are already involved in a migration to avoid * deadlock between two VMs trying to migrate to/from each other. 
*/ - if (atomic_cmpxchg_acquire(&dst_sev->migration_in_progress, 0, 1)) + if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1)) return -EBUSY; - if (atomic_cmpxchg_acquire(&src_sev->migration_in_progress, 0, 1)) + if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1)) goto release_dst; r = -EINTR; @@ -1865,21 +1863,18 @@ static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) unlock_dst: mutex_unlock(&dst_kvm->lock); release_src: - atomic_set_release(&src_sev->migration_in_progress, 0); + atomic_set_release(&src_kvm->migration_in_progress, 0); release_dst: - atomic_set_release(&dst_sev->migration_in_progress, 0); + atomic_set_release(&dst_kvm->migration_in_progress, 0); return r; } static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) { - struct kvm_sev_info *dst_sev = to_kvm_sev_info(dst_kvm); - struct kvm_sev_info *src_sev = to_kvm_sev_info(src_kvm); - mutex_unlock(&dst_kvm->lock); mutex_unlock(&src_kvm->lock); - atomic_set_release(&dst_sev->migration_in_progress, 0); - atomic_set_release(&src_sev->migration_in_progress, 0); + atomic_set_release(&dst_kvm->migration_in_progress, 0); + atomic_set_release(&src_kvm->migration_in_progress, 0); } /* vCPU mutex subclasses. */ diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index d4490eaed55d..35df8be621c5 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -107,7 +107,6 @@ struct kvm_sev_info { struct list_head mirror_vms; /* List of VMs mirroring */ struct list_head mirror_entry; /* Use as a list entry of mirrors */ struct misc_cg *misc_cg; /* For misc cgroup accounting */ - atomic_t migration_in_progress; void *snp_context; /* SNP guest context page */ void *guest_req_buf; /* Bounce buffer for SNP Guest Request input */ void *guest_resp_buf; /* Bounce buffer for SNP Guest Request output */ diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 1dedc421b3e3..0c1d637a6e7d 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -862,6 +862,7 @@ struct kvm { /* Protected by slots_locks (for writes) and RCU (for reads) */ struct xarray mem_attr_array; #endif + atomic_t migration_in_progress; char stats_id[KVM_STATS_NAME_SIZE]; }; From patchwork Fri May 16 19:19:28 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Afranji X-Patchwork-Id: 890756 Received: from mail-pf1-f201.google.com (mail-pf1-f201.google.com [209.85.210.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id F208F28136F for ; Fri, 16 May 2025 19:20:00 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423202; cv=none; b=JiNCNfe3omJqrnGGtczKl/7uY4XzbTl/ponXFiS/zkTy2tDmSuns8F9t6lGBAVD8QBcboqhZsJshA8A1uhFmIj6sb2EIQafUzPISjSdWkjrv5JuXBYqBCQCZGXYUCFiERzWh2SD1IhU5YO8ctYL5AQF1+C9q+SsEHJL8jW3E0MM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423202; c=relaxed/simple; bh=Z+Ymw5KorVqxl1MWcDtTMy/41FachyKB8z3Xc1HhzcI=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=l/91l+wY0dCJhpxL4cHDEdtIJbLz+eG+9jtpeG0iNbsJ6yzB7scerzTom0oOtJyzmI67QZILhHnrGeM5mrnonf94SziUbbbuP0kKTCNiplSYjK9Gdt0FuQ0jXbjrfrCKukaXzIT+wSOodpe9TTToXDMMwsa3nBBT36RZ1PF24DU= ARC-Authentication-Results: i=1; 
Date: Fri, 16 May 2025 19:19:28 +0000
Precedence: bulk
X-Mailing-List: linux-kselftest@vger.kernel.org
X-Mailer: git-send-email 2.49.0.1101.gccaa498523-goog
Subject: [RFC PATCH v2 08/13] KVM: x86: Refactor common code out of sev.c
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com,
erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com From: Ackerley Tng Split sev_lock_two_vms() into kvm_mark_migration_in_progress() and kvm_lock_two_vms() and refactor sev.c to use these two new functions. Co-developed-by: Sagi Shahar Signed-off-by: Sagi Shahar Co-developed-by: Vishal Annapurve Signed-off-by: Vishal Annapurve Signed-off-by: Ackerley Tng Signed-off-by: Ryan Afranji --- arch/x86/kvm/svm/sev.c | 60 ++++++++++------------------------------ arch/x86/kvm/x86.c | 62 ++++++++++++++++++++++++++++++++++++++++++ arch/x86/kvm/x86.h | 6 ++++ 3 files changed, 82 insertions(+), 46 deletions(-) diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 89c06cfcc200..b3048ec411e2 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -1836,47 +1836,6 @@ static bool is_cmd_allowed_from_mirror(u32 cmd_id) return false; } -static int sev_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) -{ - int r = -EBUSY; - - if (dst_kvm == src_kvm) - return -EINVAL; - - /* - * Bail if these VMs are already involved in a migration to avoid - * deadlock between two VMs trying to migrate to/from each other. - */ - if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1)) - return -EBUSY; - - if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1)) - goto release_dst; - - r = -EINTR; - if (mutex_lock_killable(&dst_kvm->lock)) - goto release_src; - if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING)) - goto unlock_dst; - return 0; - -unlock_dst: - mutex_unlock(&dst_kvm->lock); -release_src: - atomic_set_release(&src_kvm->migration_in_progress, 0); -release_dst: - atomic_set_release(&dst_kvm->migration_in_progress, 0); - return r; -} - -static void sev_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) -{ - mutex_unlock(&dst_kvm->lock); - mutex_unlock(&src_kvm->lock); - atomic_set_release(&dst_kvm->migration_in_progress, 0); - atomic_set_release(&src_kvm->migration_in_progress, 0); -} - /* vCPU mutex subclasses. */ enum sev_migration_role { SEV_MIGRATION_SOURCE = 0, @@ -2057,9 +2016,12 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) return -EBADF; source_kvm = fd_file(f)->private_data; - ret = sev_lock_two_vms(kvm, source_kvm); + ret = kvm_mark_migration_in_progress(kvm, source_kvm); if (ret) return ret; + ret = kvm_lock_two_vms(kvm, source_kvm); + if (ret) + goto out_mark_migration_done; if (kvm->arch.vm_type != source_kvm->arch.vm_type || sev_guest(kvm) || !sev_guest(source_kvm)) { @@ -2105,7 +2067,9 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) put_misc_cg(cg_cleanup_sev->misc_cg); cg_cleanup_sev->misc_cg = NULL; out_unlock: - sev_unlock_two_vms(kvm, source_kvm); + kvm_unlock_two_vms(kvm, source_kvm); +out_mark_migration_done: + kvm_mark_migration_done(kvm, source_kvm); return ret; } @@ -2779,9 +2743,12 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd) return -EBADF; source_kvm = fd_file(f)->private_data; - ret = sev_lock_two_vms(kvm, source_kvm); + ret = kvm_mark_migration_in_progress(kvm, source_kvm); if (ret) return ret; + ret = kvm_lock_two_vms(kvm, source_kvm); + if (ret) + goto e_mark_migration_done; /* * Mirrors of mirrors should work, but let's not get silly. 
Also @@ -2821,9 +2788,10 @@ int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd) * KVM contexts as the original, and they may have different * memory-views. */ - e_unlock: - sev_unlock_two_vms(kvm, source_kvm); + kvm_unlock_two_vms(kvm, source_kvm); +e_mark_migration_done: + kvm_mark_migration_done(kvm, source_kvm); return ret; } diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index f6ce044b090a..422c66a033d2 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4502,6 +4502,68 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info) } EXPORT_SYMBOL_GPL(kvm_get_msr_common); +int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm) +{ + int r; + + if (dst_kvm == src_kvm) + return -EINVAL; + + /* + * Bail if these VMs are already involved in a migration to avoid + * deadlock between two VMs trying to migrate to/from each other. + */ + r = -EBUSY; + if (atomic_cmpxchg_acquire(&dst_kvm->migration_in_progress, 0, 1)) + return r; + + if (atomic_cmpxchg_acquire(&src_kvm->migration_in_progress, 0, 1)) + goto release_dst; + + return 0; + +release_dst: + atomic_set_release(&dst_kvm->migration_in_progress, 0); + return r; +} +EXPORT_SYMBOL_GPL(kvm_mark_migration_in_progress); + +void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm) +{ + atomic_set_release(&dst_kvm->migration_in_progress, 0); + atomic_set_release(&src_kvm->migration_in_progress, 0); +} +EXPORT_SYMBOL_GPL(kvm_mark_migration_done); + +int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) +{ + int r; + + if (dst_kvm == src_kvm) + return -EINVAL; + + r = -EINTR; + if (mutex_lock_killable(&dst_kvm->lock)) + return r; + + if (mutex_lock_killable_nested(&src_kvm->lock, SINGLE_DEPTH_NESTING)) + goto unlock_dst; + + return 0; + +unlock_dst: + mutex_unlock(&dst_kvm->lock); + return r; +} +EXPORT_SYMBOL_GPL(kvm_lock_two_vms); + +void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) +{ + mutex_unlock(&dst_kvm->lock); + mutex_unlock(&src_kvm->lock); +} +EXPORT_SYMBOL_GPL(kvm_unlock_two_vms); + /* * Read or write a bunch of msrs. All parameters are kernel addresses. 
* diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h index 88a9475899c8..508f9509546c 100644 --- a/arch/x86/kvm/x86.h +++ b/arch/x86/kvm/x86.h @@ -649,4 +649,10 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, int cpl, int kvm_emulate_hypercall(struct kvm_vcpu *vcpu); +int kvm_mark_migration_in_progress(struct kvm *dst_kvm, struct kvm *src_kvm); +void kvm_mark_migration_done(struct kvm *dst_kvm, struct kvm *src_kvm); + +int kvm_lock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm); +void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm); + #endif From patchwork Fri May 16 19:19:29 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Afranji X-Patchwork-Id: 890946 Received: from mail-pl1-f201.google.com (mail-pl1-f201.google.com [209.85.214.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8A48328151E for ; Fri, 16 May 2025 19:20:02 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.214.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423204; cv=none; b=nfuyv34fKDecGkobEGWobMOjmkdj/RQu5M8rQdk2bwEhi2fABo+SpJYc1a9xcKUfhUTn9JQjCidXqeHg7Ql3GiNFzeunJLtAHZ60zNPayZ/UCySM4IbKTHuS5r4mwy4vLyO8tTcFfjgPNB1Xqsr7nkmamHnrKpmGHp1aCHX3fjE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423204; c=relaxed/simple; bh=NuB/f6nkq4OHFjVHs3gXXIOsYIstssVeMan9gyJklK8=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=VsAqd7JgF/jEOwry2UJKN1WpDWfkSFBTMPnxo8Q8MOqlcnwq6CXpyUIYekf3rAoealtkQq90ECjAY30Hmo3b6QKeX7zINZd0U8U+MXlWjZA3YHR4nVGb8pu9Nv8zwVPLNjlFvgAgI8JUIdMg+M0OB81pUN2fjD5JZAgjpGSSRWQ= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--afranji.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=mjwW8P6m; arc=none smtp.client-ip=209.85.214.201 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--afranji.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="mjwW8P6m" Received: by mail-pl1-f201.google.com with SMTP id d9443c01a7336-231e059b34dso6636955ad.0 for ; Fri, 16 May 2025 12:20:02 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1747423202; x=1748028002; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=pcD6u0JZ2o5dpCGxWEFpq76ioIg39n0Fm+ILo2b7fTo=; b=mjwW8P6maoDIIbGFFNwK97/yj8bo7I0LSYbVCYRHIkRdmadi0sB2A+stpItNvu/amb XfMiDf9/M1UBoxk3h/um2ikuxsa2cyWBpssL6pN5v+ATJXvTQPlrxLQ5QYtJPf7VU9Mm 3yQLA2gsHV+YLaf1x5vDZpxfgzVCLctzeGLv2n/USSrEegGY009AEeobfhJcsUk13dkJ S0cSUmyUuLp2JkjarNbkujX9H8PvhvpOiHCIN8uGSNhHLIxa5Y17nzZe1lyBtlbJ7PRI X06dRR8vSfOVMdAuTnayTVxJUXhmyPpiPFIoFDJG88CeEgtaXPYinrGHi4KSGFzwRUr9 pRPw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747423202; x=1748028002; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; 
bh=pcD6u0JZ2o5dpCGxWEFpq76ioIg39n0Fm+ILo2b7fTo=; b=tlGA5QJ1NDkCqsp98URz0DaqPxOYmVAYX8MluFOhwo9xlLEqp9b21PI+hLWHyF/CMO b2q70VwY0o2jVh/YPFSAntXB0nO2Hg+7oFuGu8QI6OuMzpE/X1bQPlyKkWvJUX5RJz6S Y71aaoB18T+YG5PItIABanZhDe7pz8voF/75leQ1dwLGDp7AjMTvu0Wp72JBrmwqw3/T ax/vjqI9Z50KGscxXNHEdwW4Sn45aoUOii7leCxu4kjO7wBcbMfLhrb3bS0Cko3eZUd/ QmKIrL43jY0ybs/dPKYN66zzzMm7MnMmMiqrJIh/k9wWZfF4GVc2npl3ukOD0ES6PZma jg9A== X-Forwarded-Encrypted: i=1; AJvYcCWEaTctp9krgKTEOmpyQ4Tjn4Ss5N4sc+6Xt6WT6azYjDuLRYKGVj07HxhDgJsiot3tRLvGh8vBrRWWIcwvSek=@vger.kernel.org X-Gm-Message-State: AOJu0Yz1s2nbECNDmn+NTDLbVUIwjHKFwvbKENmd6Z4ewTifGuttsniY hTdOMR1iK8fal2Jq5oGYB1uphz0oc671q2BGrcqYci1hGsvVgb82lSa1Js1+8jKxcgcB3WWvkXZ ujxjoqQ3hXtCaILeqcCK1LsXO1/Na5YuadDeVISzbIfc/IvduOEJvCamW8YX1QYpmZqkHwuyJBg qJzv0= X-Google-Smtp-Source: AGHT+IHAks9p6aEl1daXh7zR8tIAiKXLPdB+oQKMTtgz7kcarMzyttyq7vXQeQX44ygO4jg4JLr2qMfpus3k X-Received: from plpe4.prod.google.com ([2002:a17:903:3c24:b0:22e:4288:ad7]) (user=afranji job=prod-delivery.src-stubby-dispatcher) by 2002:a17:903:40d0:b0:224:194c:694c with SMTP id d9443c01a7336-231de3764d3mr58245605ad.28.1747423201887; Fri, 16 May 2025 12:20:01 -0700 (PDT) Date: Fri, 16 May 2025 19:19:29 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.49.0.1101.gccaa498523-goog Message-ID: <02fa2a32b0628bf9e8e9700a79fa02f0b13b2e90.1747368093.git.afranji@google.com> Subject: [RFC PATCH v2 09/13] KVM: x86: Refactor common migration preparation code out of sev_vm_move_enc_context_from From: Ryan Afranji To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com X-ccpol: medium From: Ackerley Tng Co-developed-by: Sagi Shahar Signed-off-by: Sagi Shahar Co-developed-by: Vishal Annapurve Signed-off-by: Vishal Annapurve Signed-off-by: Ackerley Tng Signed-off-by: Ryan Afranji --- arch/x86/include/asm/kvm_host.h | 2 +- arch/x86/kvm/svm/sev.c | 29 +++--------------------- arch/x86/kvm/svm/svm.h | 2 +- arch/x86/kvm/x86.c | 39 ++++++++++++++++++++++++++++++++- 4 files changed, 43 insertions(+), 29 deletions(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 6c06f3d6e081..179618300270 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1871,7 +1871,7 @@ struct kvm_x86_ops { int (*mem_enc_register_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*mem_enc_unregister_region)(struct kvm *kvm, struct kvm_enc_region *argp); int (*vm_copy_enc_context_from)(struct kvm *kvm, unsigned int source_fd); - int (*vm_move_enc_context_from)(struct kvm *kvm, unsigned int source_fd); + int (*vm_move_enc_context_from)(struct kvm *kvm, struct kvm *source_kvm); void (*guest_memory_reclaimed)(struct kvm *kvm); int (*get_feature_msr)(u32 msr, u64 *data); diff --git 
a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index b3048ec411e2..689521d9e26f 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -2000,34 +2000,15 @@ static int sev_check_source_vcpus(struct kvm *dst, struct kvm *src) return 0; } -int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) +int sev_vm_move_enc_context_from(struct kvm *kvm, struct kvm *source_kvm) { struct kvm_sev_info *dst_sev = to_kvm_sev_info(kvm); struct kvm_sev_info *src_sev, *cg_cleanup_sev; - CLASS(fd, f)(source_fd); - struct kvm *source_kvm; bool charged = false; int ret; - if (fd_empty(f)) - return -EBADF; - - if (!file_is_kvm(fd_file(f))) - return -EBADF; - - source_kvm = fd_file(f)->private_data; - ret = kvm_mark_migration_in_progress(kvm, source_kvm); - if (ret) - return ret; - ret = kvm_lock_two_vms(kvm, source_kvm); - if (ret) - goto out_mark_migration_done; - - if (kvm->arch.vm_type != source_kvm->arch.vm_type || - sev_guest(kvm) || !sev_guest(source_kvm)) { - ret = -EINVAL; - goto out_unlock; - } + if (sev_guest(kvm) || !sev_guest(source_kvm)) + return -EINVAL; src_sev = to_kvm_sev_info(source_kvm); @@ -2066,10 +2047,6 @@ int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) sev_misc_cg_uncharge(cg_cleanup_sev); put_misc_cg(cg_cleanup_sev->misc_cg); cg_cleanup_sev->misc_cg = NULL; -out_unlock: - kvm_unlock_two_vms(kvm, source_kvm); -out_mark_migration_done: - kvm_mark_migration_done(kvm, source_kvm); return ret; } diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h index 35df8be621c5..7bd31c0b135a 100644 --- a/arch/x86/kvm/svm/svm.h +++ b/arch/x86/kvm/svm/svm.h @@ -757,7 +757,7 @@ int sev_mem_enc_register_region(struct kvm *kvm, int sev_mem_enc_unregister_region(struct kvm *kvm, struct kvm_enc_region *range); int sev_vm_copy_enc_context_from(struct kvm *kvm, unsigned int source_fd); -int sev_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd); +int sev_vm_move_enc_context_from(struct kvm *kvm, struct kvm *source_kvm); void sev_guest_memory_reclaimed(struct kvm *kvm); int sev_handle_vmgexit(struct kvm_vcpu *vcpu); diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 422c66a033d2..637540309456 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6597,6 +6597,43 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event, return 0; } +static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) +{ + int r; + struct kvm *source_kvm; + struct fd f = fdget(source_fd); + struct file *file = fd_file(f); + + r = -EBADF; + if (!file) + return r; + + if (!file_is_kvm(file)) + goto out_fdput; + + r = -EINVAL; + source_kvm = file->private_data; + if (kvm->arch.vm_type != source_kvm->arch.vm_type) + goto out_fdput; + + r = kvm_mark_migration_in_progress(kvm, source_kvm); + if (r) + goto out_fdput; + + r = kvm_lock_two_vms(kvm, source_kvm); + if (r) + goto out_mark_migration_done; + + r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm); + + kvm_unlock_two_vms(kvm, source_kvm); +out_mark_migration_done: + kvm_mark_migration_done(kvm, source_kvm); +out_fdput: + fdput(f); + return r; +} + int kvm_vm_ioctl_enable_cap(struct kvm *kvm, struct kvm_enable_cap *cap) { @@ -6738,7 +6775,7 @@ int kvm_vm_ioctl_enable_cap(struct kvm *kvm, if (!kvm_x86_ops.vm_move_enc_context_from) break; - r = kvm_x86_call(vm_move_enc_context_from)(kvm, cap->args[0]); + r = kvm_vm_move_enc_context_from(kvm, cap->args[0]); break; case KVM_CAP_EXIT_HYPERCALL: if (cap->args[0] & ~KVM_EXIT_HYPERCALL_VALID_MASK) { From 
patchwork Fri May 16 19:19:30 2025
X-Patchwork-Submitter: Ryan Afranji
X-Patchwork-Id: 890755
FNBZlDBtaMvWGpB7IKHjFOXsmJf2+6u3nIyry/y2KMS3GECFCo4z2RQNPH303ceDo6v/6PHwJ52 hCNDCBHlzV6m8/YzHPrO8v880m0xAAOifm2OQ765abVgl3BqP6UFTpEa4MqSG9Jg4sDn3LcY0HR geHkk= X-Google-Smtp-Source: AGHT+IHuWIFRKFYUeYZ/E0Mn4pw8FwJ4Uk8VOlO7RIg7ETTqbymvwLUfY9oGU3oIEt0b7BaEDlNyVDhXpsne X-Received: from pjb12.prod.google.com ([2002:a17:90b:2f0c:b0:30a:a05c:6e7d]) (user=afranji job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:558e:b0:2ee:8ea0:6b9c with SMTP id 98e67ed59e1d1-30e830fb83cmr6780268a91.12.1747423203424; Fri, 16 May 2025 12:20:03 -0700 (PDT) Date: Fri, 16 May 2025 19:19:30 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.49.0.1101.gccaa498523-goog Message-ID: <7c51d4ae251323ce8c224aa362a4be616b4cfeba.1747368093.git.afranji@google.com> Subject: [RFC PATCH v2 10/13] KVM: x86: Let moving encryption context be configurable From: Ryan Afranji To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com X-ccpol: medium From: Ackerley Tng SEV-capable VMs may also use the KVM_X86_SW_PROTECTED_VM type, but they will still need architecture-specific handling to move encryption context. Hence, we let moving of encryption context be configurable and store that configuration in a flag. Co-developed-by: Vishal Annapurve Signed-off-by: Vishal Annapurve Signed-off-by: Ackerley Tng Signed-off-by: Ryan Afranji --- arch/x86/include/asm/kvm_host.h | 1 + arch/x86/kvm/svm/sev.c | 2 ++ arch/x86/kvm/x86.c | 9 ++++++++- 3 files changed, 11 insertions(+), 1 deletion(-) diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h index 179618300270..db37ce814611 100644 --- a/arch/x86/include/asm/kvm_host.h +++ b/arch/x86/include/asm/kvm_host.h @@ -1576,6 +1576,7 @@ struct kvm_arch { #define SPLIT_DESC_CACHE_MIN_NR_OBJECTS (SPTE_ENT_PER_PAGE + 1) struct kvm_mmu_memory_cache split_desc_cache; + bool use_vm_enc_ctxt_op; gfn_t gfn_direct_bits; /* diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c index 689521d9e26f..95083556d321 100644 --- a/arch/x86/kvm/svm/sev.c +++ b/arch/x86/kvm/svm/sev.c @@ -442,6 +442,8 @@ static int __sev_guest_init(struct kvm *kvm, struct kvm_sev_cmd *argp, if (ret) goto e_no_asid; + kvm->arch.use_vm_enc_ctxt_op = true; + init_args.probe = false; ret = sev_platform_init(&init_args); if (ret) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 637540309456..3a7e05c47aa8 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -6624,7 +6624,14 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) if (r) goto out_mark_migration_done; - r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm); + /* + * Different types of VMs will allow userspace to define if moving + * encryption context should be required. 
+ */ + if (kvm->arch.use_vm_enc_ctxt_op && + kvm_x86_ops.vm_move_enc_context_from) { + r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm); + } kvm_unlock_two_vms(kvm, source_kvm); out_mark_migration_done: From patchwork Fri May 16 19:19:31 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Afranji X-Patchwork-Id: 890945 Received: from mail-pj1-f74.google.com (mail-pj1-f74.google.com [209.85.216.74]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 8866428312C for ; Fri, 16 May 2025 19:20:05 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.216.74 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423207; cv=none; b=AvuFQehDfYUVzPtoXX0vhvCBIMW6Fd0Nu3f2ImefTdJmXVJdaetYbtf8DgqmuRyyXJwLCkbiO12KR9cz+9Y6fKI7nZzNTk6KDsiRurwxBaifJ7HQuWwTSy8717knCgn6ILB8/2xEMPEntFzG4L3a2v/XsAwe8KoyFz7vycApBaw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423207; c=relaxed/simple; bh=PJwIKr9/FUWqaO/m70ZkKnN0L9kCQe8nUiK6kBAipNU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=huK3l+WDU4uTF00Jzcr9fj0QG91A8fhY5MVxAPH30JA88tcuhUbZmzvp4S22ClYpXhubzKUwbDUxtf+eItcNVXgeRubDkY1MGwFxtSUu8x9EGA/irh55wnP0zH6i8eJGOlfalq92yesDBUZXr/RNDmXtpXuFY5dBEQ1zij0su+U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--afranji.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=Ib356kUe; arc=none smtp.client-ip=209.85.216.74 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--afranji.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="Ib356kUe" Received: by mail-pj1-f74.google.com with SMTP id 98e67ed59e1d1-30e78145dc4so1778276a91.2 for ; Fri, 16 May 2025 12:20:05 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1747423205; x=1748028005; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=WvUe9DtS12CRd8R+tPT1Oi+Tlg1Y+CwC86YQZiCNiXs=; b=Ib356kUeUqXhHziBuwxaxvwfXjV2q9RShSK7l8Rwt1BWL0h5yNLVVuuGEcaGAB6FaI ZmtCyr/MGhvZoKJc8ekH/RFkuwguLoLKaHzgMugbElTI31uPzRVbsjGQVMqewYcAMSZc 8Kqbi7AB15FDZRzOTX/GgpL4zbNz0bIfowEACjqXcZ1l5eJRgxGu+GLZf/BG290fX0EE PksJ42ARCQQfHS1CewmZLPemZt69XUTdK3TzTimVYc0Y+Ok7B9GnI2knVsoQ4uGz6pis 9u/5r9ck9oiv4Teep+PFbulRFuPWLEA0wpF7me5ON97/2pW+RTid94OQK+TvxiDMxxQX x/qQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747423205; x=1748028005; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=WvUe9DtS12CRd8R+tPT1Oi+Tlg1Y+CwC86YQZiCNiXs=; b=fztAoX3BBqRhTJWjMSikLCVeUgt2+2jpQ1wQ1I9Yl35rqdkbSYYG6icBrFDSD61H8J w+z2hm1bMCL4WNZhPoig9ZWzj4lNiL+PJJGPZbABPb6IRxc7UF55+0+mI7ec/xqrVqby sn869K6jj0n20B/G2D14IErks+5SLdMFNNryM4DSjU6o++mL8xlBxI7pL3bLszAZ88sz 1bBzMi/yY7nX5fC44YvMiBbgYgnMSpdu13oCX86dIVnRfEIrn9CKozsmmqJ4G7yblqCT JJuxbw5NoEUavMlHZukdnTWIEFtOUovWBPpxBiwoCeIglHW1S0WF/Cb1TQo7f81XlC/n 
negg== X-Forwarded-Encrypted: i=1; AJvYcCX3nUhnKxAEd3HGMllEyb0dpGRwUIscWoKaYQFRms7s25rK45pOpejJsUm3DvhFi6W02FGjdwpquWhRkMFhxws=@vger.kernel.org X-Gm-Message-State: AOJu0YxF/fZBU0Rixg4W9GUKoFlqtC+LvEDluSHQa9AuBHn/O6Z4yIWU u2vzXS68HV10uYNzNY2prJ6zwLZRHy212WpaPZPsggtBUhGPdo3asrP3taq1B7SLqnbWuZjBMWM GyH8s3y7YZw== X-Google-Smtp-Source: AGHT+IFe7wb4jKDnCBfjN1IPr6GM/6UYnn4ThD1t1Bl8AbwTaMAWMava3PN/z1prmrpDxfzbHRStPUVZPwkp X-Received: from pjboi16.prod.google.com ([2002:a17:90b:3a10:b0:2fc:2ee0:d38a]) (user=afranji job=prod-delivery.src-stubby-dispatcher) by 2002:a17:90b:2d83:b0:30e:9349:2da4 with SMTP id 98e67ed59e1d1-30e934931cbmr3770525a91.12.1747423204953; Fri, 16 May 2025 12:20:04 -0700 (PDT) Date: Fri, 16 May 2025 19:19:31 +0000 In-Reply-To: Precedence: bulk X-Mailing-List: linux-kselftest@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.49.0.1101.gccaa498523-goog Message-ID: <50e8f0950e00ec11385e5ce26764f95db80a973a.1747368093.git.afranji@google.com> Subject: [RFC PATCH v2 11/13] KVM: x86: Handle moving of memory context for intra-host migration From: Ryan Afranji To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com From: Ackerley Tng Migration of memory context involves moving lpage_info and mem_attr_array from source to destination VM. Co-developed-by: Sagi Shahar Signed-off-by: Sagi Shahar Co-developed-by: Vishal Annapurve Signed-off-by: Vishal Annapurve Signed-off-by: Ackerley Tng Signed-off-by: Ryan Afranji --- arch/x86/kvm/x86.c | 110 +++++++++++++++++++++++++++++++++++++++ include/linux/kvm_host.h | 17 ++++++ virt/kvm/guest_memfd.c | 25 +++++++++ 3 files changed, 152 insertions(+) diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c index 3a7e05c47aa8..887702781465 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -4564,6 +4564,33 @@ void kvm_unlock_two_vms(struct kvm *dst_kvm, struct kvm *src_kvm) } EXPORT_SYMBOL_GPL(kvm_unlock_two_vms); +static int kvm_lock_vm_memslots(struct kvm *dst_kvm, struct kvm *src_kvm) +{ + int r = -EINVAL; + + if (dst_kvm == src_kvm) + return r; + + r = -EINTR; + if (mutex_lock_killable(&dst_kvm->slots_lock)) + return r; + + if (mutex_lock_killable_nested(&src_kvm->slots_lock, SINGLE_DEPTH_NESTING)) + goto unlock_dst; + + return 0; + +unlock_dst: + mutex_unlock(&dst_kvm->slots_lock); + return r; +} + +static void kvm_unlock_vm_memslots(struct kvm *dst_kvm, struct kvm *src_kvm) +{ + mutex_unlock(&src_kvm->slots_lock); + mutex_unlock(&dst_kvm->slots_lock); +} + /* * Read or write a bunch of msrs. All parameters are kernel addresses. 
* @@ -6597,6 +6624,78 @@ int kvm_vm_ioctl_irq_line(struct kvm *kvm, struct kvm_irq_level *irq_event, return 0; } +static bool memslot_configurations_match(struct kvm_memslots *src_slots, + struct kvm_memslots *dst_slots) +{ + struct kvm_memslot_iter src_iter; + struct kvm_memslot_iter dst_iter; + + kvm_for_each_memslot_pair(&src_iter, src_slots, &dst_iter, dst_slots) { + if (src_iter.slot->base_gfn != dst_iter.slot->base_gfn || + src_iter.slot->npages != dst_iter.slot->npages || + src_iter.slot->flags != dst_iter.slot->flags) + return false; + + if (kvm_slot_can_be_private(dst_iter.slot) && + !kvm_gmem_params_match(src_iter.slot, dst_iter.slot)) + return false; + } + + /* There should be no more nodes to iterate if configurations match */ + return !src_iter.node && !dst_iter.node; +} + +static int kvm_move_memory_ctxt_from(struct kvm *dst, struct kvm *src) +{ + struct kvm_memslot_iter src_iter; + struct kvm_memslot_iter dst_iter; + struct kvm_memslots *src_slots, *dst_slots; + int i; + + /* TODO: Do we also need to check consistency for as_id == SMM? */ + src_slots = __kvm_memslots(src, 0); + dst_slots = __kvm_memslots(dst, 0); + + if (!memslot_configurations_match(src_slots, dst_slots)) + return -EINVAL; + + /* + * Transferring lpage_info is an optimization, lpage_info can be rebuilt + * by the destination VM. + */ + kvm_for_each_memslot_pair(&src_iter, src_slots, &dst_iter, dst_slots) { + for (i = 1; i < KVM_NR_PAGE_SIZES; ++i) { + unsigned long ugfn = dst_iter.slot->userspace_addr >> PAGE_SHIFT; + int level = i + 1; + + /* + * If the gfn and userspace address are not aligned wrt each + * other, skip migrating lpage_info. + */ + if ((dst_iter.slot->base_gfn ^ ugfn) & + (KVM_PAGES_PER_HPAGE(level) - 1)) + continue; + + kvfree(dst_iter.slot->arch.lpage_info[i - 1]); + dst_iter.slot->arch.lpage_info[i - 1] = + src_iter.slot->arch.lpage_info[i - 1]; + src_iter.slot->arch.lpage_info[i - 1] = NULL; + } + } + +#ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES + /* + * For VMs that don't use private memory, this will just be moving an + * empty xarray pointer. + */ + dst->mem_attr_array.xa_head = src->mem_attr_array.xa_head; + src->mem_attr_array.xa_head = NULL; +#endif + + kvm_vm_dead(src); + return 0; +} + static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) { int r; @@ -6624,6 +6723,14 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) if (r) goto out_mark_migration_done; + r = kvm_lock_vm_memslots(kvm, source_kvm); + if (r) + goto out_unlock; + + r = kvm_move_memory_ctxt_from(kvm, source_kvm); + if (r) + goto out_unlock_memslots; + /* * Different types of VMs will allow userspace to define if moving * encryption context should be required. 
@@ -6633,6 +6740,9 @@ static int kvm_vm_move_enc_context_from(struct kvm *kvm, unsigned int source_fd) r = kvm_x86_call(vm_move_enc_context_from)(kvm, source_kvm); } +out_unlock_memslots: + kvm_unlock_vm_memslots(kvm, source_kvm); +out_unlock: kvm_unlock_two_vms(kvm, source_kvm); out_mark_migration_done: kvm_mark_migration_done(kvm, source_kvm); diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h index 0c1d637a6e7d..99abe9879856 100644 --- a/include/linux/kvm_host.h +++ b/include/linux/kvm_host.h @@ -1197,6 +1197,16 @@ struct kvm_memory_slot *gfn_to_memslot(struct kvm *kvm, gfn_t gfn); struct kvm_memslots *kvm_vcpu_memslots(struct kvm_vcpu *vcpu); struct kvm_memory_slot *kvm_vcpu_gfn_to_memslot(struct kvm_vcpu *vcpu, gfn_t gfn); + +/* Iterate over a pair of memslots in gfn order until one of the trees end */ +#define kvm_for_each_memslot_pair(iter1, slots1, iter2, slots2) \ + for (kvm_memslot_iter_start(iter1, slots1, 0), \ + kvm_memslot_iter_start(iter2, slots2, 0); \ + kvm_memslot_iter_is_valid(iter1, U64_MAX) && \ + kvm_memslot_iter_is_valid(iter2, U64_MAX); \ + kvm_memslot_iter_next(iter1), \ + kvm_memslot_iter_next(iter2)) + /* * KVM_SET_USER_MEMORY_REGION ioctl allows the following operations: * - create a new memory slot @@ -2521,6 +2531,8 @@ static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn) int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, kvm_pfn_t *pfn, struct page **page, int *max_order); +bool kvm_gmem_params_match(struct kvm_memory_slot *slot1, + struct kvm_memory_slot *slot2); #else static inline int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, gfn_t gfn, @@ -2530,6 +2542,11 @@ static inline int kvm_gmem_get_pfn(struct kvm *kvm, KVM_BUG_ON(1, kvm); return -EIO; } +static inline bool kvm_gmem_params_match(struct kvm_memory_slot *slot1, + struct kvm_memory_slot *slot2) +{ + return false; +} #endif /* CONFIG_KVM_PRIVATE_MEM */ #ifdef CONFIG_HAVE_KVM_ARCH_GMEM_PREPARE diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c index d76bd1119198..1a4198c4a4dd 100644 --- a/virt/kvm/guest_memfd.c +++ b/virt/kvm/guest_memfd.c @@ -778,6 +778,31 @@ int kvm_gmem_get_pfn(struct kvm *kvm, struct kvm_memory_slot *slot, } EXPORT_SYMBOL_GPL(kvm_gmem_get_pfn); +bool kvm_gmem_params_match(struct kvm_memory_slot *slot1, + struct kvm_memory_slot *slot2) +{ + bool ret; + struct file *file1; + struct file *file2; + + if (slot1->gmem.pgoff != slot2->gmem.pgoff) + return false; + + file1 = kvm_gmem_get_file(slot1); + file2 = kvm_gmem_get_file(slot2); + + ret = (file1 && file2 && + file_inode(file1) == file_inode(file2)); + + if (file1) + fput(file1); + if (file2) + fput(file2); + + return ret; +} +EXPORT_SYMBOL_GPL(kvm_gmem_params_match); + #ifdef CONFIG_KVM_GENERIC_PRIVATE_MEM long kvm_gmem_populate(struct kvm *kvm, gfn_t start_gfn, void __user *src, long npages, kvm_gmem_populate_cb post_populate, void *opaque) From patchwork Fri May 16 19:19:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Afranji X-Patchwork-Id: 890754 Received: from mail-pg1-f202.google.com (mail-pg1-f202.google.com [209.85.215.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DA4E5283152 for ; Fri, 16 May 2025 19:20:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.215.202 ARC-Seal: i=1; a=rsa-sha256; 
Date: Fri, 16 May 2025 19:19:32 +0000
Precedence: bulk
X-Mailing-List: linux-kselftest@vger.kernel.org
List-Id: List-Subscribe: List-Unsubscribe: Mime-Version: 1.0 References: X-Mailer: git-send-email 2.49.0.1101.gccaa498523-goog Message-ID: <8a479dcaa271976e784d8b592e75d883a2c7721a.1747368093.git.afranji@google.com> Subject: [RFC PATCH v2 12/13] KVM: selftests: Generalize migration functions from sev_migrate_tests.c From: Ryan Afranji To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com, seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org, linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com, shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com, chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com, vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name, vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com, wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com, kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com From: Ackerley Tng These functions will be used in private (guest mem) migration tests. Signed-off-by: Ackerley Tng Signed-off-by: Ryan Afranji --- .../testing/selftests/kvm/include/kvm_util.h | 13 +++++ .../selftests/kvm/x86/sev_migrate_tests.c | 48 +++++++------------ 2 files changed, 30 insertions(+), 31 deletions(-) diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h index 68faa658b69e..80375d6456a5 100644 --- a/tools/testing/selftests/kvm/include/kvm_util.h +++ b/tools/testing/selftests/kvm/include/kvm_util.h @@ -378,6 +378,19 @@ static inline void vm_enable_cap(struct kvm_vm *vm, uint32_t cap, uint64_t arg0) vm_ioctl(vm, KVM_ENABLE_CAP, &enable_cap); } +static inline int __vm_migrate_from(struct kvm_vm *dst, struct kvm_vm *src) +{ + return __vm_enable_cap(dst, KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM, src->fd); +} + +static inline void vm_migrate_from(struct kvm_vm *dst, struct kvm_vm *src) +{ + int ret; + + ret = __vm_migrate_from(dst, src); + TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d\n", ret, errno); +} + static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa, uint64_t size, uint64_t attributes) { diff --git a/tools/testing/selftests/kvm/x86/sev_migrate_tests.c b/tools/testing/selftests/kvm/x86/sev_migrate_tests.c index 0a6dfba3905b..905cdf9b39b1 100644 --- a/tools/testing/selftests/kvm/x86/sev_migrate_tests.c +++ b/tools/testing/selftests/kvm/x86/sev_migrate_tests.c @@ -56,20 +56,6 @@ static struct kvm_vm *aux_vm_create(bool with_vcpus) return vm; } -static int __sev_migrate_from(struct kvm_vm *dst, struct kvm_vm *src) -{ - return __vm_enable_cap(dst, KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM, src->fd); -} - - -static void sev_migrate_from(struct kvm_vm *dst, struct kvm_vm *src) -{ - int ret; - - ret = __sev_migrate_from(dst, src); - TEST_ASSERT(!ret, "Migration failed, ret: %d, errno: %d", ret, errno); -} - static void test_sev_migrate_from(bool es) { struct kvm_vm *src_vm; @@ -81,13 +67,13 @@ static void test_sev_migrate_from(bool es) dst_vms[i] = aux_vm_create(true); /* Initial migration from the src to the first dst. */ - sev_migrate_from(dst_vms[0], src_vm); + vm_migrate_from(dst_vms[0], src_vm); for (i = 1; i < NR_MIGRATE_TEST_VMS; i++) - sev_migrate_from(dst_vms[i], dst_vms[i - 1]); + vm_migrate_from(dst_vms[i], dst_vms[i - 1]); /* Migrate the guest back to the original VM. 
*/ - ret = __sev_migrate_from(src_vm, dst_vms[NR_MIGRATE_TEST_VMS - 1]); + ret = __vm_migrate_from(src_vm, dst_vms[NR_MIGRATE_TEST_VMS - 1]); TEST_ASSERT(ret == -1 && errno == EIO, "VM that was migrated from should be dead. ret %d, errno: %d", ret, errno); @@ -109,7 +95,7 @@ static void *locking_test_thread(void *arg) for (i = 0; i < NR_LOCK_TESTING_ITERATIONS; ++i) { j = i % NR_LOCK_TESTING_THREADS; - __sev_migrate_from(input->vm, input->source_vms[j]); + __vm_migrate_from(input->vm, input->source_vms[j]); } return NULL; @@ -146,7 +132,7 @@ static void test_sev_migrate_parameters(void) vm_no_vcpu = vm_create_barebones(); vm_no_sev = aux_vm_create(true); - ret = __sev_migrate_from(vm_no_vcpu, vm_no_sev); + ret = __vm_migrate_from(vm_no_vcpu, vm_no_sev); TEST_ASSERT(ret == -1 && errno == EINVAL, "Migrations require SEV enabled. ret %d, errno: %d", ret, errno); @@ -160,25 +146,25 @@ static void test_sev_migrate_parameters(void) sev_es_vm_init(sev_es_vm_no_vmsa); __vm_vcpu_add(sev_es_vm_no_vmsa, 1); - ret = __sev_migrate_from(sev_vm, sev_es_vm); + ret = __vm_migrate_from(sev_vm, sev_es_vm); TEST_ASSERT( ret == -1 && errno == EINVAL, "Should not be able migrate to SEV enabled VM. ret: %d, errno: %d", ret, errno); - ret = __sev_migrate_from(sev_es_vm, sev_vm); + ret = __vm_migrate_from(sev_es_vm, sev_vm); TEST_ASSERT( ret == -1 && errno == EINVAL, "Should not be able migrate to SEV-ES enabled VM. ret: %d, errno: %d", ret, errno); - ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm); + ret = __vm_migrate_from(vm_no_vcpu, sev_es_vm); TEST_ASSERT( ret == -1 && errno == EINVAL, "SEV-ES migrations require same number of vCPUS. ret: %d, errno: %d", ret, errno); - ret = __sev_migrate_from(vm_no_vcpu, sev_es_vm_no_vmsa); + ret = __vm_migrate_from(vm_no_vcpu, sev_es_vm_no_vmsa); TEST_ASSERT( ret == -1 && errno == EINVAL, "SEV-ES migrations require UPDATE_VMSA. 
ret %d, errno: %d", @@ -331,14 +317,14 @@ static void test_sev_move_copy(void) sev_mirror_create(mirror_vm, sev_vm); - sev_migrate_from(dst_mirror_vm, mirror_vm); - sev_migrate_from(dst_vm, sev_vm); + vm_migrate_from(dst_mirror_vm, mirror_vm); + vm_migrate_from(dst_vm, sev_vm); - sev_migrate_from(dst2_vm, dst_vm); - sev_migrate_from(dst2_mirror_vm, dst_mirror_vm); + vm_migrate_from(dst2_vm, dst_vm); + vm_migrate_from(dst2_mirror_vm, dst_mirror_vm); - sev_migrate_from(dst3_mirror_vm, dst2_mirror_vm); - sev_migrate_from(dst3_vm, dst2_vm); + vm_migrate_from(dst3_mirror_vm, dst2_mirror_vm); + vm_migrate_from(dst3_vm, dst2_vm); kvm_vm_free(dst_vm); kvm_vm_free(sev_vm); @@ -360,8 +346,8 @@ static void test_sev_move_copy(void) sev_mirror_create(mirror_vm, sev_vm); - sev_migrate_from(dst_mirror_vm, mirror_vm); - sev_migrate_from(dst_vm, sev_vm); + vm_migrate_from(dst_mirror_vm, mirror_vm); + vm_migrate_from(dst_vm, sev_vm); kvm_vm_free(mirror_vm); kvm_vm_free(dst_mirror_vm); From patchwork Fri May 16 19:19:33 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ryan Afranji X-Patchwork-Id: 890944 Received: from mail-pf1-f202.google.com (mail-pf1-f202.google.com [209.85.210.202]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6BB0B283C86 for ; Fri, 16 May 2025 19:20:08 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=209.85.210.202 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423210; cv=none; b=dDH3+UTEd17Bl7iU6CrzbJcdUyO+aPx3g6Gi6n7YH2my6Cp0y9N4LG5X3UIs1gT0Q09IwklPaDm2vuCopvbnvNiBcnW5NwsUI7iE+2m+3AFMtMC+DoNQvejCNeeFI/sI5SEz9o8Ilv+k1RoiFM6yqhpIeoTUNZiPqizDg3bg4cQ= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1747423210; c=relaxed/simple; bh=286azteypIeGDNxuuOVgHebViNvaqsxBRbBpJ41BQWU=; h=Date:In-Reply-To:Mime-Version:References:Message-ID:Subject:From: To:Cc:Content-Type; b=a6/gFLqmOEkDslk46QhvCT9N9cTrsQfQ1F/b27Y8GKRcE3Mc7TdHTG2o+WvX+ntNlrchL+tQBvAlYPN48hcs0ErEOvq+gX1ZC0DVw0uY4Iji/ggdsCFUPJLfFfK2QIVRQyvOlyynDAr/03zRRUlMcOOha/gAboojQh+PHnp8bzs= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com; spf=pass smtp.mailfrom=flex--afranji.bounces.google.com; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b=muzAEcx3; arc=none smtp.client-ip=209.85.210.202 Authentication-Results: smtp.subspace.kernel.org; dmarc=pass (p=reject dis=none) header.from=google.com Authentication-Results: smtp.subspace.kernel.org; spf=pass smtp.mailfrom=flex--afranji.bounces.google.com Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=google.com header.i=@google.com header.b="muzAEcx3" Received: by mail-pf1-f202.google.com with SMTP id d2e1a72fcca58-7391d68617cso2776867b3a.0 for ; Fri, 16 May 2025 12:20:08 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=20230601; t=1747423208; x=1748028008; darn=vger.kernel.org; h=cc:to:from:subject:message-id:references:mime-version:in-reply-to :date:from:to:cc:subject:date:message-id:reply-to; bh=o/DXUzrkv3zGlmZlSMCdLZe68HkBWyu2pzyjcV2UND4=; b=muzAEcx35/+br5ctE1QZqiCykknGmCfP7lXVwzcZCervlpEGX5UzDr+8cKlr7NI/MS nzJegQPu/xGt8X0pJsvQjg/pVEsAb502R6432/4gxRRAZ6CH6nwv/ymWNvQIQssfIPOq SzyDIJ8YZbjeI7YKyrUOmeZN+RYy1sWEOYvT3LDWapdo38MHQQ7zrhvq0mjYn77y/Ffe 
Date: Fri, 16 May 2025 19:19:33 +0000
Message-ID: <4310b9291b9662c1059ebcf50e267760bc8e1c6f.1747368093.git.afranji@google.com>
Subject: [RFC PATCH v2 13/13] KVM: selftests: Add tests for migration of private mem
From: Ryan Afranji
To: afranji@google.com, ackerleytng@google.com, pbonzini@redhat.com,
 seanjc@google.com, tglx@linutronix.de, x86@kernel.org, kvm@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org, tabba@google.com
Cc: mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
 shuah@kernel.org, andrew.jones@linux.dev, ricarkol@google.com,
 chao.p.peng@linux.intel.com, jarkko@kernel.org, yu.c.zhang@linux.intel.com,
 vannapurve@google.com, erdemaktas@google.com, mail@maciej.szmigiero.name,
 vbabka@suse.cz, david@redhat.com, qperret@google.com, michael.roth@amd.com,
 wei.w.wang@intel.com, liam.merwick@oracle.com, isaku.yamahata@gmail.com,
 kirill.shutemov@linux.intel.com, sagis@google.com, jthoughton@google.com

From: Ackerley Tng

Tests that private memory (in guest_mem files) can be migrated. Also
demonstrates the migration flow.
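The flow being demonstrated: the source VM writes a value into private memory
backed by a guest_memfd, the destination VM links that same memfd, the context
is moved across with vm_migrate_from(), and the destination reads the value
back. A condensed sketch of that sequence, using the helpers from this series
(ucall checks and asserts omitted; shape is the VM shape set up by the test,
while gpa, gva, slot, src_guest_code and dst_guest_code are shorthand for the
MIGRATE_PRIVATE_MEM_* constants and guest-code functions in the patch below):

	/* Source VM: back the slot with a guest_memfd and store a value privately. */
	src_vm = __vm_create_shape_with_one_vcpu(shape, &src_vcpu, 0, src_guest_code);
	src_memfd = vm_create_guest_memfd(src_vm, SZ_4K, 0);
	vm_mem_add(src_vm, DEFAULT_VM_MEM_SRC, gpa, slot, 1,
		   KVM_MEM_GUEST_MEMFD, src_memfd, 0);
	virt_map(src_vm, gva, gpa, 1);
	vm_set_memory_attributes(src_vm, gpa, SZ_4K, KVM_MEMORY_ATTRIBUTE_PRIVATE);
	vcpu_run(src_vcpu);			/* guest writes 0xdeadbeef */

	/* Destination VM: link the same guest_memfd, then migrate from the source. */
	dst_vm = __vm_create_shape_with_one_vcpu(shape, &dst_vcpu, 0, dst_guest_code);
	dst_memfd = vm_link_guest_memfd(dst_vm, src_memfd, 0);
	vm_mem_add(dst_vm, DEFAULT_VM_MEM_SRC, gpa, slot, 1,
		   KVM_MEM_GUEST_MEMFD, dst_memfd, 0);
	virt_map(dst_vm, gva, gpa, 1);
	vm_migrate_from(dst_vm, src_vm);	/* carries the private-mem state across */
	vcpu_run(dst_vcpu);			/* guest reads the migrated value */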
Signed-off-by: Ackerley Tng
Signed-off-by: Ryan Afranji
---
 tools/testing/selftests/kvm/Makefile.kvm       |  1 +
 .../kvm/x86/private_mem_migrate_tests.c        | 56 ++++++++++---------
 2 files changed, 32 insertions(+), 25 deletions(-)

diff --git a/tools/testing/selftests/kvm/Makefile.kvm b/tools/testing/selftests/kvm/Makefile.kvm
index f62b0a5aba35..e9d53ea6c6c8 100644
--- a/tools/testing/selftests/kvm/Makefile.kvm
+++ b/tools/testing/selftests/kvm/Makefile.kvm
@@ -85,6 +85,7 @@ TEST_GEN_PROGS_x86 += x86/platform_info_test
 TEST_GEN_PROGS_x86 += x86/pmu_counters_test
 TEST_GEN_PROGS_x86 += x86/pmu_event_filter_test
 TEST_GEN_PROGS_x86 += x86/private_mem_conversions_test
+TEST_GEN_PROGS_x86 += x86/private_mem_migrate_tests
 TEST_GEN_PROGS_x86 += x86/private_mem_kvm_exits_test
 TEST_GEN_PROGS_x86 += x86/set_boot_cpu_id
 TEST_GEN_PROGS_x86 += x86/set_sregs_test
diff --git a/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c b/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c
index 4226de3ebd41..4ad94ea04b66 100644
--- a/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c
+++ b/tools/testing/selftests/kvm/x86/private_mem_migrate_tests.c
@@ -1,32 +1,32 @@
 // SPDX-License-Identifier: GPL-2.0
-#include "kvm_util_base.h"
+#include "kvm_util.h"
 #include "test_util.h"
 #include "ucall_common.h"
 #include
 #include

-#define TRANSFER_PRIVATE_MEM_TEST_SLOT 10
-#define TRANSFER_PRIVATE_MEM_GPA ((uint64_t)(1ull << 32))
-#define TRANSFER_PRIVATE_MEM_GVA TRANSFER_PRIVATE_MEM_GPA
-#define TRANSFER_PRIVATE_MEM_VALUE 0xdeadbeef
+#define MIGRATE_PRIVATE_MEM_TEST_SLOT 10
+#define MIGRATE_PRIVATE_MEM_GPA ((uint64_t)(1ull << 32))
+#define MIGRATE_PRIVATE_MEM_GVA MIGRATE_PRIVATE_MEM_GPA
+#define MIGRATE_PRIVATE_MEM_VALUE 0xdeadbeef

-static void transfer_private_mem_guest_code_src(void)
+static void migrate_private_mem_data_guest_code_src(void)
 {
-	uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+	uint64_t volatile *const ptr = (uint64_t *)MIGRATE_PRIVATE_MEM_GVA;

-	*ptr = TRANSFER_PRIVATE_MEM_VALUE;
+	*ptr = MIGRATE_PRIVATE_MEM_VALUE;
 	GUEST_SYNC1(*ptr);
 }

-static void transfer_private_mem_guest_code_dst(void)
+static void migrate_private_mem_guest_code_dst(void)
 {
-	uint64_t volatile *const ptr = (uint64_t *)TRANSFER_PRIVATE_MEM_GVA;
+	uint64_t volatile *const ptr = (uint64_t *)MIGRATE_PRIVATE_MEM_GVA;

 	GUEST_SYNC1(*ptr);
 }

-static void test_transfer_private_mem(void)
+static void test_migrate_private_mem_data(bool migrate)
 {
 	struct kvm_vm *src_vm, *dst_vm;
 	struct kvm_vcpu *src_vcpu, *dst_vcpu;
@@ -40,40 +40,43 @@ static void test_transfer_private_mem(void)

 	/* Build the source VM, use it to write to private memory */
 	src_vm = __vm_create_shape_with_one_vcpu(
-		shape, &src_vcpu, 0, transfer_private_mem_guest_code_src);
+		shape, &src_vcpu, 0, migrate_private_mem_data_guest_code_src);
 	src_memfd = vm_create_guest_memfd(src_vm, SZ_4K, 0);
-	vm_mem_add(src_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
-		   TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+	vm_mem_add(src_vm, DEFAULT_VM_MEM_SRC, MIGRATE_PRIVATE_MEM_GPA,
+		   MIGRATE_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_GUEST_MEMFD,
 		   src_memfd, 0);
-	virt_map(src_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
-	vm_set_memory_attributes(src_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
+	virt_map(src_vm, MIGRATE_PRIVATE_MEM_GVA, MIGRATE_PRIVATE_MEM_GPA, 1);
+	vm_set_memory_attributes(src_vm, MIGRATE_PRIVATE_MEM_GPA, SZ_4K,
 				 KVM_MEMORY_ATTRIBUTE_PRIVATE);

 	vcpu_run(src_vcpu);
 	TEST_ASSERT_KVM_EXIT_REASON(src_vcpu, KVM_EXIT_IO);
 	get_ucall(src_vcpu, &uc);
-	TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+	TEST_ASSERT(uc.args[0] == MIGRATE_PRIVATE_MEM_VALUE,
 		    "Source VM should be able to write to private memory");

 	/* Build the destination VM with linked fd */
 	dst_vm = __vm_create_shape_with_one_vcpu(
-		shape, &dst_vcpu, 0, transfer_private_mem_guest_code_dst);
+		shape, &dst_vcpu, 0, migrate_private_mem_guest_code_dst);
 	dst_memfd = vm_link_guest_memfd(dst_vm, src_memfd, 0);
-	vm_mem_add(dst_vm, DEFAULT_VM_MEM_SRC, TRANSFER_PRIVATE_MEM_GPA,
-		   TRANSFER_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_PRIVATE,
+	vm_mem_add(dst_vm, DEFAULT_VM_MEM_SRC, MIGRATE_PRIVATE_MEM_GPA,
+		   MIGRATE_PRIVATE_MEM_TEST_SLOT, 1, KVM_MEM_GUEST_MEMFD,
 		   dst_memfd, 0);
-	virt_map(dst_vm, TRANSFER_PRIVATE_MEM_GVA, TRANSFER_PRIVATE_MEM_GPA, 1);
-	vm_set_memory_attributes(dst_vm, TRANSFER_PRIVATE_MEM_GPA, SZ_4K,
-				 KVM_MEMORY_ATTRIBUTE_PRIVATE);
+	virt_map(dst_vm, MIGRATE_PRIVATE_MEM_GVA, MIGRATE_PRIVATE_MEM_GPA, 1);
+	if (migrate)
+		vm_migrate_from(dst_vm, src_vm);
+	else
+		vm_set_memory_attributes(dst_vm, MIGRATE_PRIVATE_MEM_GPA, SZ_4K,
+					 KVM_MEMORY_ATTRIBUTE_PRIVATE);

 	vcpu_run(dst_vcpu);
 	TEST_ASSERT_KVM_EXIT_REASON(dst_vcpu, KVM_EXIT_IO);
 	get_ucall(dst_vcpu, &uc);
-	TEST_ASSERT(uc.args[0] == TRANSFER_PRIVATE_MEM_VALUE,
+	TEST_ASSERT(uc.args[0] == MIGRATE_PRIVATE_MEM_VALUE,
 		    "Destination VM should be able to read value transferred");
 }

@@ -81,7 +84,10 @@ int main(int argc, char *argv[])
 {
 	TEST_REQUIRE(kvm_check_cap(KVM_CAP_VM_TYPES) &
 		     BIT(KVM_X86_SW_PROTECTED_VM));

-	test_transfer_private_mem();
+	test_migrate_private_mem_data(false);
+
+	if (kvm_check_cap(KVM_CAP_VM_MOVE_ENC_CONTEXT_FROM))
+		test_migrate_private_mem_data(true);

 	return 0;
 }