From patchwork Mon May 19 17:51:24 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892071
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Danilo Krummrich, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 01/40] drm/gpuvm: Don't require obj lock in destructor path
Date: Mon, 19 May 2025 10:51:24 -0700
Message-ID: <20250519175348.11924-2-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

See commit a414fe3a2129 ("drm/msm/gem: Drop obj lock in
msm_gem_free_object()") for justification.
Cc: Danilo Krummrich
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index f9eb56f24bef..1e89a98caad4 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1511,7 +1511,9 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);
+
 	list_del(&vm_bo->list.entry.gem);
 
 	if (ops && ops->vm_bo_free)
@@ -1871,7 +1873,8 @@ drm_gpuva_unlink(struct drm_gpuva *va)
 	if (unlikely(!obj))
 		return;
 
-	drm_gem_gpuva_assert_lock_held(obj);
+	if (kref_read(&obj->refcount) > 0)
+		drm_gem_gpuva_assert_lock_held(obj);
 
 	list_del_init(&va->gem.entry);
 	va->vm_bo = NULL;
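For illustration only, a minimal sketch of the lifecycle situation the
referenced commit describes (the example_* names are invented): in a GEM
free path the object's refcount has already reached zero, so VA teardown
triggered from there cannot take, or assert, the obj lock:

    /* hypothetical driver free path */
    static void example_gem_free_object(struct drm_gem_object *obj)
    {
            /*
             * The refcount is already 0 here and nobody else can reach
             * the object, so VA teardown (which ends up in
             * drm_gpuva_unlink()) runs without obj->resv held.
             */
            example_teardown_vmas(obj);
    }

Hence the destructor path above only asserts the lock while a reference
is still held, i.e. when kref_read(&obj->refcount) > 0.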
From patchwork Mon May 19 17:51:25 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891140

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Danilo Krummrich, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 02/40] drm/gpuvm: Allow VAs to hold soft reference to BOs
Date: Mon, 19 May 2025 10:51:25 -0700
Message-ID: <20250519175348.11924-3-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

Eases migration for drivers where VAs don't hold hard references to
their associated BO, avoiding reference loops.  In particular, msm uses
soft references to optimistically keep around mappings until the BO is
destroyed, which obviously won't work if the VA (the mapping) holds a
reference to the BO.

By making this a per-VM flag, we can use normal hard references for
mappings in a "VM_BIND" managed VM, but soft references in other cases,
such as kernel-internal VMs (for display scanout, etc).
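For illustration, a hypothetical sketch of how a driver might opt in
(the surrounding names and address range are invented; the flag is from
this patch and drm_gpuvm_init() is the existing gpuvm API): a
kernel-internal VM for display scanout uses weak references, while a
userspace VM_BIND VM would pass 0 for flags and keep hard references:

    static void example_create_kms_vm(struct drm_device *drm,
                                      struct drm_gpuvm *vm,
                                      struct drm_gem_object *r_obj,
                                      const struct drm_gpuvm_ops *ops)
    {
            /* mappings in this VM will not keep their BOs alive */
            drm_gpuvm_init(vm, "kms-vm", DRM_GPUVM_VA_WEAK_REF, drm,
                           r_obj, 0, SZ_4G, 0, 0, ops);
    }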
Cc: Danilo Krummrich
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gpuvm.c | 37 ++++++++++++++++++++++++++++++++-----
 include/drm/drm_gpuvm.h     | 19 +++++++++++++++++--
 2 files changed, 49 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/drm_gpuvm.c b/drivers/gpu/drm/drm_gpuvm.c
index 1e89a98caad4..892b62130ff8 100644
--- a/drivers/gpu/drm/drm_gpuvm.c
+++ b/drivers/gpu/drm/drm_gpuvm.c
@@ -1125,6 +1125,8 @@ __drm_gpuvm_prepare_objects(struct drm_gpuvm *gpuvm,
 	LIST_HEAD(extobjs);
 	int ret = 0;
 
+	WARN_ON(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
+
 	for_each_vm_bo_in_list(gpuvm, extobj, &extobjs, vm_bo) {
 		ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
 		if (ret)
@@ -1145,6 +1147,8 @@ drm_gpuvm_prepare_objects_locked(struct drm_gpuvm *gpuvm,
 	struct drm_gpuvm_bo *vm_bo;
 	int ret = 0;
 
+	WARN_ON(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
+
 	drm_gpuvm_resv_assert_held(gpuvm);
 	list_for_each_entry(vm_bo, &gpuvm->extobj.list, list.entry.extobj) {
 		ret = exec_prepare_obj(exec, vm_bo->obj, num_fences);
@@ -1386,6 +1390,7 @@ drm_gpuvm_validate_locked(struct drm_gpuvm *gpuvm, struct drm_exec *exec)
 	struct drm_gpuvm_bo *vm_bo, *next;
 	int ret = 0;
 
+	WARN_ON(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
 	drm_gpuvm_resv_assert_held(gpuvm);
 
 	list_for_each_entry_safe(vm_bo, next, &gpuvm->evict.list,
@@ -1482,7 +1487,9 @@ drm_gpuvm_bo_create(struct drm_gpuvm *gpuvm,
 
 	vm_bo->vm = drm_gpuvm_get(gpuvm);
 	vm_bo->obj = obj;
-	drm_gem_object_get(obj);
+
+	if (!(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF))
+		drm_gem_object_get(obj);
 
 	kref_init(&vm_bo->kref);
 	INIT_LIST_HEAD(&vm_bo->list.gpuva);
@@ -1504,16 +1511,22 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	const struct drm_gpuvm_ops *ops = gpuvm->ops;
 	struct drm_gem_object *obj = vm_bo->obj;
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);
+	bool unref = !(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF);
 
 	if (!lock)
 		drm_gpuvm_resv_assert_held(gpuvm);
 
+	if (kref_read(&obj->refcount) > 0) {
+		drm_gem_gpuva_assert_lock_held(obj);
+	} else {
+		WARN_ON(!(gpuvm->flags & DRM_GPUVM_VA_WEAK_REF));
+		WARN_ON(!list_empty(&vm_bo->list.entry.evict));
+		WARN_ON(!list_empty(&vm_bo->list.entry.extobj));
+	}
+
 	drm_gpuvm_bo_list_del(vm_bo, extobj, lock);
 	drm_gpuvm_bo_list_del(vm_bo, evict, lock);
 
-	if (kref_read(&obj->refcount) > 0)
-		drm_gem_gpuva_assert_lock_held(obj);
-
 	list_del(&vm_bo->list.entry.gem);
 
 	if (ops && ops->vm_bo_free)
@@ -1522,7 +1535,8 @@ drm_gpuvm_bo_destroy(struct kref *kref)
 	kfree(vm_bo);
 
 	drm_gpuvm_put(gpuvm);
-	drm_gem_object_put(obj);
+	if (unref)
+		drm_gem_object_put(obj);
 }
 
 /**
@@ -1678,6 +1692,12 @@ drm_gpuvm_bo_extobj_add(struct drm_gpuvm_bo *vm_bo)
 	if (!lock)
 		drm_gpuvm_resv_assert_held(gpuvm);
 
+	/* If the vm_bo doesn't hold a hard reference to the obj, then the
+	 * driver is responsible for object tracking.
+	 */
+	if (gpuvm->flags & DRM_GPUVM_VA_WEAK_REF)
+		return;
+
 	if (drm_gpuvm_is_extobj(gpuvm, vm_bo->obj))
 		drm_gpuvm_bo_list_add(vm_bo, extobj, lock);
 }
 
 /**
@@ -1699,6 +1719,13 @@ drm_gpuvm_bo_evict(struct drm_gpuvm_bo *vm_bo, bool evict)
 	bool lock = !drm_gpuvm_resv_protected(gpuvm);
 
 	dma_resv_assert_held(obj->resv);
+
+	/* If the vm_bo doesn't hold a hard reference to the obj, then the
+	 * driver must track evictions on its own.
+	 */
+	if (gpuvm->flags & DRM_GPUVM_VA_WEAK_REF)
+		return;
+
 	vm_bo->evicted = evict;
 
 	/* Can't add external objects to the evicted list directly if not using
diff --git a/include/drm/drm_gpuvm.h b/include/drm/drm_gpuvm.h
index 2a9629377633..652e0fb66413 100644
--- a/include/drm/drm_gpuvm.h
+++ b/include/drm/drm_gpuvm.h
@@ -205,10 +205,25 @@ enum drm_gpuvm_flags {
 	 */
 	DRM_GPUVM_RESV_PROTECTED = BIT(0),
 
+	/**
+	 * @DRM_GPUVM_VA_WEAK_REF:
+	 *
+	 * Flag indicating that the &drm_gpuva (or more correctly, the
+	 * &drm_gpuvm_bo) only holds a weak reference to the &drm_gem_object.
+	 * This mode is intended to ease migration to drm_gpuvm for drivers
+	 * where the GEM object holds a reference to the VA, rather than the
+	 * other way around.
+	 *
+	 * In this mode, drm_gpuvm does not track evicted or external objects.
+	 * It is intended for legacy mode, where the needed objects are attached
+	 * to the command submission ioctl, therefore this tracking is unneeded.
+	 */
+	DRM_GPUVM_VA_WEAK_REF = BIT(1),
+
 	/**
 	 * @DRM_GPUVM_USERBITS: user defined bits
 	 */
-	DRM_GPUVM_USERBITS = BIT(1),
+	DRM_GPUVM_USERBITS = BIT(2),
 };
 
 /**
@@ -651,7 +666,7 @@ struct drm_gpuvm_bo {
 
 	/**
 	 * @obj: The &drm_gem_object being mapped in @vm. This is a reference
-	 * counted pointer.
+	 * counted pointer, unless the &DRM_GPUVM_VA_WEAK_REF flag is set.
 	 */
 	struct drm_gem_object *obj;
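Sketched out, the reference loop that the weak-reference mode avoids in
the msm case looks roughly like this (a simplification for illustration,
not code from the series):

    /*
     *   msm BO ---owns---> VMA list (kept until the BO is destroyed)
     *     ^                   |
     *     +--- hard ref ------+   <-- what vm_bo->obj would normally hold
     *
     * With a hard reference, the BO refcount could never reach zero, so
     * the VMAs would never be torn down either.  DRM_GPUVM_VA_WEAK_REF
     * drops that back-edge; in exchange, the driver must guarantee the
     * vm_bo is destroyed before the BO itself is freed.
     */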
From patchwork Mon May 19 17:51:26 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892070

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Maarten Lankhorst, Maxime Ripard,
 Thomas Zimmermann, David Airlie, Simona Vetter, Rob Clark,
 Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 03/40] drm/gem: Add ww_acquire_ctx support to drm_gem_lru_scan()
Date: Mon, 19 May 2025 10:51:26 -0700
Message-ID: <20250519175348.11924-4-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

If the callback is going to have to attempt to grab more locks, it is
useful to have a ww_acquire_ctx to avoid locking order problems.

Why not use the drm_exec helper instead?
Mainly because (a) where ww_acquire_init() is called is awkward, and
(b) we don't really need to retry after backoff, we can just move on to
the next object.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/drm_gem.c              | 14 +++++++++++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 24 +++++++++++++-----------
 include/drm/drm_gem.h                  | 10 ++++++----
 3 files changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index c6240bab3fa5..c8f983571c70 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1460,12 +1460,14 @@ EXPORT_SYMBOL(drm_gem_lru_move_tail);
  * @nr_to_scan: The number of pages to try to reclaim
  * @remaining: The number of pages left to reclaim, should be initialized by caller
  * @shrink: Callback to try to shrink/reclaim the object.
+ * @ticket: Optional ww_acquire_ctx context to use for locking
  */
 unsigned long
 drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 unsigned int nr_to_scan,
 		 unsigned long *remaining,
-		 bool (*shrink)(struct drm_gem_object *obj))
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket)
 {
 	struct drm_gem_lru still_in_lru;
 	struct drm_gem_object *obj;
@@ -1498,17 +1500,20 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 		 */
 		mutex_unlock(lru->lock);
 
+		if (ticket)
+			ww_acquire_init(ticket, &reservation_ww_class);
+
 		/*
 		 * Note that this still needs to be trylock, since we can
 		 * hit shrinker in response to trying to get backing pages
 		 * for this obj (ie. while it's lock is already held)
 		 */
-		if (!dma_resv_trylock(obj->resv)) {
+		if (!ww_mutex_trylock(&obj->resv->lock, ticket)) {
 			*remaining += obj->size >> PAGE_SHIFT;
 			goto tail;
 		}
 
-		if (shrink(obj)) {
+		if (shrink(obj, ticket)) {
 			freed += obj->size >> PAGE_SHIFT;
 
 			/*
@@ -1522,6 +1527,9 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 
 		dma_resv_unlock(obj->resv);
 
+		if (ticket)
+			ww_acquire_fini(ticket);
+
 tail:
 		drm_gem_object_put(obj);
 		mutex_lock(lru->lock);
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 07ca4ddfe4e3..de185fc34084 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -44,7 +44,7 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
 }
 
 static bool
-purge(struct drm_gem_object *obj)
+purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_purgeable(to_msm_bo(obj)))
 		return false;
@@ -58,7 +58,7 @@ purge(struct drm_gem_object *obj)
 }
 
 static bool
-evict(struct drm_gem_object *obj)
+evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (is_unevictable(to_msm_bo(obj)))
 		return false;
@@ -79,21 +79,21 @@ wait_for_idle(struct drm_gem_object *obj)
 }
 
 static bool
-active_purge(struct drm_gem_object *obj)
+active_purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;
 
-	return purge(obj);
+	return purge(obj, ticket);
 }
 
 static bool
-active_evict(struct drm_gem_object *obj)
+active_evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!wait_for_idle(obj))
 		return false;
 
-	return evict(obj);
+	return evict(obj, ticket);
 }
 
 static unsigned long
@@ -102,7 +102,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 	struct msm_drm_private *priv = shrinker->private_data;
 	struct {
 		struct drm_gem_lru *lru;
-		bool (*shrink)(struct drm_gem_object *obj);
+		bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket);
 		bool cond;
 		unsigned long freed;
 		unsigned long remaining;
@@ -122,8 +122,9 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc)
 			continue;
 
 		stages[i].freed =
 			drm_gem_lru_scan(stages[i].lru, nr,
-					 &stages[i].remaining,
-					 stages[i].shrink);
+					 &stages[i].remaining,
+					 stages[i].shrink,
+					 NULL);
 		nr -= stages[i].freed;
 		freed += stages[i].freed;
 		remaining += stages[i].remaining;
@@ -164,7 +165,7 @@ msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan)
 static const int vmap_shrink_limit = 15;
 
 static bool
-vmap_shrink(struct drm_gem_object *obj)
+vmap_shrink(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 {
 	if (!is_vunmapable(to_msm_bo(obj)))
 		return false;
@@ -192,7 +193,8 @@ msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr)
 		unmapped += drm_gem_lru_scan(lrus[idx],
 					     vmap_shrink_limit - unmapped,
 					     &remaining,
-					     vmap_shrink);
+					     vmap_shrink,
+					     NULL);
 	}
 
 	*(unsigned long *)ptr += unmapped;
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index bcd54020d6ba..b611a9482abf 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -556,10 +556,12 @@ void drm_gem_lru_init(struct drm_gem_lru *lru, struct mutex *lock);
 void drm_gem_lru_remove(struct drm_gem_object *obj);
 void drm_gem_lru_move_tail_locked(struct drm_gem_lru *lru, struct drm_gem_object *obj);
 void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj);
-unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
-			       unsigned int nr_to_scan,
-			       unsigned long *remaining,
-			       bool (*shrink)(struct drm_gem_object *obj));
+unsigned long
+drm_gem_lru_scan(struct drm_gem_lru *lru,
+		 unsigned int nr_to_scan,
+		 unsigned long *remaining,
+		 bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket),
+		 struct ww_acquire_ctx *ticket);
 
 int drm_gem_evict(struct drm_gem_object *obj);
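For illustration, a hypothetical shrink callback using the new ticket
parameter (the example_* names are invented; dma_resv_lock()/unlock()
are the existing reservation API): with the same ww_acquire_ctx, a
second reservation lock can be taken without lock-order inversion, and
contention is handled by simply skipping the object:

    static bool example_evict(struct drm_gem_object *obj,
                              struct ww_acquire_ctx *ticket)
    {
            /* invented helper returning another BO we must also lock */
            struct drm_gem_object *extra = example_vm_root_bo(obj);
            bool progress;

            /*
             * obj->resv is already held by drm_gem_lru_scan(); taking
             * the second resv under the same ticket keeps ordering
             * sane, and on contention/-EDEADLK we just move on to the
             * next LRU entry rather than doing a drm_exec-style
             * backoff and retry.
             */
            if (dma_resv_lock(extra->resv, ticket))
                    return false;

            progress = example_do_evict(obj);       /* invented */

            dma_resv_unlock(extra->resv);
            return progress;
    }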
From patchwork Mon May 19 17:51:27 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891139

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Philipp Stanner, Danilo Krummrich,
 Matthew Brost, Philipp Stanner, Christian König, Maarten Lankhorst,
 Maxime Ripard, Thomas Zimmermann, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 04/40] drm/sched: Add enqueue credit limit
Date: Mon, 19 May 2025 10:51:27 -0700
Message-ID: <20250519175348.11924-5-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>
From: Rob Clark

Similar to the existing credit limit mechanism, but applying to jobs
enqueued to the scheduler but not yet run.

The use case is to put an upper bound on preallocated, and potentially
unneeded, pgtable pages.  When this limit is exceeded, pushing new jobs
will block until the count drops below the limit.

Cc: Philipp Stanner
Cc: Danilo Krummrich
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/scheduler/sched_entity.c | 19 +++++++++++++++++--
 drivers/gpu/drm/scheduler/sched_main.c   |  3 +++
 include/drm/gpu_scheduler.h              | 24 +++++++++++++++++++++++-
 3 files changed, 43 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
index bd39db7bb240..8e6b12563348 100644
--- a/drivers/gpu/drm/scheduler/sched_entity.c
+++ b/drivers/gpu/drm/scheduler/sched_entity.c
@@ -579,12 +579,25 @@ void drm_sched_entity_select_rq(struct drm_sched_entity *entity)
  * fence sequence number this function should be called with drm_sched_job_arm()
  * under common lock for the struct drm_sched_entity that was set up for
  * @sched_job in drm_sched_job_init().
+ *
+ * If enqueue_credit_limit is used, this can return -ERESTARTSYS if the system
+ * call is interrupted.
  */
-void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
+int drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 {
 	struct drm_sched_entity *entity = sched_job->entity;
+	struct drm_gpu_scheduler *sched = sched_job->sched;
 	bool first;
 	ktime_t submit_ts;
+	int ret;
+
+	ret = wait_event_interruptible(
+			sched->job_scheduled,
+			atomic_read(&sched->enqueue_credit_count) <=
+			sched->enqueue_credit_limit);
+	if (ret)
+		return ret;
+	atomic_add(sched_job->enqueue_credits, &sched->enqueue_credit_count);
 
 	trace_drm_sched_job(sched_job, entity);
 	atomic_inc(entity->rq->sched->score);
@@ -609,7 +622,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 			spin_unlock(&entity->lock);
 			DRM_ERROR("Trying to push to a killed entity\n");
-			return;
+			return -EINVAL;
 		}
 
 		rq = entity->rq;
@@ -626,5 +639,7 @@ void drm_sched_entity_push_job(struct drm_sched_job *sched_job)
 		drm_sched_wakeup(sched);
 	}
+
+	return 0;
 }
 EXPORT_SYMBOL(drm_sched_entity_push_job);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index cda1216adfa4..5f812253656a 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -1221,6 +1221,7 @@ static void drm_sched_run_job_work(struct work_struct *w)
 	trace_drm_run_job(sched_job, entity);
 	fence = sched->ops->run_job(sched_job);
+	atomic_sub(sched_job->enqueue_credits, &sched->enqueue_credit_count);
 	complete_all(&entity->entity_idle);
 	drm_sched_fence_scheduled(s_fence, fence);
 
@@ -1257,6 +1258,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 	sched->ops = args->ops;
 	sched->credit_limit = args->credit_limit;
+	sched->enqueue_credit_limit = args->enqueue_credit_limit;
 	sched->name = args->name;
 	sched->timeout = args->timeout;
 	sched->hang_limit = args->hang_limit;
@@ -1312,6 +1314,7 @@ int drm_sched_init(struct drm_gpu_scheduler *sched, const struct drm_sched_init_
 	INIT_LIST_HEAD(&sched->pending_list);
 	spin_lock_init(&sched->job_list_lock);
 	atomic_set(&sched->credit_count, 0);
+	atomic_set(&sched->enqueue_credit_count, 0);
 	INIT_DELAYED_WORK(&sched->work_tdr, drm_sched_job_timedout);
 	INIT_WORK(&sched->work_run_job, drm_sched_run_job_work);
 	INIT_WORK(&sched->work_free_job, drm_sched_free_job_work);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index da64232c989d..8ec5000f81e1 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -366,6 +366,19 @@ struct drm_sched_job {
 	enum drm_sched_priority		s_priority;
 	u32				credits;
 
+	/**
+	 * @enqueue_credits: the number of enqueue credits this job
+	 * contributes to the drm_gpu_scheduler.enqueue_credit_count.
+	 *
+	 * The (optional) @enqueue_credits should be set before calling
+	 * drm_sched_entity_push_job().  When the sum over all jobs pushed
+	 * to the entity, but not yet having had their run_job() callback
+	 * called, exceeds @drm_gpu_scheduler.enqueue_credit_limit,
+	 * drm_sched_entity_push_job() will block until the count drops
+	 * back below the limit, providing a way to throttle the number
+	 * of queued, but not yet run, jobs.
+	 */
+	u32				enqueue_credits;
 	/** @last_dependency: tracks @dependencies as they signal */
 	unsigned int			last_dependency;
 	atomic_t			karma;
@@ -485,6 +498,10 @@ struct drm_sched_backend_ops {
  * @ops: backend operations provided by the driver.
  * @credit_limit: the credit limit of this scheduler
  * @credit_count: the current credit count of this scheduler
+ * @enqueue_credit_limit: the credit limit of jobs pushed to scheduler and not
+ *                        yet run
+ * @enqueue_credit_count: the current credit count of jobs pushed to scheduler
+ *                        but not yet run
  * @timeout: the time after which a job is removed from the scheduler.
  * @name: name of the ring for which this scheduler is being used.
  * @num_rqs: Number of run-queues. This is at most DRM_SCHED_PRIORITY_COUNT,
@@ -518,6 +535,8 @@ struct drm_gpu_scheduler {
 	const struct drm_sched_backend_ops	*ops;
 	u32				credit_limit;
 	atomic_t			credit_count;
+	u32				enqueue_credit_limit;
+	atomic_t			enqueue_credit_count;
 	long				timeout;
 	const char			*name;
 	u32				num_rqs;
@@ -550,6 +569,8 @@ struct drm_gpu_scheduler {
 * @num_rqs: Number of run-queues. This may be at most DRM_SCHED_PRIORITY_COUNT,
 *	     as there's usually one run-queue per priority, but may be less.
 * @credit_limit: the number of credits this scheduler can hold from all jobs
+ * @enqueue_credit_limit: the number of credits that can be enqueued before
+ *	     drm_sched_entity_push_job() blocks
 * @hang_limit: number of times to allow a job to hang before dropping it.
 *	       This mechanism is DEPRECATED. Set it to 0.
 * @timeout: timeout value in jiffies for submitted jobs.
@@ -564,6 +585,7 @@ struct drm_sched_init_args {
 	struct workqueue_struct *timeout_wq;
 	u32 num_rqs;
 	u32 credit_limit;
+	u32 enqueue_credit_limit;
 	unsigned int hang_limit;
 	long timeout;
 	atomic_t *score;
@@ -600,7 +622,7 @@ int drm_sched_job_init(struct drm_sched_job *job,
 		       struct drm_sched_entity *entity,
 		       u32 credits, void *owner);
 void drm_sched_job_arm(struct drm_sched_job *job);
-void drm_sched_entity_push_job(struct drm_sched_job *sched_job);
+int drm_sched_entity_push_job(struct drm_sched_job *sched_job);
 int drm_sched_job_add_dependency(struct drm_sched_job *job,
 				 struct dma_fence *fence);
 int drm_sched_job_add_syncobj_dependency(struct drm_sched_job *job,
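For illustration, a hypothetical submit path using the new mechanism
(the example_* names are invented; drm_sched_job_arm() and
drm_sched_entity_push_job() are the real API, with push_job returning
int as of this patch): the job's enqueue credits are set to the number
of preallocated pgtable pages before pushing, and the push can now fail:

    static int example_submit(struct drm_sched_job *job, u32 prealloc_pages)
    {
            int ret;

            /* counted against sched->enqueue_credit_limit until run_job() */
            job->enqueue_credits = prealloc_pages;

            drm_sched_job_arm(job);

            /* blocks while over the limit; -ERESTARTSYS if interrupted */
            ret = drm_sched_entity_push_job(job);
            if (ret)
                    return ret;     /* caller unwinds the submit */

            return 0;
    }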
From patchwork Mon May 19 17:51:28 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892069

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Robin Murphy, Will Deacon, Joerg Roedel,
 Jason Gunthorpe, Nicolin Chen, Kevin Tian, Joao Martins,
 linux-arm-kernel@lists.infradead.org (moderated list:ARM SMMU DRIVERS),
 iommu@lists.linux.dev (open list:IOMMU SUBSYSTEM),
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 05/40] iommu/io-pgtable-arm: Add quirk to quiet WARN_ON()
Date: Mon, 19 May 2025 10:51:28 -0700
Message-ID: <20250519175348.11924-6-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

In situations where the mapping/unmapping sequence can be controlled by
userspace, attempting to map over a region that has not yet been
unmapped is an error, but not something that should spam dmesg.

Now that there is a quirk, we can also drop the selftest_running flag,
and use the quirk instead for selftests.
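For illustration, a sketch of how a GPU driver might opt in when
allocating its pagetables (dev, cookie, and the TLB ops are
placeholders; the cfg fields and alloc_io_pgtable_ops() are the
existing io-pgtable API):

    struct io_pgtable_cfg cfg = {
            .quirks         = IO_PGTABLE_QUIRK_NO_WARN_ON,
            .pgsize_bitmap  = SZ_4K | SZ_2M | SZ_1G,
            .ias            = 48,
            .oas            = 48,
            .coherent_walk  = true,
            .tlb            = &example_tlb_ops,     /* placeholder */
            .iommu_dev      = dev,
    };
    struct io_pgtable_ops *ops =
            alloc_io_pgtable_ops(ARM_64_LPAE_S1, &cfg, cookie);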
Signed-off-by: Rob Clark
Acked-by: Robin Murphy
Signed-off-by: Rob Clark
---
 drivers/iommu/io-pgtable-arm.c | 27 ++++++++++++++-------------
 include/linux/io-pgtable.h     |  8 ++++++++
 2 files changed, 22 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index f27965caf6a1..a535d88f8943 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -253,8 +253,6 @@ static inline bool arm_lpae_concat_mandatory(struct io_pgtable_cfg *cfg,
 	       (data->start_level == 1) && (oas == 40);
 }
 
-static bool selftest_running = false;
-
 static dma_addr_t __arm_lpae_dma_addr(void *pages)
 {
 	return (dma_addr_t)virt_to_phys(pages);
@@ -373,7 +371,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
 	for (i = 0; i < num_entries; i++)
 		if (iopte_leaf(ptep[i], lvl, data->iop.fmt)) {
 			/* We require an unmap first */
-			WARN_ON(!selftest_running);
+			WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 			return -EEXIST;
 		} else if (iopte_type(ptep[i]) == ARM_LPAE_PTE_TYPE_TABLE) {
 			/*
@@ -475,7 +473,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
 		cptep = iopte_deref(pte, data);
 	} else if (pte) {
 		/* We require an unmap first */
-		WARN_ON(!selftest_running);
+		WARN_ON(!(cfg->quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 		return -EEXIST;
 	}
 
@@ -649,8 +647,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 	unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
 	ptep += unmap_idx_start;
 	pte = READ_ONCE(*ptep);
-	if (WARN_ON(!pte))
-		return 0;
+	if (!pte) {
+		WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
+		return -ENOENT;
+	}
 
 	/* If the size matches this level, we're in the right place */
 	if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
@@ -660,8 +660,10 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
 		/* Find and handle non-leaf entries */
 		for (i = 0; i < num_entries; i++) {
 			pte = READ_ONCE(ptep[i]);
-			if (WARN_ON(!pte))
+			if (!pte) {
+				WARN_ON(!(data->iop.cfg.quirks & IO_PGTABLE_QUIRK_NO_WARN_ON));
 				break;
+			}
 
 			if (!iopte_leaf(pte, lvl, iop->fmt)) {
 				__arm_lpae_clear_pte(&ptep[i], &iop->cfg, 1);
@@ -976,7 +978,8 @@ arm_64_lpae_alloc_pgtable_s1(struct io_pgtable_cfg *cfg, void *cookie)
 	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
 			    IO_PGTABLE_QUIRK_ARM_TTBR1 |
 			    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA |
-			    IO_PGTABLE_QUIRK_ARM_HD))
+			    IO_PGTABLE_QUIRK_ARM_HD |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1079,7 +1082,8 @@ arm_64_lpae_alloc_pgtable_s2(struct io_pgtable_cfg *cfg, void *cookie)
 	struct arm_lpae_io_pgtable *data;
 	typeof(&cfg->arm_lpae_s2_cfg.vtcr) vtcr = &cfg->arm_lpae_s2_cfg.vtcr;
 
-	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB))
+	if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_S2FWB |
+			    IO_PGTABLE_QUIRK_NO_WARN_ON))
 		return NULL;
 
 	data = arm_lpae_alloc_pgtable(cfg);
@@ -1320,7 +1324,6 @@ static void __init arm_lpae_dump_ops(struct io_pgtable_ops *ops)
 #define __FAIL(ops, i)	({						\
 		WARN(1, "selftest: test failed for fmt idx %d\n", (i));	\
 		arm_lpae_dump_ops(ops);					\
-		selftest_running = false;				\
 		-EFAULT;						\
 })
 
@@ -1336,8 +1339,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 	size_t size, mapped;
 	struct io_pgtable_ops *ops;
 
-	selftest_running = true;
-
 	for (i = 0; i < ARRAY_SIZE(fmts); ++i) {
 		cfg_cookie = cfg;
 		ops = alloc_io_pgtable_ops(fmts[i], cfg, cfg);
@@ -1426,7 +1427,6 @@ static int __init arm_lpae_run_tests(struct io_pgtable_cfg *cfg)
 		free_io_pgtable_ops(ops);
 	}
 
-	selftest_running = false;
 	return 0;
 }
@@ -1448,6 +1448,7 @@ static int __init arm_lpae_do_selftests(void)
 		.tlb = &dummy_tlb_ops,
 		.coherent_walk = true,
 		.iommu_dev = &dev,
+		.quirks = IO_PGTABLE_QUIRK_NO_WARN_ON,
 	};
 
 	/* __arm_lpae_alloc_pages() merely needs dev_to_node() to work */
diff --git a/include/linux/io-pgtable.h b/include/linux/io-pgtable.h
index bba2a51c87d2..639b8f4fb87d 100644
--- a/include/linux/io-pgtable.h
+++ b/include/linux/io-pgtable.h
@@ -88,6 +88,13 @@ struct io_pgtable_cfg {
 	 *
 	 * IO_PGTABLE_QUIRK_ARM_HD: Enables dirty tracking in stage 1 pagetable.
 	 * IO_PGTABLE_QUIRK_ARM_S2FWB: Use the FWB format for the MemAttrs bits
+	 *
+	 * IO_PGTABLE_QUIRK_NO_WARN_ON: Do not WARN_ON() on conflicting
+	 *	mappings, but silently return -EEXIST.  Normally an attempt
+	 *	to map over an existing mapping would indicate some sort of
+	 *	kernel bug, which would justify the WARN_ON().  But for GPU
+	 *	drivers, this could be under control of userspace, which
+	 *	deserves an error return but should not spam dmesg.
 	 */
 	#define IO_PGTABLE_QUIRK_ARM_NS			BIT(0)
 	#define IO_PGTABLE_QUIRK_NO_PERMS		BIT(1)
@@ -97,6 +104,7 @@ struct io_pgtable_cfg {
 	#define IO_PGTABLE_QUIRK_ARM_OUTER_WBWA	BIT(6)
 	#define IO_PGTABLE_QUIRK_ARM_HD			BIT(7)
 	#define IO_PGTABLE_QUIRK_ARM_S2FWB		BIT(8)
+	#define IO_PGTABLE_QUIRK_NO_WARN_ON		BIT(9)
 	unsigned long quirks;
 	unsigned long pgsize_bitmap;
 	unsigned int ias;
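With the quirk set, a conflicting map comes back as a plain error that
the driver can forward to userspace.  A hypothetical map path for
illustration (names invented; map_pages() is the existing io-pgtable
callback):

    static int example_vm_map_page(struct io_pgtable_ops *ops,
                                   unsigned long iova, phys_addr_t paddr,
                                   int prot)
    {
            size_t mapped = 0;

            /*
             * If userspace asked to map over a live mapping, this now
             * quietly returns -EEXIST instead of a WARN_ON() splat,
             * and the error propagates back to the submit ioctl.
             */
            return ops->map_pages(ops, iova, paddr, SZ_4K, 1, prot,
                                  GFP_KERNEL, &mapped);
    }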
From patchwork Mon May 19 17:51:29 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891138

From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Dmitry Baryshkov, Rob Clark, Sean Paul,
 Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten,
 David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 06/40] drm/msm: Rename msm_file_private -> msm_context
Date: Mon, 19 May 2025 10:51:29 -0700
Message-ID: <20250519175348.11924-7-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

This is a more descriptive name.
Signed-off-by: Rob Clark
Reviewed-by: Dmitry Baryshkov
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  2 +-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c |  6 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.h |  4 +--
 drivers/gpu/drm/msm/msm_drv.c           | 14 ++++-----
 drivers/gpu/drm/msm/msm_gem.c           |  2 +-
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.c           |  4 +--
 drivers/gpu/drm/msm/msm_gpu.h           | 39 ++++++++++++-------------
 drivers/gpu/drm/msm/msm_submitqueue.c   | 27 +++++++++--------
 9 files changed, 49 insertions(+), 51 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index fd64af6d0440..620a26638535 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -111,7 +111,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 		struct msm_ringbuffer *ring, struct msm_gem_submit *submit)
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
-	struct msm_file_private *ctx = submit->queue->ctx;
+	struct msm_context *ctx = submit->queue->ctx;
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index d04657b77857..93fe26009511 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -356,7 +356,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags,
 	return 0;
 }
 
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len)
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
@@ -444,7 +444,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	}
 }
 
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len)
 {
 	struct drm_device *drm = gpu->dev;
@@ -490,7 +490,7 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
 	case MSM_PARAM_SYSPROF:
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
-		return msm_file_private_set_sysprof(ctx, gpu, value);
+		return msm_context_set_sysprof(ctx, gpu, value);
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
index 2366a57b280f..fed9516da365 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h
@@ -603,9 +603,9 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu)
 /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */
 #define ADRENO_VM_START 0x100000000ULL
 u64 adreno_private_address_space_size(struct msm_gpu *gpu);
-int adreno_get_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t *value, uint32_t *len);
-int adreno_set_param(struct msm_gpu *gpu, struct msm_file_private *ctx,
+int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		     uint32_t param, uint64_t value, uint32_t len);
 const struct firmware *adreno_request_fw(struct adreno_gpu *adreno_gpu,
 					 const char *fwname);
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index c3588dc9e537..29ca24548c67 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
b/drivers/gpu/drm/msm/msm_drv.c @@ -333,7 +333,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) { static atomic_t ident = ATOMIC_INIT(0); struct msm_drm_private *priv = dev->dev_private; - struct msm_file_private *ctx; + struct msm_context *ctx; ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); if (!ctx) @@ -363,23 +363,23 @@ static int msm_open(struct drm_device *dev, struct drm_file *file) return context_init(dev, file); } -static void context_close(struct msm_file_private *ctx) +static void context_close(struct msm_context *ctx) { msm_submitqueue_close(ctx); - msm_file_private_put(ctx); + msm_context_put(ctx); } static void msm_postclose(struct drm_device *dev, struct drm_file *file) { struct msm_drm_private *priv = dev->dev_private; - struct msm_file_private *ctx = file->driver_priv; + struct msm_context *ctx = file->driver_priv; /* * It is not possible to set sysprof param to non-zero if gpu * is not initialized: */ if (priv->gpu) - msm_file_private_set_sysprof(ctx, priv->gpu, 0); + msm_context_set_sysprof(ctx, priv->gpu, 0); context_close(ctx); } @@ -511,7 +511,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev, uint64_t *iova) { struct msm_drm_private *priv = dev->dev_private; - struct msm_file_private *ctx = file->driver_priv; + struct msm_context *ctx = file->driver_priv; if (!priv->gpu) return -EINVAL; @@ -531,7 +531,7 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, uint64_t iova) { struct msm_drm_private *priv = dev->dev_private; - struct msm_file_private *ctx = file->driver_priv; + struct msm_context *ctx = file->driver_priv; if (!priv->gpu) return -EINVAL; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index d2f38e1df510..fdeb6cf7eeb5 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -48,7 +48,7 @@ static void update_device_mem(struct msm_drm_private *priv, ssize_t size) static void update_ctx_mem(struct drm_file *file, ssize_t size) { - struct msm_file_private *ctx = file->driver_priv; + struct msm_context *ctx = file->driver_priv; uint64_t ctx_mem = atomic64_add_return(size, &ctx->ctx_mem); rcu_read_lock(); /* Locks file->pid! 
*/ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index d4f71bb54e84..3aabf7f1da6d 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -651,7 +651,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, { struct msm_drm_private *priv = dev->dev_private; struct drm_msm_gem_submit *args = data; - struct msm_file_private *ctx = file->driver_priv; + struct msm_context *ctx = file->driver_priv; struct msm_gem_submit *submit = NULL; struct msm_gpu *gpu = priv->gpu; struct msm_gpu_submitqueue *queue; diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index c380d9d9f5af..d786fcfad62f 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -148,7 +148,7 @@ int msm_gpu_pm_suspend(struct msm_gpu *gpu) return 0; } -void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx, +void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx, struct drm_printer *p) { drm_printf(p, "drm-engine-gpu:\t%llu ns\n", ctx->elapsed_ns); @@ -339,7 +339,7 @@ static void retire_submits(struct msm_gpu *gpu); static void get_comm_cmdline(struct msm_gem_submit *submit, char **comm, char **cmd) { - struct msm_file_private *ctx = submit->queue->ctx; + struct msm_context *ctx = submit->queue->ctx; struct task_struct *task; WARN_ON(!mutex_is_locked(&submit->gpu->lock)); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index e25009150579..957d6fb3469d 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -22,7 +22,7 @@ struct msm_gem_submit; struct msm_gpu_perfcntr; struct msm_gpu_state; -struct msm_file_private; +struct msm_context; struct msm_gpu_config { const char *ioname; @@ -44,9 +44,9 @@ struct msm_gpu_config { * + z180_gpu */ struct msm_gpu_funcs { - int (*get_param)(struct msm_gpu *gpu, struct msm_file_private *ctx, + int (*get_param)(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len); - int (*set_param)(struct msm_gpu *gpu, struct msm_file_private *ctx, + int (*set_param)(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t value, uint32_t len); int (*hw_init)(struct msm_gpu *gpu); @@ -347,7 +347,7 @@ struct msm_gpu_perfcntr { #define NR_SCHED_PRIORITIES (1 + DRM_SCHED_PRIORITY_LOW - DRM_SCHED_PRIORITY_HIGH) /** - * struct msm_file_private - per-drm_file context + * struct msm_context - per-drm_file context * * @queuelock: synchronizes access to submitqueues list * @submitqueues: list of &msm_gpu_submitqueue created by userspace @@ -357,7 +357,7 @@ struct msm_gpu_perfcntr { * @ref: reference count * @seqno: unique per process seqno */ -struct msm_file_private { +struct msm_context { rwlock_t queuelock; struct list_head submitqueues; int queueid; @@ -512,7 +512,7 @@ struct msm_gpu_submitqueue { u32 ring_nr; int faults; uint32_t last_fence; - struct msm_file_private *ctx; + struct msm_context *ctx; struct list_head node; struct idr fence_idr; struct spinlock idr_lock; @@ -608,33 +608,32 @@ static inline void gpu_write64(struct msm_gpu *gpu, u32 reg, u64 val) int msm_gpu_pm_suspend(struct msm_gpu *gpu); int msm_gpu_pm_resume(struct msm_gpu *gpu); -void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_file_private *ctx, +void msm_gpu_show_fdinfo(struct msm_gpu *gpu, struct msm_context *ctx, struct drm_printer *p); -int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx); -struct msm_gpu_submitqueue 
*msm_submitqueue_get(struct msm_file_private *ctx, +int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx); +struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx, u32 id); int msm_submitqueue_create(struct drm_device *drm, - struct msm_file_private *ctx, + struct msm_context *ctx, u32 prio, u32 flags, u32 *id); -int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, +int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx, struct drm_msm_submitqueue_query *args); -int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id); -void msm_submitqueue_close(struct msm_file_private *ctx); +int msm_submitqueue_remove(struct msm_context *ctx, u32 id); +void msm_submitqueue_close(struct msm_context *ctx); void msm_submitqueue_destroy(struct kref *kref); -int msm_file_private_set_sysprof(struct msm_file_private *ctx, - struct msm_gpu *gpu, int sysprof); -void __msm_file_private_destroy(struct kref *kref); +int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof); +void __msm_context_destroy(struct kref *kref); -static inline void msm_file_private_put(struct msm_file_private *ctx) +static inline void msm_context_put(struct msm_context *ctx) { - kref_put(&ctx->ref, __msm_file_private_destroy); + kref_put(&ctx->ref, __msm_context_destroy); } -static inline struct msm_file_private *msm_file_private_get( - struct msm_file_private *ctx) +static inline struct msm_context *msm_context_get( + struct msm_context *ctx) { kref_get(&ctx->ref); return ctx; diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 7fed1de63b5d..1acc0fe36353 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -7,8 +7,7 @@ #include "msm_gpu.h" -int msm_file_private_set_sysprof(struct msm_file_private *ctx, - struct msm_gpu *gpu, int sysprof) +int msm_context_set_sysprof(struct msm_context *ctx, struct msm_gpu *gpu, int sysprof) { /* * Since pm_runtime and sysprof_active are both refcounts, we @@ -46,10 +45,10 @@ int msm_file_private_set_sysprof(struct msm_file_private *ctx, return 0; } -void __msm_file_private_destroy(struct kref *kref) +void __msm_context_destroy(struct kref *kref) { - struct msm_file_private *ctx = container_of(kref, - struct msm_file_private, ref); + struct msm_context *ctx = container_of(kref, + struct msm_context, ref); int i; for (i = 0; i < ARRAY_SIZE(ctx->entities); i++) { @@ -73,12 +72,12 @@ void msm_submitqueue_destroy(struct kref *kref) idr_destroy(&queue->fence_idr); - msm_file_private_put(queue->ctx); + msm_context_put(queue->ctx); kfree(queue); } -struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, +struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx, u32 id) { struct msm_gpu_submitqueue *entry; @@ -101,7 +100,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_file_private *ctx, return NULL; } -void msm_submitqueue_close(struct msm_file_private *ctx) +void msm_submitqueue_close(struct msm_context *ctx) { struct msm_gpu_submitqueue *entry, *tmp; @@ -119,7 +118,7 @@ void msm_submitqueue_close(struct msm_file_private *ctx) } static struct drm_sched_entity * -get_sched_entity(struct msm_file_private *ctx, struct msm_ringbuffer *ring, +get_sched_entity(struct msm_context *ctx, struct msm_ringbuffer *ring, unsigned ring_nr, enum drm_sched_priority sched_prio) { static DEFINE_MUTEX(entity_lock); @@ -155,7 +154,7 @@ get_sched_entity(struct msm_file_private 
*ctx, struct msm_ringbuffer *ring, return ctx->entities[idx]; } -int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, +int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, u32 prio, u32 flags, u32 *id) { struct msm_drm_private *priv = drm->dev_private; @@ -200,7 +199,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, write_lock(&ctx->queuelock); - queue->ctx = msm_file_private_get(ctx); + queue->ctx = msm_context_get(ctx); queue->id = ctx->queueid++; if (id) @@ -221,7 +220,7 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_file_private *ctx, * Create the default submit-queue (id==0), used for backwards compatibility * for userspace that pre-dates the introduction of submitqueues. */ -int msm_submitqueue_init(struct drm_device *drm, struct msm_file_private *ctx) +int msm_submitqueue_init(struct drm_device *drm, struct msm_context *ctx) { struct msm_drm_private *priv = drm->dev_private; int default_prio, max_priority; @@ -261,7 +260,7 @@ static int msm_submitqueue_query_faults(struct msm_gpu_submitqueue *queue, return ret ? -EFAULT : 0; } -int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, +int msm_submitqueue_query(struct drm_device *drm, struct msm_context *ctx, struct drm_msm_submitqueue_query *args) { struct msm_gpu_submitqueue *queue; @@ -282,7 +281,7 @@ int msm_submitqueue_query(struct drm_device *drm, struct msm_file_private *ctx, return ret; } -int msm_submitqueue_remove(struct msm_file_private *ctx, u32 id) +int msm_submitqueue_remove(struct msm_context *ctx, u32 id) { struct msm_gpu_submitqueue *entry;

From patchwork Mon May 19 17:51:30 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Dmitry Baryshkov, Sean Paul, Konrad Dybcio, Abhinav Kumar, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 07/40] drm/msm: Improve msm_context comments
Date: Mon, 19 May 2025 10:51:30 -0700
Message-ID: <20250519175348.11924-8-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

Just some tidying up.
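The diff that follows converts struct msm_context from one aggregate kernel-doc block listing every member to inline per-member comments. For orientation, this is the upstream inline-member kernel-doc style being adopted; the struct below is illustrative only, not from the patch:

	/**
	 * struct foo - one-line description of the struct
	 */
	struct foo {
		/** @bar: short single-line member description */
		int bar;

		/**
		 * @baz:
		 *
		 * Longer member description that needs more than one
		 * line, as used for @seqno and @sysprof below.
		 */
		int baz;
	};

The inline form keeps each description next to its member, so it is less likely to go stale as fields are added.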
Signed-off-by: Rob Clark Reviewed-by: Dmitry Baryshkov Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gpu.h | 44 +++++++++++++++++++++++------------ 1 file changed, 29 insertions(+), 15 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 957d6fb3469d..c699ce0c557b 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -348,25 +348,39 @@ struct msm_gpu_perfcntr { /** * struct msm_context - per-drm_file context - * - * @queuelock: synchronizes access to submitqueues list - * @submitqueues: list of &msm_gpu_submitqueue created by userspace - * @queueid: counter incremented each time a submitqueue is created, - * used to assign &msm_gpu_submitqueue.id - * @aspace: the per-process GPU address-space - * @ref: reference count - * @seqno: unique per process seqno */ struct msm_context { + /** @queuelock: synchronizes access to submitqueues list */ rwlock_t queuelock; + + /** @submitqueues: list of &msm_gpu_submitqueue created by userspace */ struct list_head submitqueues; + + /** + * @queueid: + * + * Counter incremented each time a submitqueue is created, used to + * assign &msm_gpu_submitqueue.id + */ int queueid; + + /** @aspace: the per-process GPU address-space */ struct msm_gem_address_space *aspace; + + /** @kref: the reference count */ struct kref ref; + + /** + * @seqno: + * + * A unique per-process sequence number. Used to detect context + * switches, without relying on keeping a, potentially dangling, + * pointer to the previous context. + */ int seqno; /** - * sysprof: + * @sysprof: * * The value of MSM_PARAM_SYSPROF set by userspace. This is * intended to be used by system profiling tools like Mesa's @@ -384,21 +398,21 @@ struct msm_context { int sysprof; /** - * comm: Overridden task comm, see MSM_PARAM_COMM + * @comm: Overridden task comm, see MSM_PARAM_COMM * * Accessed under msm_gpu::lock */ char *comm; /** - * cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE + * @cmdline: Overridden task cmdline, see MSM_PARAM_CMDLINE * * Accessed under msm_gpu::lock */ char *cmdline; /** - * elapsed: + * @elapsed: * * The total (cumulative) elapsed time GPU was busy with rendering * from this context in ns. @@ -406,7 +420,7 @@ struct msm_context { uint64_t elapsed_ns; /** - * cycles: + * @cycles: * * The total (cumulative) GPU cycles elapsed attributed to this * context. @@ -414,7 +428,7 @@ struct msm_context { uint64_t cycles; /** - * entities: + * @entities: * * Table of per-priority-level sched entities used by submitqueues * associated with this &drm_file. Because some userspace apps @@ -427,7 +441,7 @@ struct msm_context { struct drm_sched_entity *entities[NR_SCHED_PRIORITIES * MSM_GPU_MAX_RINGS]; /** - * ctx_mem: + * @ctx_mem: * * Total amount of memory of GEM buffers with handles attached for * this context. 
From patchwork Mon May 19 17:51:31 2025
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Dmitry Baryshkov, Sean Paul, Konrad Dybcio, Abhinav Kumar, Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang, Barnabás Czémán, Arnd Bergmann, Jun Nie, André Almeida, Christopher Snowhill, Jonathan Marek, Krzysztof Kozlowski, Haoxiang Li, Eugene Lepshy, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 08/40] drm/msm: Rename msm_gem_address_space -> msm_gem_vm
Date: Mon, 19 May 2025 10:51:31 -0700
Message-ID: <20250519175348.11924-9-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

Re-aligning naming to better match drm_gpuvm terminology will make things less confusing at the end of the drm_gpuvm conversion. This is just rename churn, no functional change.
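As a quick key for the 41-file diff below, these are the main identifier renames, all taken directly from the patch:

	struct msm_gem_address_space          ->  struct msm_gem_vm
	msm_gem_address_space_create()        ->  msm_gem_vm_create()
	msm_gem_address_space_put()           ->  msm_gem_vm_put()
	msm_kms_init_aspace()                 ->  msm_kms_init_vm()
	adreno_create_address_space()         ->  adreno_create_vm()
	adreno_iommu_create_address_space()   ->  adreno_iommu_create_vm()
	adreno_private_address_space_size()   ->  adreno_private_vm_size()
	funcs->create_address_space           ->  funcs->create_vm
	funcs->create_private_address_space   ->  funcs->create_private_vm
	gpu->aspace / kms->aspace / ctx->aspace  ->  gpu->vm / kms->vm / ctx->vm

Call signatures are unchanged; for example, a2xx still does msm_gem_vm_create(mmu, "gpu", SZ_16M, 0xfff * SZ_64K) where it previously called msm_gem_address_space_create() with the same arguments.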
Signed-off-by: Rob Clark Reviewed-by: Dmitry Baryshkov Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 18 ++-- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/a5xx_debugfs.c | 4 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 22 ++--- drivers/gpu/drm/msm/adreno/a5xx_power.c | 2 +- drivers/gpu/drm/msm/adreno/a5xx_preempt.c | 10 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 26 +++--- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 45 +++++---- drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 10 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 47 +++++----- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 18 ++-- .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 14 +-- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 18 ++-- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 18 ++-- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c | 14 +-- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 4 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c | 6 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 24 ++--- drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c | 12 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c | 4 +- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 18 ++-- drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 12 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 14 +-- drivers/gpu/drm/msm/msm_drv.c | 8 +- drivers/gpu/drm/msm/msm_drv.h | 10 +- drivers/gpu/drm/msm/msm_fb.c | 10 +- drivers/gpu/drm/msm/msm_fbdev.c | 2 +- drivers/gpu/drm/msm/msm_gem.c | 74 +++++++-------- drivers/gpu/drm/msm/msm_gem.h | 34 +++---- drivers/gpu/drm/msm/msm_gem_submit.c | 6 +- drivers/gpu/drm/msm/msm_gem_vma.c | 93 +++++++++---------- drivers/gpu/drm/msm/msm_gpu.c | 48 +++++----- drivers/gpu/drm/msm/msm_gpu.h | 16 ++-- drivers/gpu/drm/msm/msm_kms.c | 16 ++-- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_ringbuffer.c | 4 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 41 files changed, 349 insertions(+), 354 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 379a3d346c30..5eb063ed0b46 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; - a2xx_gpummu_params(gpu->aspace->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); DBG("%s", gpu->name); @@ -466,19 +466,19 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu) return state; } -static struct msm_gem_address_space * -a2xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) +static struct msm_gem_vm * +a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; - aspace = msm_gem_address_space_create(mmu, "gpu", SZ_16M, + vm = msm_gem_vm_create(mmu, "gpu", SZ_16M, 0xfff * SZ_64K); - if (IS_ERR(aspace) && !IS_ERR(mmu)) + if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); - return aspace; + return vm; } static u32 a2xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) @@ -504,7 +504,7 @@ static const struct adreno_gpu_funcs funcs = { #endif .gpu_state_get = a2xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = a2xx_create_address_space, + .create_vm = a2xx_create_vm, 
.get_rptr = a2xx_get_rptr, }, }; @@ -551,7 +551,7 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) else adreno_gpu->registers = a220_registers; - if (!gpu->aspace) { + if (!gpu->vm) { dev_err(dev->dev, "No memory protection without MMU\n"); if (!allow_vram_carveout) { ret = -ENXIO; diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index b6df115bb567..434e6ededf83 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -526,7 +526,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a3xx_gpu_busy, .gpu_state_get = a3xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a3xx_get_rptr, }, }; @@ -581,7 +581,7 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) goto fail; } - if (!gpu->aspace) { + if (!gpu->vm) { /* TODO we think it is possible to configure the GPU to * restrict access to VRAM carveout. But the required * registers are unknown. For now just bail out and diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index f1b18a6663f7..2c75debcfd84 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -645,7 +645,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a4xx_gpu_busy, .gpu_state_get = a4xx_gpu_state_get, .gpu_state_put = adreno_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a4xx_get_rptr, }, .get_timestamp = a4xx_get_timestamp, @@ -695,7 +695,7 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull; - if (!gpu->aspace) { + if (!gpu->vm) { /* TODO we think it is possible to configure the GPU to * restrict access to VRAM carveout. But the required * registers are unknown. 
For now just bail out and diff --git a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c index 169b8fe688f8..625a4e787d8f 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_debugfs.c @@ -116,13 +116,13 @@ reset_set(void *data, u64 val) adreno_gpu->fw[ADRENO_FW_PFP] = NULL; if (a5xx_gpu->pm4_bo) { - msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pm4_bo); a5xx_gpu->pm4_bo = NULL; } if (a5xx_gpu->pfp_bo) { - msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pfp_bo); a5xx_gpu->pfp_bo = NULL; } diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 60aef0796236..dc31bc0afca4 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -622,7 +622,7 @@ static int a5xx_ucode_load(struct msm_gpu *gpu) a5xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, sizeof(u32) * gpu->nr_rings, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a5xx_gpu->shadow_bo, + gpu->vm, &a5xx_gpu->shadow_bo, &a5xx_gpu->shadow_iova); if (IS_ERR(a5xx_gpu->shadow)) @@ -1042,22 +1042,22 @@ static void a5xx_destroy(struct msm_gpu *gpu) a5xx_preempt_fini(gpu); if (a5xx_gpu->pm4_bo) { - msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pm4_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pm4_bo); } if (a5xx_gpu->pfp_bo) { - msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->pfp_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->pfp_bo); } if (a5xx_gpu->gpmu_bo) { - msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->gpmu_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->gpmu_bo); } if (a5xx_gpu->shadow_bo) { - msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->aspace); + msm_gem_unpin_iova(a5xx_gpu->shadow_bo, gpu->vm); drm_gem_object_put(a5xx_gpu->shadow_bo); } @@ -1457,7 +1457,7 @@ static int a5xx_crashdumper_init(struct msm_gpu *gpu, struct a5xx_crashdumper *dumper) { dumper->ptr = msm_gem_kernel_new(gpu->dev, - SZ_1M, MSM_BO_WC, gpu->aspace, + SZ_1M, MSM_BO_WC, gpu->vm, &dumper->bo, &dumper->iova); if (!IS_ERR(dumper->ptr)) @@ -1557,7 +1557,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu, if (a5xx_crashdumper_run(gpu, &dumper)) { kfree(a5xx_state->hlsqregs); - msm_gem_kernel_put(dumper.bo, gpu->aspace); + msm_gem_kernel_put(dumper.bo, gpu->vm); return; } @@ -1565,7 +1565,7 @@ static void a5xx_gpu_state_get_hlsq_regs(struct msm_gpu *gpu, memcpy(a5xx_state->hlsqregs, dumper.ptr + (256 * SZ_1K), count * sizeof(u32)); - msm_gem_kernel_put(dumper.bo, gpu->aspace); + msm_gem_kernel_put(dumper.bo, gpu->vm); } static struct msm_gpu_state *a5xx_gpu_state_get(struct msm_gpu *gpu) @@ -1713,7 +1713,7 @@ static const struct adreno_gpu_funcs funcs = { .gpu_busy = a5xx_gpu_busy, .gpu_state_get = a5xx_gpu_state_get, .gpu_state_put = a5xx_gpu_state_put, - .create_address_space = adreno_create_address_space, + .create_vm = adreno_create_vm, .get_rptr = a5xx_get_rptr, }, .get_timestamp = a5xx_get_timestamp, @@ -1786,8 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->aspace) - msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, a5xx_fault_handler); + if (gpu->vm) + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ 
a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_power.c b/drivers/gpu/drm/msm/adreno/a5xx_power.c index 6b91e0bd1514..d6da7351cfbb 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_power.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_power.c @@ -363,7 +363,7 @@ void a5xx_gpmu_ucode_init(struct msm_gpu *gpu) bosize = (cmds_size + (cmds_size / TYPE4_MAX_PAYLOAD) + 1) << 2; ptr = msm_gem_kernel_new(drm, bosize, - MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &a5xx_gpu->gpmu_bo, &a5xx_gpu->gpmu_iova); if (IS_ERR(ptr)) return; diff --git a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c index 36f72c43eae8..e50221d4e6ee 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_preempt.c @@ -254,7 +254,7 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, A5XX_PREEMPT_RECORD_SIZE + A5XX_PREEMPT_COUNTER_SIZE, - MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova); + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -262,9 +262,9 @@ static int preempt_init_ring(struct a5xx_gpu *a5xx_gpu, /* The buffer to store counters needs to be unprivileged */ counters = msm_gem_kernel_new(gpu->dev, A5XX_PREEMPT_COUNTER_SIZE, - MSM_BO_WC, gpu->aspace, &counters_bo, &counters_iova); + MSM_BO_WC, gpu->vm, &counters_bo, &counters_iova); if (IS_ERR(counters)) { - msm_gem_kernel_put(bo, gpu->aspace); + msm_gem_kernel_put(bo, gpu->vm); return PTR_ERR(counters); } @@ -295,8 +295,8 @@ void a5xx_preempt_fini(struct msm_gpu *gpu) int i; for (i = 0; i < gpu->nr_rings; i++) { - msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->aspace); - msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->aspace); + msm_gem_kernel_put(a5xx_gpu->preempt_bo[i], gpu->vm); + msm_gem_kernel_put(a5xx_gpu->preempt_counters_bo[i], gpu->vm); } } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 38c0f8ef85c3..848acc382b7d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1259,15 +1259,15 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { - msm_gem_kernel_put(gmu->hfi.obj, gmu->aspace); - msm_gem_kernel_put(gmu->debug.obj, gmu->aspace); - msm_gem_kernel_put(gmu->icache.obj, gmu->aspace); - msm_gem_kernel_put(gmu->dcache.obj, gmu->aspace); - msm_gem_kernel_put(gmu->dummy.obj, gmu->aspace); - msm_gem_kernel_put(gmu->log.obj, gmu->aspace); - - gmu->aspace->mmu->funcs->detach(gmu->aspace->mmu); - msm_gem_address_space_put(gmu->aspace); + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); + msm_gem_kernel_put(gmu->debug.obj, gmu->vm); + msm_gem_kernel_put(gmu->icache.obj, gmu->vm); + msm_gem_kernel_put(gmu->dcache.obj, gmu->vm); + msm_gem_kernel_put(gmu->dummy.obj, gmu->vm); + msm_gem_kernel_put(gmu->log.obj, gmu->vm); + + gmu->vm->mmu->funcs->detach(gmu->vm->mmu); + msm_gem_vm_put(gmu->vm); } static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, @@ -1296,7 +1296,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, if (IS_ERR(bo->obj)) return PTR_ERR(bo->obj); - ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->aspace, &bo->iova, + ret = msm_gem_get_and_pin_iova_range(bo->obj, gmu->vm, &bo->iova, range_start, range_end); if (ret) { drm_gem_object_put(bo->obj); @@ -1321,9 +1321,9 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if 
(IS_ERR(mmu)) return PTR_ERR(mmu); - gmu->aspace = msm_gem_address_space_create(mmu, "gmu", 0x0, 0x80000000); - if (IS_ERR(gmu->aspace)) - return PTR_ERR(gmu->aspace); + gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + if (IS_ERR(gmu->vm)) + return PTR_ERR(gmu->vm); return 0; } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index 39fb8c774a79..cceda7d9c33a 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -62,7 +62,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 620a26638535..d05c00624f74 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(ctx->aspace->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { @@ -957,7 +957,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) msm_gem_object_set_name(a6xx_gpu->sqe_bo, "sqefw"); if (!a6xx_ucode_check_version(a6xx_gpu, a6xx_gpu->sqe_bo)) { - msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->sqe_bo); a6xx_gpu->sqe_bo = NULL; @@ -974,7 +974,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) a6xx_gpu->shadow = msm_gem_kernel_new(gpu->dev, sizeof(u32) * gpu->nr_rings, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a6xx_gpu->shadow_bo, + gpu->vm, &a6xx_gpu->shadow_bo, &a6xx_gpu->shadow_iova); if (IS_ERR(a6xx_gpu->shadow)) @@ -985,7 +985,7 @@ static int a6xx_ucode_load(struct msm_gpu *gpu) a6xx_gpu->pwrup_reglist_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV, - gpu->aspace, &a6xx_gpu->pwrup_reglist_bo, + gpu->vm, &a6xx_gpu->pwrup_reglist_bo, &a6xx_gpu->pwrup_reglist_iova); if (IS_ERR(a6xx_gpu->pwrup_reglist_ptr)) @@ -2198,12 +2198,12 @@ static void a6xx_destroy(struct msm_gpu *gpu) struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); if (a6xx_gpu->sqe_bo) { - msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->sqe_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->sqe_bo); } if (a6xx_gpu->shadow_bo) { - msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->aspace); + msm_gem_unpin_iova(a6xx_gpu->shadow_bo, gpu->vm); drm_gem_object_put(a6xx_gpu->shadow_bo); } @@ -2243,8 +2243,8 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } -static struct msm_gem_address_space * -a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) +static struct msm_gem_vm * +a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct a6xx_gpu *a6xx_gpu = to_a6xx_gpu(adreno_gpu); @@ -2258,22 +2258,22 @@ a6xx_create_address_space(struct msm_gpu *gpu, struct platform_device *pdev) !device_iommu_capable(&pdev->dev, IOMMU_CAP_CACHE_COHERENCY)) quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA; - return adreno_iommu_create_address_space(gpu, pdev, quirks); + return adreno_iommu_create_vm(gpu, pdev, quirks); } -static struct msm_gem_address_space * -a6xx_create_private_address_space(struct msm_gpu 
*gpu) +static struct msm_gem_vm * +a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(gpu->aspace->mmu); + mmu = msm_iommu_pagetable_create(gpu->vm->mmu); if (IS_ERR(mmu)) return ERR_CAST(mmu); - return msm_gem_address_space_create(mmu, + return msm_gem_vm_create(mmu, "gpu", ADRENO_VM_START, - adreno_private_address_space_size(gpu)); + adreno_private_vm_size(gpu)); } static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) @@ -2390,8 +2390,8 @@ static const struct adreno_gpu_funcs funcs = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2419,8 +2419,8 @@ static const struct adreno_gpu_funcs funcs_gmuwrapper = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2450,8 +2450,8 @@ static const struct adreno_gpu_funcs funcs_a7xx = { .gpu_state_get = a6xx_gpu_state_get, .gpu_state_put = a6xx_gpu_state_put, #endif - .create_address_space = a6xx_create_address_space, - .create_private_address_space = a6xx_create_private_address_space, + .create_vm = a6xx_create_vm, + .create_private_vm = a6xx_create_private_vm, .get_rptr = a6xx_get_rptr, .progress = a6xx_progress, }, @@ -2547,9 +2547,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->aspace) - msm_mmu_set_fault_handler(gpu->aspace->mmu, gpu, - a6xx_fault_handler); + if (gpu->vm) + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c index 341a72a67401..ff06bb75b76d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c @@ -132,7 +132,7 @@ static int a6xx_crashdumper_init(struct msm_gpu *gpu, struct a6xx_crashdumper *dumper) { dumper->ptr = msm_gem_kernel_new(gpu->dev, - SZ_1M, MSM_BO_WC, gpu->aspace, + SZ_1M, MSM_BO_WC, gpu->vm, &dumper->bo, &dumper->iova); if (!IS_ERR(dumper->ptr)) @@ -1619,7 +1619,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) a7xx_get_clusters(gpu, a6xx_state, dumper); a7xx_get_dbgahb_clusters(gpu, a6xx_state, dumper); - msm_gem_kernel_put(dumper->bo, gpu->aspace); + msm_gem_kernel_put(dumper->bo, gpu->vm); } a7xx_get_post_crashdumper_registers(gpu, a6xx_state); @@ -1631,7 +1631,7 @@ struct msm_gpu_state *a6xx_gpu_state_get(struct msm_gpu *gpu) a6xx_get_clusters(gpu, a6xx_state, dumper); a6xx_get_dbgahb_clusters(gpu, a6xx_state, dumper); - msm_gem_kernel_put(dumper->bo, gpu->aspace); + msm_gem_kernel_put(dumper->bo, gpu->vm); } } diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c index 9b5e27d2373c..b14a7c630bd0 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -343,7 +343,7 @@ static int preempt_init_ring(struct 
a6xx_gpu *a6xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, PREEMPT_RECORD_SIZE(adreno_gpu), - MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->aspace, &bo, &iova); + MSM_BO_WC | MSM_BO_MAP_PRIV, gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -361,7 +361,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, ptr = msm_gem_kernel_new(gpu->dev, PREEMPT_SMMU_INFO_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY, - gpu->aspace, &bo, &iova); + gpu->vm, &bo, &iova); if (IS_ERR(ptr)) return PTR_ERR(ptr); @@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, struct a7xx_cp_smmu_info *smmu_info_ptr = ptr; - msm_iommu_pagetable_params(gpu->aspace->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 = ttbr; @@ -404,7 +404,7 @@ void a6xx_preempt_fini(struct msm_gpu *gpu) int i; for (i = 0; i < gpu->nr_rings; i++) - msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->aspace); + msm_gem_kernel_put(a6xx_gpu->preempt_bo[i], gpu->vm); } void a6xx_preempt_init(struct msm_gpu *gpu) @@ -430,7 +430,7 @@ void a6xx_preempt_init(struct msm_gpu *gpu) a6xx_gpu->preempt_postamble_ptr = msm_gem_kernel_new(gpu->dev, PAGE_SIZE, MSM_BO_WC | MSM_BO_MAP_PRIV | MSM_BO_GPU_READONLY, - gpu->aspace, &a6xx_gpu->preempt_postamble_bo, + gpu->vm, &a6xx_gpu->preempt_postamble_bo, &a6xx_gpu->preempt_postamble_iova); preempt_prepare_postamble(a6xx_gpu); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 93fe26009511..b01d9efb8663 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev) +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev) { - return adreno_iommu_create_address_space(gpu, pdev, 0); + return adreno_iommu_create_vm(gpu, pdev, 0); } -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks) +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -224,16 +224,15 @@ adreno_iommu_create_address_space(struct msm_gpu *gpu, start = max_t(u64, SZ_16M, geometry->aperture_start); size = geometry->aperture_end - start + 1; - aspace = msm_gem_address_space_create(mmu, "gpu", - start & GENMASK_ULL(48, 0), size); + vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); - if (IS_ERR(aspace) && !IS_ERR(mmu)) + if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); - return aspace; + return vm; } -u64 adreno_private_address_space_size(struct msm_gpu *gpu) +u64 adreno_private_vm_size(struct msm_gpu *gpu) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev); @@ -274,7 +273,7 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) !READ_ONCE(gpu->crashstate)) { adreno_gpu->stall_enabled = true; - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, true); + 
gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); } spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags); } @@ -302,7 +301,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, if (adreno_gpu->stall_enabled) { adreno_gpu->stall_enabled = false; - gpu->aspace->mmu->funcs->set_stall(gpu->aspace->mmu, false); + gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); } adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags); @@ -312,7 +311,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, * it now. */ if (!do_devcoredump) { - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); } /* @@ -406,8 +405,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = 0; return 0; case MSM_PARAM_FAULTS: - if (ctx->aspace) - *value = gpu->global_faults + ctx->aspace->faults; + if (ctx->vm) + *value = gpu->global_faults + ctx->vm->faults; else *value = gpu->global_faults; return 0; @@ -415,14 +414,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, *value = gpu->suspend_count; return 0; case MSM_PARAM_VA_START: - if (ctx->aspace == gpu->aspace) + if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->aspace->va_start; + *value = ctx->vm->va_start; return 0; case MSM_PARAM_VA_SIZE: - if (ctx->aspace == gpu->aspace) + if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->aspace->va_size; + *value = ctx->vm->va_size; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; @@ -612,7 +611,7 @@ struct drm_gem_object *adreno_fw_create_bo(struct msm_gpu *gpu, void *ptr; ptr = msm_gem_kernel_new(gpu->dev, fw->size - 4, - MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->aspace, &bo, iova); + MSM_BO_WC | MSM_BO_GPU_READONLY, gpu->vm, &bo, iova); if (IS_ERR(ptr)) return ERR_CAST(ptr); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index fed9516da365..258c5c6dde2e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -602,7 +602,7 @@ static inline int adreno_is_a7xx(struct adreno_gpu *gpu) /* Put vm_start above 32b to catch issues with not setting xyz_BASE_HI */ #define ADRENO_VM_START 0x100000000ULL -u64 adreno_private_address_space_size(struct msm_gpu *gpu); +u64 adreno_private_vm_size(struct msm_gpu *gpu); int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len); int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx, @@ -645,14 +645,14 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len, * Common helper function to initialize the default address space for arm-smmu * attached targets */ -struct msm_gem_address_space * -adreno_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev); - -struct msm_gem_address_space * -adreno_iommu_create_address_space(struct msm_gpu *gpu, - struct platform_device *pdev, - unsigned long quirks); +struct msm_gem_vm * +adreno_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev); + +struct msm_gem_vm * +adreno_iommu_create_vm(struct msm_gpu *gpu, + struct platform_device *pdev, + unsigned long quirks); int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, struct adreno_smmu_fault_info *info, const 
char *block, diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 849fea580a4c..32e208ee946d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); @@ -576,13 +576,13 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc wb_enc->wb_job = job; wb_enc->wb_conn = job->connector; - aspace = phys_enc->dpu_kms->base.aspace; + vm = phys_enc->dpu_kms->base.vm; wb_cfg = &wb_enc->wb_cfg; memset(wb_cfg, 0, sizeof(struct dpu_hw_wb_cfg)); - ret = msm_framebuffer_prepare(job->fb, aspace, false); + ret = msm_framebuffer_prepare(job->fb, vm, false); if (ret) { DPU_ERROR("prep fb failed, %d\n", ret); return; @@ -596,7 +596,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc return; } - dpu_format_populate_addrs(aspace, job->fb, &wb_cfg->dest); + dpu_format_populate_addrs(vm, job->fb, &wb_cfg->dest); wb_cfg->dest.width = job->fb->width; wb_cfg->dest.height = job->fb->height; @@ -619,14 +619,14 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; if (!job->fb) return; - aspace = phys_enc->dpu_kms->base.aspace; + vm = phys_enc->dpu_kms->base.vm; - msm_framebuffer_cleanup(job->fb, aspace, false); + msm_framebuffer_cleanup(job->fb, vm, false); wb_enc->wb_job = NULL; wb_enc->wb_conn = NULL; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c index 59c9427da7dd..d115b79af771 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace, +static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -282,7 +282,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace uint32_t base_addr = 0; bool meta; - base_addr = msm_framebuffer_iova(fb, aspace, 0); + base_addr = msm_framebuffer_iova(fb, vm, 0); fmt = msm_framebuffer_format(fb); meta = MSM_FORMAT_IS_UBWC(fmt); @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_address_space *aspace } } -static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspace, +static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -363,17 +363,17 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_address_space *aspa /* Populate addresses for simple formats here */ for (i = 0; i < layout->num_planes; ++i) - layout->plane_addr[i] = msm_framebuffer_iova(fb, aspace, i); -} + layout->plane_addr[i] = msm_framebuffer_iova(fb, vm, i); + } /** * dpu_format_populate_addrs - populate buffer addresses based on * mmu, fb, and format 
found in the fb - * @aspace: address space pointer + * @vm: address space pointer * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -384,7 +384,7 @@ void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, /* Populate the addresses given the fb */ if (MSM_FORMAT_IS_UBWC(fmt) || MSM_FORMAT_IS_TILE(fmt)) - _dpu_format_populate_addrs_ubwc(aspace, fb, layout); + _dpu_format_populate_addrs_ubwc(vm, fb, layout); else - _dpu_format_populate_addrs_linear(aspace, fb, layout); + _dpu_format_populate_addrs_linear(vm, fb, layout); } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h index c6145d43aa3f..989f3e13c497 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats, return false; } -void dpu_format_populate_addrs(struct msm_gem_address_space *aspace, +void dpu_format_populate_addrs(struct msm_gem_vm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index 3305ad0623ca..bb5db6da636a 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1095,26 +1095,26 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms) { struct msm_mmu *mmu; - if (!dpu_kms->base.aspace) + if (!dpu_kms->base.vm) return; - mmu = dpu_kms->base.aspace->mmu; + mmu = dpu_kms->base.vm->mmu; mmu->funcs->detach(mmu); - msm_gem_address_space_put(dpu_kms->base.aspace); + msm_gem_vm_put(dpu_kms->base.vm); - dpu_kms->base.aspace = NULL; + dpu_kms->base.vm = NULL; } static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; - aspace = msm_kms_init_aspace(dpu_kms->dev); - if (IS_ERR(aspace)) - return PTR_ERR(aspace); + vm = msm_kms_init_vm(dpu_kms->dev); + if (IS_ERR(vm)) + return PTR_ERR(vm); - dpu_kms->base.aspace = aspace; + dpu_kms->base.vm = vm; return 0; } diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c index e03d6091f736..2640ab9e6e90 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.c @@ -71,7 +71,7 @@ static const uint32_t qcom_compressed_supported_formats[] = { /* * struct dpu_plane - local dpu plane structure - * @aspace: address space pointer + * @vm: address space pointer * @csc_ptr: Points to dpu_csc_cfg structure to use for current * @catalog: Points to dpu catalog structure * @revalidate: force revalidation of all the plane properties @@ -654,8 +654,8 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", fb->base.id); - /* cache aspace */ - pstate->aspace = kms->base.aspace; + /* cache vm */ + pstate->vm = kms->base.vm; /* * TODO: Need to sort out the msm_framebuffer_prepare() call below so @@ -664,9 +664,9 @@ static int dpu_plane_prepare_fb(struct drm_plane *plane, */ drm_gem_plane_helper_prepare_fb(plane, new_state); - if (pstate->aspace) { + if (pstate->vm) { ret = msm_framebuffer_prepare(new_state->fb, - pstate->aspace, pstate->needs_dirtyfb); + pstate->vm, pstate->needs_dirtyfb); if (ret) { DPU_ERROR("failed to prepare framebuffer\n"); 
return ret; @@ -689,7 +689,7 @@ static void dpu_plane_cleanup_fb(struct drm_plane *plane, DPU_DEBUG_PLANE(pdpu, "FB[%u]\n", old_state->fb->base.id); - msm_framebuffer_cleanup(old_state->fb, old_pstate->aspace, + msm_framebuffer_cleanup(old_state->fb, old_pstate->vm, old_pstate->needs_dirtyfb); } @@ -1353,7 +1353,7 @@ static void dpu_plane_sspp_atomic_update(struct drm_plane *plane, pstate->needs_qos_remap |= (is_rt_pipe != pdpu->is_rt_pipe); pdpu->is_rt_pipe = is_rt_pipe; - dpu_format_populate_addrs(pstate->aspace, new_state->fb, &pstate->layout); + dpu_format_populate_addrs(pstate->vm, new_state->fb, &pstate->layout); DPU_DEBUG_PLANE(pdpu, "FB[%u] " DRM_RECT_FP_FMT "->crtc%u " DRM_RECT_FMT ", %p4cc ubwc %d\n", fb->base.id, DRM_RECT_FP_ARG(&state->src), diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h index acd5725175cd..3578f52048a5 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -17,7 +17,7 @@ /** * struct dpu_plane_state: Define dpu extension of drm plane state object * @base: base drm plane state object - * @aspace: pointer to address space for input/output buffers + * @vm: pointer to address space for input/output buffers * @pipe: software pipe description * @r_pipe: software pipe description of the second pipe * @pipe_cfg: software pipe configuration @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c index b8610aa806ea..0133c0c01a0b 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_crtc.c @@ -120,7 +120,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val) struct mdp4_kms *mdp4_kms = get_kms(&mdp4_crtc->base); struct msm_kms *kms = &mdp4_kms->base.base; - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } @@ -369,7 +369,7 @@ static void update_cursor(struct drm_crtc *crtc) if (next_bo) { /* take a obj ref + iova ref when we start scanning out: */ drm_gem_object_get(next_bo); - msm_gem_get_and_pin_iova(next_bo, kms->aspace, &iova); + msm_gem_get_and_pin_iova(next_bo, kms->vm, &iova); /* enable cursor: */ mdp4_write(mdp4_kms, REG_MDP4_DMA_CURSOR_SIZE(dma), @@ -427,7 +427,7 @@ static int mdp4_crtc_cursor_set(struct drm_crtc *crtc, } if (cursor_bo) { - ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, &iova); + ret = msm_gem_get_and_pin_iova(cursor_bo, kms->vm, &iova); if (ret) goto fail; } else { diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index c469e66cfc11..94fbc20b2fbd 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,15 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); struct device *dev = mdp4_kms->dev->dev; - struct msm_gem_address_space *aspace = kms->aspace; + struct msm_gem_vm *vm = kms->vm; if (mdp4_kms->blank_cursor_iova) - msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->aspace); + msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + 
vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } if (mdp4_kms->rpm_enabled) @@ -380,7 +380,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms = NULL; struct msm_mmu *mmu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int ret; u32 major, minor; unsigned long max_clk; @@ -449,19 +449,19 @@ static int mdp4_kms_init(struct drm_device *dev) } else if (!mmu) { DRM_DEV_INFO(dev->dev, "no iommu, fallback to phys " "contig buffers for scanout\n"); - aspace = NULL; + vm = NULL; } else { - aspace = msm_gem_address_space_create(mmu, + vm = msm_gem_vm_create(mmu, "mdp4", 0x1000, 0x100000000 - 0x1000); - if (IS_ERR(aspace)) { + if (IS_ERR(vm)) { if (!IS_ERR(mmu)) mmu->funcs->destroy(mmu); - ret = PTR_ERR(aspace); + ret = PTR_ERR(vm); goto fail; } - kms->aspace = aspace; + kms->vm = vm; } ret = modeset_init(mdp4_kms); @@ -478,7 +478,7 @@ static int mdp4_kms_init(struct drm_device *dev) goto fail; } - ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->aspace, + ret = msm_gem_get_and_pin_iova(mdp4_kms->blank_cursor_bo, kms->vm, &mdp4_kms->blank_cursor_iova); if (ret) { DRM_DEV_ERROR(dev->dev, "could not pin blank-cursor bo: %d\n", ret); diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c index 3fefb2088008..7743be6167f8 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_plane.c @@ -87,7 +87,7 @@ static int mdp4_plane_prepare_fb(struct drm_plane *plane, drm_gem_plane_helper_prepare_fb(plane, new_state); - return msm_framebuffer_prepare(new_state->fb, kms->aspace, false); + return msm_framebuffer_prepare(new_state->fb, kms->vm, false); } static void mdp4_plane_cleanup_fb(struct drm_plane *plane, @@ -102,7 +102,7 @@ static void mdp4_plane_cleanup_fb(struct drm_plane *plane, return; DBG("%s: cleanup: FB[%u]", mdp4_plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, false); + msm_framebuffer_cleanup(fb, kms->vm, false); } @@ -153,13 +153,13 @@ static void mdp4_plane_set_scanout(struct drm_plane *plane, MDP4_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP0_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP1_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP2_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp4_write(mdp4_kms, REG_MDP4_PIPE_SRCP3_BASE(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } static void mdp4_write_csc_config(struct mdp4_kms *mdp4_kms, diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c index 0f653e62b4a0..298861f373b0 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_crtc.c @@ -169,7 +169,7 @@ static void unref_cursor_worker(struct drm_flip_work *work, void *val) struct mdp5_kms *mdp5_kms = get_kms(&mdp5_crtc->base); struct msm_kms *kms = &mdp5_kms->base.base; - msm_gem_unpin_iova(val, kms->aspace); + msm_gem_unpin_iova(val, kms->vm); drm_gem_object_put(val); } @@ -993,7 +993,7 @@ static int mdp5_crtc_cursor_set(struct drm_crtc *crtc, if (!cursor_bo) return -ENOENT; - ret = msm_gem_get_and_pin_iova(cursor_bo, kms->aspace, + ret = msm_gem_get_and_pin_iova(cursor_bo, 
kms->vm, &mdp5_crtc->cursor.iova); if (ret) { drm_gem_object_put(cursor_bo); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index 3fcca7a3d82e..9dca0385a42d 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,11 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_address_space *aspace = kms->aspace; + struct msm_gem_vm *vm = kms->vm; - if (aspace) { - aspace->mmu->funcs->detach(aspace->mmu); - msm_gem_address_space_put(aspace); + if (vm) { + vm->mmu->funcs->detach(vm->mmu); + msm_gem_vm_put(vm); } mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +500,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms = priv->kms; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; int i, ret; ret = mdp5_init(to_platform_device(dev->dev), dev); @@ -534,13 +534,13 @@ static int mdp5_kms_init(struct drm_device *dev) } mdelay(16); - aspace = msm_kms_init_aspace(mdp5_kms->dev); - if (IS_ERR(aspace)) { - ret = PTR_ERR(aspace); + vm = msm_kms_init_vm(mdp5_kms->dev); + if (IS_ERR(vm)) { + ret = PTR_ERR(vm); goto fail; } - kms->aspace = aspace; + kms->vm = vm; pm_runtime_put_sync(&pdev->dev); diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c index bb1601921938..9f68a4747203 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c @@ -144,7 +144,7 @@ static int mdp5_plane_prepare_fb(struct drm_plane *plane, drm_gem_plane_helper_prepare_fb(plane, new_state); - return msm_framebuffer_prepare(new_state->fb, kms->aspace, needs_dirtyfb); + return msm_framebuffer_prepare(new_state->fb, kms->vm, needs_dirtyfb); } static void mdp5_plane_cleanup_fb(struct drm_plane *plane, @@ -159,7 +159,7 @@ static void mdp5_plane_cleanup_fb(struct drm_plane *plane, return; DBG("%s: cleanup: FB[%u]", plane->name, fb->base.id); - msm_framebuffer_cleanup(fb, kms->aspace, needed_dirtyfb); + msm_framebuffer_cleanup(fb, kms->vm, needed_dirtyfb); } static int mdp5_plane_atomic_check_with_state(struct drm_crtc_state *crtc_state, @@ -478,13 +478,13 @@ static void set_scanout_locked(struct mdp5_kms *mdp5_kms, MDP5_PIPE_SRC_STRIDE_B_P3(fb->pitches[3])); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC0_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 0)); + msm_framebuffer_iova(fb, kms->vm, 0)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC1_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 1)); + msm_framebuffer_iova(fb, kms->vm, 1)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC2_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 2)); + msm_framebuffer_iova(fb, kms->vm, 2)); mdp5_write(mdp5_kms, REG_MDP5_PIPE_SRC3_ADDR(pipe), - msm_framebuffer_iova(fb, kms->aspace, 3)); + msm_framebuffer_iova(fb, kms->vm, 3)); } /* Note: mdp5_plane->pipe_lock must be locked */ diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 4d75529c0e85..16335ebd21e4 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,10 +1146,10 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int 
size) uint64_t iova; u8 *data; - msm_host->aspace = msm_gem_address_space_get(priv->kms->aspace); + msm_host->vm = msm_gem_vm_get(priv->kms->vm); data = msm_gem_kernel_new(dev, size, MSM_BO_WC, - msm_host->aspace, + msm_host->vm, &msm_host->tx_gem_obj, &iova); if (IS_ERR(data)) { @@ -1193,10 +1193,10 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) return; if (msm_host->tx_gem_obj) { - msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->aspace); - msm_gem_address_space_put(msm_host->aspace); + msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); + msm_gem_vm_put(msm_host->vm); msm_host->tx_gem_obj = NULL; - msm_host->aspace = NULL; + msm_host->vm = NULL; } if (msm_host->tx_buf) @@ -1327,7 +1327,7 @@ int dsi_dma_base_get_6g(struct msm_dsi_host *msm_host, uint64_t *dma_base) return -EINVAL; return msm_gem_get_and_pin_iova(msm_host->tx_gem_obj, - priv->kms->aspace, dma_base); + priv->kms->vm, dma_base); } int dsi_dma_base_get_v2(struct msm_dsi_host *msm_host, uint64_t *dma_base) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 29ca24548c67..903abf3532e0 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -345,7 +345,7 @@ static int context_init(struct drm_device *dev, struct drm_file *file) kref_init(&ctx->ref); msm_submitqueue_init(dev, ctx); - ctx->aspace = msm_gpu_create_private_address_space(priv->gpu, current); + ctx->vm = msm_gpu_create_private_vm(priv->gpu, current); file->driver_priv = ctx; ctx->seqno = atomic_inc_return(&ident); @@ -523,7 +523,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev, * Don't pin the memory here - just get an address so that userspace can * be productive */ - return msm_gem_get_iova(obj, ctx->aspace, iova); + return msm_gem_get_iova(obj, ctx->vm, iova); } static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, @@ -537,13 +537,13 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev, return -EINVAL; /* Only supported if per-process address space is supported: */ - if (priv->gpu->aspace == ctx->aspace) + if (priv->gpu->vm == ctx->vm) return UERR(EOPNOTSUPP, dev, "requires per-process pgtables"); if (should_fail(&fail_gem_iova, obj->size)) return -ENOMEM; - return msm_gem_set_iova(obj, ctx->aspace, iova); + return msm_gem_set_iova(obj, ctx->vm, iova); } static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index a65077855201..0e675c9a7f83 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,7 +48,7 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_address_space; +struct msm_gem_vm; struct msm_gem_vma; struct msm_disp_state; @@ -241,7 +241,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev); +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -263,11 +263,11 @@ int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needs_dirtyfb); + struct msm_gem_vm *vm, bool needs_dirtyfb); void 
msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, bool needed_dirtyfb); + struct msm_gem_vm *vm, bool needed_dirtyfb); uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane); + struct msm_gem_vm *vm, int plane); struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 09268e416843..6df318b73534 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -76,7 +76,7 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m) /* prepare/pin all the fb's bo's for scanout. */ int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needs_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); @@ -88,7 +88,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, atomic_inc(&msm_fb->prepare_count); for (i = 0; i < n; i++) { - ret = msm_gem_get_and_pin_iova(fb->obj[i], aspace, &msm_fb->iova[i]); + ret = msm_gem_get_and_pin_iova(fb->obj[i], vm, &msm_fb->iova[i]); drm_dbg_state(fb->dev, "FB[%u]: iova[%d]: %08llx (%d)\n", fb->base.id, i, msm_fb->iova[i], ret); if (ret) @@ -99,7 +99,7 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, } void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, bool needed_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); @@ -109,14 +109,14 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, refcount_dec(&msm_fb->dirtyfb); for (i = 0; i < n; i++) - msm_gem_unpin_iova(fb->obj[i], aspace); + msm_gem_unpin_iova(fb->obj[i], vm); if (!atomic_dec_return(&msm_fb->prepare_count)) memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_address_space *aspace, int plane) + struct msm_gem_vm *vm, int plane) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); return msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_fbdev.c b/drivers/gpu/drm/msm/msm_fbdev.c index c62249b1ab3d..b5969374d53f 100644 --- a/drivers/gpu/drm/msm/msm_fbdev.c +++ b/drivers/gpu/drm/msm/msm_fbdev.c @@ -122,7 +122,7 @@ int msm_fbdev_driver_fbdev_probe(struct drm_fb_helper *helper, * in panic (ie. 
lock-safe, etc) we could avoid pinning the * buffer now: */ - ret = msm_gem_get_and_pin_iova(bo, priv->kms->aspace, &paddr); + ret = msm_gem_get_and_pin_iova(bo, priv->kms->vm, &paddr); if (ret) { DRM_DEV_ERROR(dev->dev, "failed to get buffer obj iova: %d\n", ret); goto fail; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index fdeb6cf7eeb5..07a30d29248c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -402,14 +402,14 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) } static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); - vma = msm_gem_vma_new(aspace); + vma = msm_gem_vma_new(vm); if (!vma) return ERR_PTR(-ENOMEM); @@ -419,7 +419,7 @@ static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, } static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; @@ -427,7 +427,7 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, msm_gem_assert_locked(obj); list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace == aspace) + if (vma->vm == vm) return vma; } @@ -458,7 +458,7 @@ put_iova_spaces(struct drm_gem_object *obj, bool close) msm_gem_assert_locked(obj); list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->aspace) { + if (vma->vm) { msm_gem_vma_purge(vma); if (close) msm_gem_vma_close(vma); @@ -481,19 +481,19 @@ put_iova_vmas(struct drm_gem_object *obj) } static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, + struct msm_gem_vm *vm, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; msm_gem_assert_locked(obj); - vma = lookup_vma(obj, aspace); + vma = lookup_vma(obj, vm); if (!vma) { int ret; - vma = add_vma(obj, aspace); + vma = add_vma(obj, vm); if (IS_ERR(vma)) return vma; @@ -569,13 +569,13 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) } struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - return get_vma_locked(obj, aspace, 0, U64_MAX); + return get_vma_locked(obj, vm, 0, U64_MAX); } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { struct msm_gem_vma *vma; @@ -583,7 +583,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, msm_gem_assert_locked(obj); - vma = get_vma_locked(obj, aspace, range_start, range_end); + vma = get_vma_locked(obj, vm, range_start, range_end); if (IS_ERR(vma)) return PTR_ERR(vma); @@ -601,13 +601,13 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { int ret; msm_gem_lock(obj); - ret = get_and_pin_iova_range_locked(obj, aspace, iova, range_start, range_end); + ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end); msm_gem_unlock(obj); return ret; @@ -615,9 +615,9 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, /* get 
iova and pin it. Should have a matching put */ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { - return msm_gem_get_and_pin_iova_range(obj, aspace, iova, 0, U64_MAX); + return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } /* @@ -625,13 +625,13 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, * valid for the life of the object */ int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova) + struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; int ret = 0; msm_gem_lock(obj); - vma = get_vma_locked(obj, aspace, 0, U64_MAX); + vma = get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { @@ -643,9 +643,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { - struct msm_gem_vma *vma = lookup_vma(obj, aspace); + struct msm_gem_vma *vma = lookup_vma(obj, vm); if (!vma) return 0; @@ -665,20 +665,20 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. */ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova) + struct msm_gem_vm *vm, uint64_t iova) { int ret = 0; msm_gem_lock(obj); if (!iova) { - ret = clear_iova(obj, aspace); + ret = clear_iova(obj, vm); } else { struct msm_gem_vma *vma; - vma = get_vma_locked(obj, aspace, iova, iova + obj->size); + vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else if (GEM_WARN_ON(vma->iova != iova)) { - clear_iova(obj, aspace); + clear_iova(obj, vm); ret = -EBUSY; } } @@ -693,12 +693,12 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * to get rid of it */ void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { struct msm_gem_vma *vma; msm_gem_lock(obj); - vma = lookup_vma(obj, aspace); + vma = lookup_vma(obj, vm); if (!GEM_WARN_ON(!vma)) { msm_gem_unpin_locked(obj); } @@ -1016,23 +1016,23 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, list_for_each_entry(vma, &msm_obj->vmas, list) { const char *name, *comm; - if (vma->aspace) { - struct msm_gem_address_space *aspace = vma->aspace; + if (vma->vm) { + struct msm_gem_vm *vm = vma->vm; struct task_struct *task = - get_pid_task(aspace->pid, PIDTYPE_PID); + get_pid_task(vm->pid, PIDTYPE_PID); if (task) { comm = kstrdup(task->comm, GFP_KERNEL); put_task_struct(task); } else { comm = NULL; } - name = aspace->name; + name = vm->name; } else { name = comm = NULL; } - seq_printf(m, " [%s%s%s: aspace=%p, %08llx,%s]", + seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", name, comm ? ":" : "", comm ? comm : "", - vma->aspace, vma->iova, + vma->vm, vma->iova, vma->mapped ? 
"mapped" : "unmapped"); kfree(comm); } @@ -1357,7 +1357,7 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, } void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova) { void *vaddr; @@ -1368,14 +1368,14 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, return ERR_CAST(obj); if (iova) { - ret = msm_gem_get_and_pin_iova(obj, aspace, iova); + ret = msm_gem_get_and_pin_iova(obj, vm, iova); if (ret) goto err; } vaddr = msm_gem_get_vaddr(obj); if (IS_ERR(vaddr)) { - msm_gem_unpin_iova(obj, aspace); + msm_gem_unpin_iova(obj, vm); ret = PTR_ERR(vaddr); goto err; } @@ -1392,13 +1392,13 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace) + struct msm_gem_vm *vm) { if (IS_ERR_OR_NULL(bo)) return; msm_gem_put_vaddr(bo); - msm_gem_unpin_iova(bo, aspace); + msm_gem_unpin_iova(bo, vm); drm_gem_object_put(bo); } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 85f0257e83da..d2f39a371373 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -22,7 +22,7 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ -struct msm_gem_address_space { +struct msm_gem_vm { const char *name; /* NOTE: mm managed at the page level, size is in # of pages * and position mm_node->start is in # of pages: @@ -47,13 +47,13 @@ struct msm_gem_address_space { uint64_t va_size; }; -struct msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace); +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm); -void msm_gem_address_space_put(struct msm_gem_address_space *aspace); +void msm_gem_vm_put(struct msm_gem_vm *vm); -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size); struct msm_fence_context; @@ -61,12 +61,12 @@ struct msm_fence_context; struct msm_gem_vma { struct drm_mm_node node; uint64_t iova; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head list; /* node in msm_gem_object::vmas */ bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace); +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); @@ -127,18 +127,18 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t iova); + struct msm_gem_vm *vm, uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova, + struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 
range_end); int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace, uint64_t *iova); + struct msm_gem_vm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -160,10 +160,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_address_space *aspace, + uint32_t flags, struct msm_gem_vm *vm, struct drm_gem_object **bo, uint64_t *iova); void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_address_space *aspace); + struct msm_gem_vm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -257,7 +257,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 3aabf7f1da6d..a59816b6b6de 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev, kref_init(&submit->ref); submit->dev = dev; - submit->aspace = queue->ctx->aspace; + submit->vm = queue->ctx->vm; submit->gpu = gpu; submit->cmd = (void *)&submit->bos[nr_bos]; submit->queue = queue; @@ -311,7 +311,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) struct msm_gem_vma *vma; /* if locking succeeded, pin bo: */ - vma = msm_gem_get_vma_locked(obj, submit->aspace); + vma = msm_gem_get_vma_locked(obj, submit->vm); if (IS_ERR(vma)) { ret = PTR_ERR(vma); break; @@ -669,7 +669,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->aspace) && !capable(CAP_SYS_RAWIO)) { + if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); return -EPERM; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 11e842dda73c..9419692f0cc8 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -10,45 +10,44 @@ #include "msm_mmu.h" static void -msm_gem_address_space_destroy(struct kref *kref) +msm_gem_vm_destroy(struct kref *kref) { - struct msm_gem_address_space *aspace = container_of(kref, - struct msm_gem_address_space, kref); - - drm_mm_takedown(&aspace->mm); - if (aspace->mmu) - aspace->mmu->funcs->destroy(aspace->mmu); - put_pid(aspace->pid); - kfree(aspace); + struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + + drm_mm_takedown(&vm->mm); + if (vm->mmu) + vm->mmu->funcs->destroy(vm->mmu); + put_pid(vm->pid); + kfree(vm); } -void msm_gem_address_space_put(struct msm_gem_address_space *aspace) +void msm_gem_vm_put(struct msm_gem_vm *vm) { - if (aspace) - kref_put(&aspace->kref, msm_gem_address_space_destroy); + if (vm) + kref_put(&vm->kref, msm_gem_vm_destroy); } -struct 
msm_gem_address_space * -msm_gem_address_space_get(struct msm_gem_address_space *aspace) +struct msm_gem_vm * +msm_gem_vm_get(struct msm_gem_vm *vm) { - if (!IS_ERR_OR_NULL(aspace)) - kref_get(&aspace->kref); + if (!IS_ERR_OR_NULL(vm)) + kref_get(&vm->kref); - return aspace; + return vm; } /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; unsigned size = vma->node.size; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - aspace->mmu->funcs->unmap(aspace->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); vma->mapped = false; } @@ -58,7 +57,7 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; if (GEM_WARN_ON(!vma->iova)) @@ -69,7 +68,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!aspace) + if (!vm) return 0; /* @@ -81,7 +80,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = aspace->mmu->funcs->map(aspace->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); if (ret) { vma->mapped = false; @@ -93,21 +92,21 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; GEM_WARN_ON(vma->mapped); - spin_lock(&aspace->lock); + spin_lock(&vm->lock); if (vma->iova) drm_mm_remove_node(&vma->node); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); vma->iova = 0; - msm_gem_address_space_put(aspace); + msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) +struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) { struct msm_gem_vma *vma; @@ -115,7 +114,7 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) if (!vma) return NULL; - vma->aspace = aspace; + vma->vm = vm; return vma; } @@ -124,20 +123,20 @@ struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_address_space *aspace) int msm_gem_vma_init(struct msm_gem_vma *vma, int size, u64 range_start, u64 range_end) { - struct msm_gem_address_space *aspace = vma->aspace; + struct msm_gem_vm *vm = vma->vm; int ret; - if (GEM_WARN_ON(!aspace)) + if (GEM_WARN_ON(!vm)) return -EINVAL; if (GEM_WARN_ON(vma->iova)) return -EBUSY; - spin_lock(&aspace->lock); - ret = drm_mm_insert_node_in_range(&aspace->mm, &vma->node, + spin_lock(&vm->lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, size, PAGE_SIZE, 0, range_start, range_end, 0); - spin_unlock(&aspace->lock); + spin_unlock(&vm->lock); if (ret) return ret; @@ -145,33 +144,33 @@ int msm_gem_vma_init(struct msm_gem_vma *vma, int size, vma->iova = vma->node.start; vma->mapped = false; - kref_get(&aspace->kref); + kref_get(&vm->kref); return 0; } -struct msm_gem_address_space * -msm_gem_address_space_create(struct msm_mmu *mmu, const char *name, +struct msm_gem_vm * +msm_gem_vm_create(struct msm_mmu *mmu, const char *name, u64 va_start, u64 size) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; if (IS_ERR(mmu)) return ERR_CAST(mmu); - aspace = kzalloc(sizeof(*aspace), GFP_KERNEL); - if (!aspace) + vm = kzalloc(sizeof(*vm), 
GFP_KERNEL); + if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&aspace->lock); - aspace->name = name; - aspace->mmu = mmu; - aspace->va_start = va_start; - aspace->va_size = size; + spin_lock_init(&vm->lock); + vm->name = name; + vm->mmu = mmu; + vm->va_start = va_start; + vm->va_size = size; - drm_mm_init(&aspace->mm, va_start, size); + drm_mm_init(&vm->mm, va_start, size); - kref_init(&aspace->kref); + kref_init(&vm->kref); - return aspace; + return vm; } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index d786fcfad62f..0d466a2e9b32 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info = &state->fault_info; - struct msm_mmu *mmu = submit->aspace->mmu; + struct msm_mmu *mmu = submit->vm->mmu; msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -386,8 +386,8 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; - if (submit->aspace) - submit->aspace->faults++; + if (submit->vm) + submit->vm->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -492,7 +492,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->aspace->mmu->funcs->resume_translation(gpu->aspace->mmu); + gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); mutex_unlock(&gpu->lock); } @@ -829,10 +829,10 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task) +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_address_space *aspace = NULL; + struct msm_gem_vm *vm = NULL; if (!gpu) return NULL; @@ -840,16 +840,16 @@ msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *ta * If the target doesn't support private address spaces then return * the global one */ - if (gpu->funcs->create_private_address_space) { - aspace = gpu->funcs->create_private_address_space(gpu); - if (!IS_ERR(aspace)) - aspace->pid = get_pid(task_pid(task)); + if (gpu->funcs->create_private_vm) { + vm = gpu->funcs->create_private_vm(gpu); + if (!IS_ERR(vm)) + vm->pid = get_pid(task_pid(task)); } - if (IS_ERR_OR_NULL(aspace)) - aspace = msm_gem_address_space_get(gpu->aspace); + if (IS_ERR_OR_NULL(vm)) + vm = msm_gem_vm_get(gpu->vm); - return aspace; + return vm; } int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, @@ -945,18 +945,18 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->aspace = gpu->funcs->create_address_space(gpu, pdev); + gpu->vm = gpu->funcs->create_vm(gpu, pdev); - if (gpu->aspace == NULL) + if (gpu->vm == NULL) DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->aspace)) { - ret = PTR_ERR(gpu->aspace); + else if (IS_ERR(gpu->vm)) { + ret = PTR_ERR(gpu->vm); goto fail; } memptrs = msm_gem_kernel_new(drm, sizeof(struct msm_rbmemptrs) * nr_rings, - check_apriv(gpu, MSM_BO_WC), gpu->aspace, &gpu->memptrs_bo, + check_apriv(gpu, MSM_BO_WC), gpu->vm, &gpu->memptrs_bo, &memptrs_iova); if (IS_ERR(memptrs)) { @@ -1000,7 +1000,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, 
gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); platform_set_drvdata(pdev, NULL); return ret; @@ -1017,11 +1017,11 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) gpu->rb[i] = NULL; } - msm_gem_kernel_put(gpu->memptrs_bo, gpu->aspace); + msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); - if (!IS_ERR_OR_NULL(gpu->aspace)) { - gpu->aspace->mmu->funcs->detach(gpu->aspace->mmu); - msm_gem_address_space_put(gpu->aspace); + if (!IS_ERR_OR_NULL(gpu->vm)) { + gpu->vm->mmu->funcs->detach(gpu->vm->mmu); + msm_gem_vm_put(gpu->vm); } if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index c699ce0c557b..1f26ba00f773 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,10 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_address_space *(*create_address_space) - (struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_address_space *(*create_private_address_space) - (struct msm_gpu *gpu); + struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); + struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -236,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -364,8 +362,8 @@ struct msm_context { */ int queueid; - /** @aspace: the per-process GPU address-space */ - struct msm_gem_address_space *aspace; + /** @vm: the per-process GPU address-space */ + struct msm_gem_vm *vm; /** @kref: the reference count */ struct kref ref; @@ -675,8 +673,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); -struct msm_gem_address_space * -msm_gpu_create_private_address_space(struct msm_gpu *gpu, struct task_struct *task); +struct msm_gem_vm * +msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 35d5397e73b4..88504c4b842f 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void return -ENOSYS; } -struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) +struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; struct msm_mmu *mmu; struct device *mdp_dev = dev->dev; struct device *mdss_dev = mdp_dev->parent; @@ -204,17 +204,17 @@ struct msm_gem_address_space *msm_kms_init_aspace(struct drm_device *dev) return NULL; } - aspace = msm_gem_address_space_create(mmu, "mdp_kms", + vm = msm_gem_vm_create(mmu, "mdp_kms", 0x1000, 0x100000000 - 0x1000); - if (IS_ERR(aspace)) { - dev_err(mdp_dev, "aspace create, error %pe\n", aspace); + if (IS_ERR(vm)) { + dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu); - return aspace; + return vm; } - msm_mmu_set_fault_handler(aspace->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); - return aspace; + return vm; 
} void msm_drm_kms_uninit(struct device *dev) diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index 43b58d052ee6..f45996a03e15 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_address_space *aspace; + struct msm_gem_vm *vm; /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index c5651c39ac2a..bbf8503f6bb5 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -84,7 +84,7 @@ struct msm_ringbuffer *msm_ringbuffer_new(struct msm_gpu *gpu, int id, ring->start = msm_gem_kernel_new(gpu->dev, MSM_GPU_RINGBUFFER_SZ, check_apriv(gpu, MSM_BO_WC | MSM_BO_GPU_READONLY), - gpu->aspace, &ring->bo, &ring->iova); + gpu->vm, &ring->bo, &ring->iova); if (IS_ERR(ring->start)) { ret = PTR_ERR(ring->start); @@ -131,7 +131,7 @@ void msm_ringbuffer_destroy(struct msm_ringbuffer *ring) msm_fence_context_free(ring->fctx); - msm_gem_kernel_put(ring->bo, ring->gpu->aspace); + msm_gem_kernel_put(ring->bo, ring->gpu->vm); kfree(ring); } diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 1acc0fe36353..6298233c3568 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) kfree(ctx->entities[i]); } - msm_gem_address_space_put(ctx->aspace); + msm_gem_vm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx);
From patchwork Mon May 19 17:51:32 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891137 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 09/40] drm/msm: Remove vram carveout support Date: Mon, 19 May 2025 10:51:32 -0700 Message-ID: <20250519175348.11924-10-robdclark@gmail.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Rob Clark
The VRAM carveout is standing in the way of drm_gpuvm / VM_BIND support. It is also frequently broken and rarely tested, and I think it is only needed for a roughly 10-year-old, not-quite-upstream SoC (msm8974). Maybe we can add support back in later, but I'm doubtful. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 8 -- drivers/gpu/drm/msm/adreno/a3xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a4xx_gpu.c | 15 --- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/adreno_device.c | 4 - drivers/gpu/drm/msm/adreno/adreno_gpu.c | 4 +- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 1 - drivers/gpu/drm/msm/msm_drv.c | 117 +----------------- drivers/gpu/drm/msm/msm_drv.h | 11 -- drivers/gpu/drm/msm/msm_gem.c | 131 ++------------------- drivers/gpu/drm/msm/msm_gem.h | 5 - drivers/gpu/drm/msm/msm_gem_submit.c | 5 - drivers/gpu/drm/msm/msm_gpu.c | 6 +- 14 files changed, 19 insertions(+), 309 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 5eb063ed0b46..095bae92e3e8 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -551,14 +551,6 @@ struct msm_gpu *a2xx_gpu_init(struct drm_device *dev) else adreno_gpu->registers = a220_registers; - if (!gpu->vm) { - dev_err(dev->dev, "No memory protection without MMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - return gpu; fail: diff --git a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c index 434e6ededf83..a956cd79195e 100644 --- a/drivers/gpu/drm/msm/adreno/a3xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a3xx_gpu.c @@ -581,21 +581,6 @@ struct msm_gpu *a3xx_gpu_init(struct drm_device *dev) goto fail; } - if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator. - */ - DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); if (IS_ERR(icc_path)) { ret = PTR_ERR(icc_path); diff --git a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c index 2c75debcfd84..83f6329accba 100644 --- a/drivers/gpu/drm/msm/adreno/a4xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a4xx_gpu.c @@ -695,21 +695,6 @@ struct msm_gpu *a4xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0xffff0000ffff0000ull; - if (!gpu->vm) { - /* TODO we think it is possible to configure the GPU to - * restrict access to VRAM carveout. But the required - * registers are unknown. For now just bail out and - * limp along with just modesetting. If it turns out - * to not be possible to restrict access, then we must - * implement a cmdstream validator.
- */ - DRM_DEV_ERROR(dev->dev, "No memory protection without IOMMU\n"); - if (!allow_vram_carveout) { - ret = -ENXIO; - goto fail; - } - } - icc_path = devm_of_icc_get(&pdev->dev, "gfx-mem"); if (IS_ERR(icc_path)) { ret = PTR_ERR(icc_path); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index dc31bc0afca4..04138a06724b 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1786,8 +1786,7 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index d05c00624f74..f4d9cdbc5602 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2547,8 +2547,7 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - if (gpu->vm) - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/adreno_device.c b/drivers/gpu/drm/msm/adreno/adreno_device.c index f4552b8c6767..6b0390c38bff 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_device.c +++ b/drivers/gpu/drm/msm/adreno/adreno_device.c @@ -16,10 +16,6 @@ bool snapshot_debugbus = false; MODULE_PARM_DESC(snapshot_debugbus, "Include debugbus sections in GPU devcoredump (if not fused off)"); module_param_named(snapshot_debugbus, snapshot_debugbus, bool, 0600); -bool allow_vram_carveout = false; -MODULE_PARM_DESC(allow_vram_carveout, "Allow using VRAM Carveout, in place of IOMMU"); -module_param_named(allow_vram_carveout, allow_vram_carveout, bool, 0600); - int enable_preemption = -1; MODULE_PARM_DESC(enable_preemption, "Enable preemption (A7xx only) (1=on , 0=disable, -1=auto (default))"); module_param(enable_preemption, int, 0600); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index b01d9efb8663..35a99c81f7e0 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -209,7 +209,9 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); - if (IS_ERR_OR_NULL(mmu)) + if (!mmu) + return ERR_PTR(-ENODEV); + else if (IS_ERR_OR_NULL(mmu)) return ERR_CAST(mmu); geometry = msm_iommu_get_geometry(mmu); diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index 258c5c6dde2e..bbd7e664286e 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -18,7 +18,6 @@ #include "adreno_pm4.xml.h" extern bool snapshot_debugbus; -extern bool allow_vram_carveout; enum { ADRENO_FW_PM4 = 0, diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 903abf3532e0..978f1d355b42 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -46,12 +46,6 @@ #define MSM_VERSION_MINOR 12 #define MSM_VERSION_PATCHLEVEL 0 -static void msm_deinit_vram(struct drm_device *ddev); - -static char *vram = "16m"; -MODULE_PARM_DESC(vram, "Configure VRAM size (for devices without 
IOMMU/GPUMMU)"); -module_param(vram, charp, 0); - bool dumpstate; MODULE_PARM_DESC(dumpstate, "Dump KMS state on errors"); module_param(dumpstate, bool, 0600); @@ -97,8 +91,6 @@ static int msm_drm_uninit(struct device *dev) if (priv->kms) msm_drm_kms_uninit(dev); - msm_deinit_vram(ddev); - component_unbind_all(dev, ddev); ddev->dev_private = NULL; @@ -109,107 +101,6 @@ static int msm_drm_uninit(struct device *dev) return 0; } -bool msm_use_mmu(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - - /* - * a2xx comes with its own MMU - * On other platforms IOMMU can be declared specified either for the - * MDP/DPU device or for its parent, MDSS device. - */ - return priv->is_a2xx || - device_iommu_mapped(dev->dev) || - device_iommu_mapped(dev->dev->parent); -} - -static int msm_init_vram(struct drm_device *dev) -{ - struct msm_drm_private *priv = dev->dev_private; - struct device_node *node; - unsigned long size = 0; - int ret = 0; - - /* In the device-tree world, we could have a 'memory-region' - * phandle, which gives us a link to our "vram". Allocating - * is all nicely abstracted behind the dma api, but we need - * to know the entire size to allocate it all in one go. There - * are two cases: - * 1) device with no IOMMU, in which case we need exclusive - * access to a VRAM carveout big enough for all gpu - * buffers - * 2) device with IOMMU, but where the bootloader puts up - * a splash screen. In this case, the VRAM carveout - * need only be large enough for fbdev fb. But we need - * exclusive access to the buffer to avoid the kernel - * using those pages for other purposes (which appears - * as corruption on screen before we have a chance to - * load and do initial modeset) - */ - - node = of_parse_phandle(dev->dev->of_node, "memory-region", 0); - if (node) { - struct resource r; - ret = of_address_to_resource(node, 0, &r); - of_node_put(node); - if (ret) - return ret; - size = r.end - r.start + 1; - DRM_INFO("using VRAM carveout: %lx@%pa\n", size, &r.start); - - /* if we have no IOMMU, then we need to use carveout allocator. 
- * Grab the entire DMA chunk carved out in early startup in - * mach-msm: - */ - } else if (!msm_use_mmu(dev)) { - DRM_INFO("using %s VRAM carveout\n", vram); - size = memparse(vram, NULL); - } - - if (size) { - unsigned long attrs = 0; - void *p; - - priv->vram.size = size; - - drm_mm_init(&priv->vram.mm, 0, (size >> PAGE_SHIFT) - 1); - spin_lock_init(&priv->vram.lock); - - attrs |= DMA_ATTR_NO_KERNEL_MAPPING; - attrs |= DMA_ATTR_WRITE_COMBINE; - - /* note that for no-kernel-mapping, the vaddr returned - * is bogus, but non-null if allocation succeeded: - */ - p = dma_alloc_attrs(dev->dev, size, - &priv->vram.paddr, GFP_KERNEL, attrs); - if (!p) { - DRM_DEV_ERROR(dev->dev, "failed to allocate VRAM\n"); - priv->vram.paddr = 0; - return -ENOMEM; - } - - DRM_DEV_INFO(dev->dev, "VRAM: %08x->%08x\n", - (uint32_t)priv->vram.paddr, - (uint32_t)(priv->vram.paddr + size)); - } - - return ret; -} - -static void msm_deinit_vram(struct drm_device *ddev) -{ - struct msm_drm_private *priv = ddev->dev_private; - unsigned long attrs = DMA_ATTR_NO_KERNEL_MAPPING; - - if (!priv->vram.paddr) - return; - - drm_mm_takedown(&priv->vram.mm); - dma_free_attrs(ddev->dev, priv->vram.size, NULL, priv->vram.paddr, - attrs); -} - static int msm_drm_init(struct device *dev, const struct drm_driver *drv) { struct msm_drm_private *priv = dev_get_drvdata(dev); @@ -256,16 +147,12 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) goto err_destroy_wq; } - ret = msm_init_vram(ddev); - if (ret) - goto err_destroy_wq; - dma_set_max_seg_size(dev, UINT_MAX); /* Bind all our sub-components: */ ret = component_bind_all(dev, ddev); if (ret) - goto err_deinit_vram; + goto err_destroy_wq; ret = msm_gem_shrinker_init(ddev); if (ret) @@ -302,8 +189,6 @@ static int msm_drm_init(struct device *dev, const struct drm_driver *drv) return ret; -err_deinit_vram: - msm_deinit_vram(ddev); err_destroy_wq: destroy_workqueue(priv->wq); err_put_dev: diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index 0e675c9a7f83..ad509403f072 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -183,17 +183,6 @@ struct msm_drm_private { struct msm_drm_thread event_thread[MAX_CRTCS]; - /* VRAM carveout, used when no IOMMU: */ - struct { - unsigned long size; - dma_addr_t paddr; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: - */ - struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ - } vram; - struct notifier_block vmap_notifier; struct shrinker *shrinker; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 07a30d29248c..621fb4e17a2e 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -17,24 +17,8 @@ #include #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_gpu.h" -#include "msm_mmu.h" - -static dma_addr_t physaddr(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - return (((dma_addr_t)msm_obj->vram_node->start) << PAGE_SHIFT) + - priv->vram.paddr; -} - -static bool use_pages(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - return !msm_obj->vram_node; -} static int pgprot = 0; module_param(pgprot, int, 0600); @@ -139,36 +123,6 @@ static void update_lru(struct drm_gem_object *obj) mutex_unlock(&priv->lru.lock); } -/* allocate pages from VRAM carveout, used when no IOMMU: */ 
-static struct page **get_pages_vram(struct drm_gem_object *obj, int npages) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - dma_addr_t paddr; - struct page **p; - int ret, i; - - p = kvmalloc_array(npages, sizeof(struct page *), GFP_KERNEL); - if (!p) - return ERR_PTR(-ENOMEM); - - spin_lock(&priv->vram.lock); - ret = drm_mm_insert_node(&priv->vram.mm, msm_obj->vram_node, npages); - spin_unlock(&priv->vram.lock); - if (ret) { - kvfree(p); - return ERR_PTR(ret); - } - - paddr = physaddr(obj); - for (i = 0; i < npages; i++) { - p[i] = pfn_to_page(__phys_to_pfn(paddr)); - paddr += PAGE_SIZE; - } - - return p; -} - static struct page **get_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -180,10 +134,7 @@ static struct page **get_pages(struct drm_gem_object *obj) struct page **p; int npages = obj->size >> PAGE_SHIFT; - if (use_pages(obj)) - p = drm_gem_get_pages(obj); - else - p = get_pages_vram(obj, npages); + p = drm_gem_get_pages(obj); if (IS_ERR(p)) { DRM_DEV_ERROR(dev->dev, "could not get pages: %ld\n", @@ -216,18 +167,6 @@ static struct page **get_pages(struct drm_gem_object *obj) return msm_obj->pages; } -static void put_pages_vram(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_drm_private *priv = obj->dev->dev_private; - - spin_lock(&priv->vram.lock); - drm_mm_remove_node(msm_obj->vram_node); - spin_unlock(&priv->vram.lock); - - kvfree(msm_obj->pages); -} - static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -248,10 +187,7 @@ static void put_pages(struct drm_gem_object *obj) update_device_mem(obj->dev->dev_private, -obj->size); - if (use_pages(obj)) - drm_gem_put_pages(obj, msm_obj->pages, true, false); - else - put_pages_vram(obj); + drm_gem_put_pages(obj, msm_obj->pages, true, false); msm_obj->pages = NULL; update_lru(obj); @@ -1215,19 +1151,10 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 struct msm_drm_private *priv = dev->dev_private; struct msm_gem_object *msm_obj; struct drm_gem_object *obj = NULL; - bool use_vram = false; int ret; size = PAGE_ALIGN(size); - if (!msm_use_mmu(dev)) - use_vram = true; - else if ((flags & (MSM_BO_STOLEN | MSM_BO_SCANOUT)) && priv->vram.size) - use_vram = true; - - if (GEM_WARN_ON(use_vram && !priv->vram.size)) - return ERR_PTR(-EINVAL); - /* Disallow zero sized objects as they make the underlying * infrastructure grumpy */ @@ -1240,44 +1167,16 @@ struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32 msm_obj = to_msm_bo(obj); - if (use_vram) { - struct msm_gem_vma *vma; - struct page **pages; - - drm_gem_private_object_init(dev, obj, size); - - msm_gem_lock(obj); - - vma = add_vma(obj, NULL); - msm_gem_unlock(obj); - if (IS_ERR(vma)) { - ret = PTR_ERR(vma); - goto fail; - } - - to_msm_bo(obj)->vram_node = &vma->node; - - msm_gem_lock(obj); - pages = get_pages(obj); - msm_gem_unlock(obj); - if (IS_ERR(pages)) { - ret = PTR_ERR(pages); - goto fail; - } - - vma->iova = physaddr(obj); - } else { - ret = drm_gem_object_init(dev, obj, size); - if (ret) - goto fail; - /* - * Our buffers are kept pinned, so allocating them from the - * MOVABLE zone is a really bad idea, and conflicts with CMA. - * See comments above new_inode() why this is required _and_ - * expected if you're going to pin these pages. 
- */ - mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); - } + ret = drm_gem_object_init(dev, obj, size); + if (ret) + goto fail; + /* + * Our buffers are kept pinned, so allocating them from the + * MOVABLE zone is a really bad idea, and conflicts with CMA. + * See comments above new_inode() why this is required _and_ + * expected if you're going to pin these pages. + */ + mapping_set_gfp_mask(obj->filp->f_mapping, GFP_HIGHUSER); drm_gem_lru_move_tail(&priv->lru.unbacked, obj); @@ -1305,12 +1204,6 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, uint32_t size; int ret, npages; - /* if we don't have IOMMU, don't bother pretending we can import: */ - if (!msm_use_mmu(dev)) { - DRM_DEV_ERROR(dev->dev, "cannot import without IOMMU\n"); - return ERR_PTR(-EINVAL); - } - size = PAGE_ALIGN(dmabuf->size); ret = msm_gem_new_impl(dev, size, MSM_BO_WC, &obj); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index d2f39a371373..c16b11182831 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -102,11 +102,6 @@ struct msm_gem_object { struct list_head vmas; /* list of msm_gem_vma */ - /* For physically contiguous buffers. Used when we don't have - * an IOMMU. Also used for stolen/splashscreen buffer. - */ - struct drm_mm_node *vram_node; - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index a59816b6b6de..c184b1a1f522 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -669,11 +669,6 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (args->pad) return -EINVAL; - if (unlikely(!ctx->vm) && !capable(CAP_SYS_RAWIO)) { - DRM_ERROR_RATELIMITED("IOMMU support or CAP_SYS_RAWIO required!\n"); - return -EPERM; - } - /* for now, we just have 3d pipe.. 
eventually this would need to * be more clever to dispatch to appropriate gpu module: */ diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 0d466a2e9b32..b30800f80120 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -944,12 +944,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, msm_devfreq_init(gpu); - gpu->vm = gpu->funcs->create_vm(gpu, pdev); - - if (gpu->vm == NULL) - DRM_DEV_INFO(drm->dev, "%s: no IOMMU, fallback to VRAM carveout!\n", name); - else if (IS_ERR(gpu->vm)) { + if (IS_ERR(gpu->vm)) { ret = PTR_ERR(gpu->vm); goto fail; }

From patchwork Mon May 19 17:51:33 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891136
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 10/40] drm/msm: Collapse vma allocation and initialization
Date: Mon, 19 May 2025 10:51:33 -0700
Message-ID: <20250519175348.11924-11-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

Now that we've dropped vram carveout support, we can collapse vma allocation and initialization. This better matches how things work with drm_gpuvm.
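To make the collapsed flow concrete, here is a condensed sketch of the resulting constructor, mirroring the msm_gem_vma_new() introduced by the diff below (field and helper names as in this series, indentation normalized):

	struct msm_gem_vma *
	msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj,
	                u64 range_start, u64 range_end)
	{
	        struct msm_gem_vma *vma;
	        int ret;

	        /* One helper now both allocates the vma and reserves its iova: */
	        vma = kzalloc(sizeof(*vma), GFP_KERNEL);
	        if (!vma)
	                return ERR_PTR(-ENOMEM);

	        vma->vm = vm;

	        spin_lock(&vm->lock);
	        ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node,
	                                          obj->size, PAGE_SIZE, 0,
	                                          range_start, range_end, 0);
	        spin_unlock(&vm->lock);
	        if (ret) {
	                /* Nothing else has been set up yet, so unwinding is local: */
	                kfree(vma);
	                return ERR_PTR(ret);
	        }

	        vma->iova = vma->node.start;
	        vma->mapped = false;
	        INIT_LIST_HEAD(&vma->list);

	        kref_get(&vm->kref);

	        return vma;
	}

Callers no longer need the separate add_vma() plus msm_gem_vma_init() two-step, and the failure paths reduce to a single ERR_PTR() check.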
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 30 +++----------------------- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 36 +++++++++++++------------------ 3 files changed, 20 insertions(+), 50 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 621fb4e17a2e..29247911f048 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -337,23 +337,6 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *add_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; - - msm_gem_assert_locked(obj); - - vma = msm_gem_vma_new(vm); - if (!vma) - return ERR_PTR(-ENOMEM); - - list_add_tail(&vma->list, &msm_obj->vmas); - - return vma; -} - static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct msm_gem_vm *vm) { @@ -420,6 +403,7 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { + struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -427,18 +411,10 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, vma = lookup_vma(obj, vm); if (!vma) { - int ret; - - vma = add_vma(obj, vm); + vma = msm_gem_vma_new(vm, obj, range_start, range_end); if (IS_ERR(vma)) return vma; - - ret = msm_gem_vma_init(vma, obj->size, - range_start, range_end); - if (ret) { - del_vma(vma); - return ERR_PTR(ret); - } + list_add_tail(&vma->list, &msm_obj->vmas); } else { GEM_WARN_ON(vma->iova < range_start); GEM_WARN_ON((vma->iova + obj->size) > range_end); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c16b11182831..9bd78642671c 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -66,8 +66,8 @@ struct msm_gem_vma { bool mapped; }; -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm); -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct msm_gem_vma *vma); int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 9419692f0cc8..6d18364f321c 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -106,47 +106,41 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) msm_gem_vm_put(vm); } -struct msm_gem_vma *msm_gem_vma_new(struct msm_gem_vm *vm) +/* Create a new vma and allocate an iova for it */ +struct msm_gem_vma * +msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, + u64 range_start, u64 range_end) { struct msm_gem_vma *vma; + int ret; vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) - return NULL; + return ERR_PTR(-ENOMEM); vma->vm = vm; - return vma; -} - -/* Initialize a new vma and allocate an iova for it */ -int msm_gem_vma_init(struct msm_gem_vma *vma, int size, - u64 range_start, u64 range_end) -{ - struct msm_gem_vm *vm = vma->vm; - int ret; - - if (GEM_WARN_ON(!vm)) - return -EINVAL; - - if (GEM_WARN_ON(vma->iova)) - return -EBUSY; - spin_lock(&vm->lock); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - size, PAGE_SIZE, 0, + obj->size, PAGE_SIZE, 0, range_start, range_end, 0); spin_unlock(&vm->lock); if (ret) - return ret; + goto 
err_free_vma; vma->iova = vma->node.start; vma->mapped = false; + INIT_LIST_HEAD(&vma->list); + kref_get(&vm->kref); - return 0; + return vma; + +err_free_vma: + kfree(vma); + return ERR_PTR(ret); } struct msm_gem_vm *

From patchwork Mon May 19 17:51:34 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892066
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 11/40] drm/msm: Collapse vma close and delete
Date: Mon, 19 May 2025 10:51:34 -0700
Message-ID: <20250519175348.11924-12-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

This fits better with drm_gpuvm/drm_gpuva.

Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 16 +++------------- drivers/gpu/drm/msm/msm_gem_vma.c | 2 ++ 2 files changed, 5 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 29247911f048..4c10eca404e0 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -353,15 +353,6 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, return NULL; } -static void del_vma(struct msm_gem_vma *vma) -{ - if (!vma) - return; - - list_del(&vma->list); - kfree(vma); -} - /* * If close is true, this also closes the VMA (releasing the allocated * iova range) in addition to removing the iommu mapping.
In the eviction @@ -372,11 +363,11 @@ static void put_iova_spaces(struct drm_gem_object *obj, bool close) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct msm_gem_vma *vma, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { + list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { if (vma->vm) { msm_gem_vma_purge(vma); if (close) @@ -395,7 +386,7 @@ put_iova_vmas(struct drm_gem_object *obj) msm_gem_assert_locked(obj); list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - del_vma(vma); + msm_gem_vma_close(vma); } } @@ -564,7 +555,6 @@ static int clear_iova(struct drm_gem_object *obj, msm_gem_vma_purge(vma); msm_gem_vma_close(vma); - del_vma(vma); return 0; } diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 6d18364f321c..ca29e81d79d2 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -102,8 +102,10 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) spin_unlock(&vm->lock); vma->iova = 0; + list_del(&vma->list); msm_gem_vm_put(vm); + kfree(vma); } /* Create a new vma and allocate an iova for it */

From patchwork Mon May 19 17:51:35 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891135
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 12/40] drm/msm: Don't close VMAs on purge
Date: Mon, 19 May 2025 10:51:35 -0700
Message-ID: <20250519175348.11924-13-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

Previously we'd also tear down the VMA, making the address space available again. But with the drm_gpuvm conversion, this would require holding the locks of all VMs the GEM object is mapped in, which is problematic for the shrinker. Instead, just let the VMA hang around until the GEM object is freed.
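In other words, the purge path is reduced to unmapping only. A minimal sketch of the resulting shrinker-side flow (helper names as used in this series, locking and bookkeeping elided):

	static void purge_sketch(struct drm_gem_object *obj)
	{
	        /* Tear down the iommu mapping(s), but close=false keeps the
	         * VMA, and therefore the iova allocation, intact:
	         */
	        put_iova_spaces(obj, false);

	        msm_gem_vunmap(obj);
	        put_pages(obj);

	        /* The VMAs themselves are only closed later, from
	         * msm_gem_free_object(), where taking the VM locks is safe.
	         */
	}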
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 4c10eca404e0..50b866dcf439 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -763,7 +763,7 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, true); + put_iova_spaces(obj, false); msm_gem_vunmap(obj);

From patchwork Mon May 19 17:57:10 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892065
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 13/40] drm/msm: drm_gpuvm conversion
Date: Mon, 19 May 2025 10:57:10 -0700
Message-ID: <20250519175755.13037-1-robdclark@gmail.com>
In-Reply-To: <20250519175348.11924-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>

From: Rob Clark

Now that we've realigned deletion and allocation, switch over to using drm_gpuvm/drm_gpuva. This allows us to support multiple VMAs per BO per VM, i.e. mapping different parts of a single BO at different virtual addresses, which is a key requirement for sparse/VM_BIND. This prepares us for using drm_gpuvm to translate a batch of MAP/MAP_NULL/UNMAP operations from userspace into a sequence of map/remap/unmap steps for updating the page tables.
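The structural consequence is that a GEM object no longer owns a flat list of msm_gem_vma. It holds drm_gpuvm_bo objects, one per BO/VM pair, each of which in turn holds the drm_gpuva mappings. Walking all mappings of an object therefore becomes a two-level iteration, sketched here with the iterator names used in the diff below (body of the inner loop is illustrative only):

	static void for_each_mapping_sketch(struct drm_gem_object *obj,
	                                    struct drm_gpuvm *vm)
	{
	        struct drm_gpuvm_bo *vm_bo;

	        /* Level 1: one drm_gpuvm_bo per VM this object is mapped in */
	        drm_gem_for_each_gpuvm_bo (vm_bo, obj) {
	                struct drm_gpuva *vma;

	                if (vm && vm_bo->vm != vm)
	                        continue;

	                /* Level 2: with VM_BIND, a BO can have multiple VMAs
	                 * in the same VM, mapping different ranges of the BO
	                 */
	                drm_gpuvm_bo_for_each_va (vma, vm_bo) {
	                        /* e.g. inspect vma->va.addr, vma->va.range,
	                         * or to_msm_vma(vma)->mapped
	                         */
	                }
	        }
	}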
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/Kconfig | 1 + drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 5 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 7 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 5 +- drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_gem.c | 142 ++++++++++++++--------- drivers/gpu/drm/msm/msm_gem.h | 89 +++++++++++--- drivers/gpu/drm/msm/msm_gem_submit.c | 2 +- drivers/gpu/drm/msm/msm_gem_vma.c | 140 +++++++++++++++------- drivers/gpu/drm/msm/msm_kms.c | 4 +- 12 files changed, 271 insertions(+), 134 deletions(-) diff --git a/drivers/gpu/drm/msm/Kconfig b/drivers/gpu/drm/msm/Kconfig index 974bc7c0ea76..4af7e896c1d4 100644 --- a/drivers/gpu/drm/msm/Kconfig +++ b/drivers/gpu/drm/msm/Kconfig @@ -21,6 +21,7 @@ config DRM_MSM select DRM_DISPLAY_HELPER select DRM_BRIDGE_CONNECTOR select DRM_EXEC + select DRM_GPUVM select DRM_KMS_HELPER select DRM_PANEL select DRM_BRIDGE diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 095bae92e3e8..889480aa13ba 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -472,8 +472,7 @@ a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); struct msm_gem_vm *vm; - vm = msm_gem_vm_create(mmu, "gpu", SZ_16M, - 0xfff * SZ_64K); + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true); if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 848acc382b7d..77d9ff9632d1 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1311,7 +1311,7 @@ static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, return 0; } -static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) +static int a6xx_gmu_memory_probe(struct drm_device *drm, struct a6xx_gmu *gmu) { struct msm_mmu *mmu; @@ -1321,7 +1321,7 @@ static int a6xx_gmu_memory_probe(struct a6xx_gmu *gmu) if (IS_ERR(mmu)) return PTR_ERR(mmu); - gmu->vm = msm_gem_vm_create(mmu, "gmu", 0x0, 0x80000000); + gmu->vm = msm_gem_vm_create(drm, mmu, "gmu", 0x0, 0x80000000, true); if (IS_ERR(gmu->vm)) return PTR_ERR(gmu->vm); @@ -1940,7 +1940,7 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node) if (ret) goto err_put_device; - ret = a6xx_gmu_memory_probe(gmu); + ret = a6xx_gmu_memory_probe(adreno_gpu->base.dev, gmu); if (ret) goto err_put_device; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index f4d9cdbc5602..26d0a863f38c 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2271,9 +2271,8 @@ a6xx_create_private_vm(struct msm_gpu *gpu) if (IS_ERR(mmu)) return ERR_CAST(mmu); - return msm_gem_vm_create(mmu, - "gpu", ADRENO_VM_START, - adreno_private_vm_size(gpu)); + return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START, + adreno_private_vm_size(gpu), true); } static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 35a99c81f7e0..287b032fefe4 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -226,7 +226,8 @@ adreno_iommu_create_vm(struct msm_gpu *gpu, start = max_t(u64, SZ_16M, geometry->aperture_start); 
size = geometry->aperture_end - start + 1; - vm = msm_gem_vm_create(mmu, "gpu", start & GENMASK_ULL(48, 0), size); + vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", start & GENMASK_ULL(48, 0), + size, true); if (IS_ERR(vm) && !IS_ERR(mmu)) mmu->funcs->destroy(mmu); @@ -418,12 +419,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_start; + *value = ctx->vm->base.mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->va_size; + *value = ctx->vm->base.mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index 94fbc20b2fbd..d5b5628bee24 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -451,8 +451,9 @@ static int mdp4_kms_init(struct drm_device *dev) "contig buffers for scanout\n"); vm = NULL; } else { - vm = msm_gem_vm_create(mmu, - "mdp4", 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp4", + 0x1000, 0x100000000 - 0x1000, + true); if (IS_ERR(vm)) { if (!IS_ERR(mmu)) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 978f1d355b42..6ef29bc48bb0 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -776,6 +776,7 @@ static const struct file_operations fops = { static const struct drm_driver msm_driver = { .driver_features = DRIVER_GEM | + DRIVER_GEM_GPUVA | DRIVER_RENDER | DRIVER_ATOMIC | DRIVER_MODESET | diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 50b866dcf439..3b7db3b3f763 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -47,9 +47,32 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file) return 0; } +static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close); + static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) { + struct msm_context *ctx = file->driver_priv; + update_ctx_mem(file, -obj->size); + + /* + * If VM isn't created yet, nothing to cleanup. And in fact calling + * put_iova_spaces() with vm=NULL would be bad, in that it will tear- + * down the mappings of shared buffers in other contexts. + */ + if (!ctx->vm) + return; + + /* + * TODO we might need to kick this to a queue to avoid blocking + * in CLOSE ioctl + */ + dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, + msecs_to_jiffies(1000)); + + msm_gem_lock(obj); + put_iova_spaces(obj, &ctx->vm->base, true); + msm_gem_unlock(obj); } /* @@ -171,6 +194,13 @@ static void put_pages(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); + /* + * Skip gpuvm in the object free path to avoid a WARN_ON() splat. 
+ * See explaination in msm_gem_assert_locked() + */ + if (kref_read(&obj->refcount)) + drm_gpuvm_bo_gem_evict(obj, true); + if (msm_obj->pages) { if (msm_obj->sgt) { /* For non-cached buffers, ensure the new @@ -338,16 +368,25 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) } static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct msm_gem_vm *vm) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma; + struct drm_gpuvm_bo *vm_bo; msm_gem_assert_locked(obj); - list_for_each_entry(vma, &msm_obj->vmas, list) { - if (vma->vm == vm) - return vma; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + if (vma->vm == &vm->base) { + /* lookup_vma() should only be used in paths + * with at most one vma per vm + */ + GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); + + return to_msm_vma(vma); + } + } } return NULL; @@ -360,33 +399,29 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, * mapping. */ static void -put_iova_spaces(struct drm_gem_object *obj, bool close) +put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + struct drm_gpuvm_bo *vm_bo, *tmp; msm_gem_assert_locked(obj); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - if (vma->vm) { - msm_gem_vma_purge(vma); - if (close) - msm_gem_vma_close(vma); - } - } -} + drm_gem_for_each_gpuvm_bo_safe (vm_bo, tmp, obj) { + struct drm_gpuva *vma, *vmatmp; -/* Called with msm_obj locked */ -static void -put_iova_vmas(struct drm_gem_object *obj) -{ - struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct msm_gem_vma *vma, *tmp; + if (vm && vm_bo->vm != vm) + continue; - msm_gem_assert_locked(obj); + drm_gpuvm_bo_get(vm_bo); - list_for_each_entry_safe(vma, tmp, &msm_obj->vmas, list) { - msm_gem_vma_close(vma); + drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + msm_gem_vma_purge(msm_vma); + if (close) + msm_gem_vma_close(msm_vma); + } + + drm_gpuvm_bo_put(vm_bo); } } @@ -394,7 +429,6 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, struct msm_gem_vm *vm, u64 range_start, u64 range_end) { - struct msm_gem_object *msm_obj = to_msm_bo(obj); struct msm_gem_vma *vma; msm_gem_assert_locked(obj); @@ -403,12 +437,9 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); - if (IS_ERR(vma)) - return vma; - list_add_tail(&vma->list, &msm_obj->vmas); } else { - GEM_WARN_ON(vma->iova < range_start); - GEM_WARN_ON((vma->iova + obj->size) > range_end); + GEM_WARN_ON(vma->base.va.addr < range_start); + GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); } return vma; @@ -492,7 +523,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->iova; + *iova = vma->base.va.addr; pin_obj_locked(obj); } @@ -538,7 +569,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->iova; + *iova = vma->base.va.addr; } msm_gem_unlock(obj); @@ -579,7 +610,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->iova != iova)) { + } else if (GEM_WARN_ON(vma->base.va.addr != 
iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -763,7 +794,7 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, false); + put_iova_spaces(obj, NULL, false); msm_gem_vunmap(obj); @@ -771,8 +802,6 @@ void msm_gem_purge(struct drm_gem_object *obj) put_pages(obj); - put_iova_vmas(obj); - mutex_lock(&priv->lru.lock); /* A one-way transition: */ msm_obj->madv = __MSM_MADV_PURGED; @@ -803,7 +832,7 @@ void msm_gem_evict(struct drm_gem_object *obj) GEM_WARN_ON(is_unevictable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, false); + put_iova_spaces(obj, NULL, false); drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); @@ -869,7 +898,6 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct dma_resv *robj = obj->resv; - struct msm_gem_vma *vma; uint64_t off = drm_vma_node_start(&obj->vma_node); const char *madv; @@ -912,14 +940,17 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, seq_printf(m, " %08zu %9s %-32s\n", obj->size, madv, msm_obj->name); - if (!list_empty(&msm_obj->vmas)) { + if (!list_empty(&obj->gpuva.list)) { + struct drm_gpuvm_bo *vm_bo; seq_puts(m, " vmas:"); - list_for_each_entry(vma, &msm_obj->vmas, list) { - const char *name, *comm; - if (vma->vm) { - struct msm_gem_vm *vm = vma->vm; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct drm_gpuva *vma; + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + const char *name, *comm; + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct task_struct *task = get_pid_task(vm->pid, PIDTYPE_PID); if (task) { @@ -928,15 +959,14 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m, } else { comm = NULL; } - name = vm->name; - } else { - name = comm = NULL; + name = vm->base.name; + + seq_printf(m, " [%s%s%s: vm=%p, %08llx, %smapped]", + name, comm ? ":" : "", comm ? comm : "", + vma->vm, vma->va.addr, + to_msm_vma(vma)->mapped ? "" : "un"); + kfree(comm); } - seq_printf(m, " [%s%s%s: vm=%p, %08llx,%s]", - name, comm ? ":" : "", comm ? comm : "", - vma->vm, vma->iova, - vma->mapped ? 
"mapped" : "unmapped"); - kfree(comm); } seq_puts(m, "\n"); @@ -982,7 +1012,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj) list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); - put_iova_spaces(obj, true); + put_iova_spaces(obj, NULL, true); if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); @@ -992,13 +1022,10 @@ static void msm_gem_free_object(struct drm_gem_object *obj) */ kvfree(msm_obj->pages); - put_iova_vmas(obj); - drm_prime_gem_destroy(obj, msm_obj->sgt); } else { msm_gem_vunmap(obj); put_pages(obj); - put_iova_vmas(obj); } drm_gem_object_release(obj); @@ -1104,7 +1131,6 @@ static int msm_gem_new_impl(struct drm_device *dev, msm_obj->madv = MSM_MADV_WILLNEED; INIT_LIST_HEAD(&msm_obj->node); - INIT_LIST_HEAD(&msm_obj->vmas); *obj = &msm_obj->base; (*obj)->funcs = &msm_gem_object_funcs; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 9bd78642671c..f7f7e7910754 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -10,6 +10,7 @@ #include #include #include "drm/drm_exec.h" +#include "drm/drm_gpuvm.h" #include "drm/gpu_scheduler.h" #include "msm_drv.h" @@ -22,30 +23,67 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm - VM object + * + * A VM object representing a GPU (or display or GMU or ...) virtual address + * space. + * + * In the case of GPU, if per-process address spaces are supported, the address + * space is split into two VMs, which map to TTBR0 and TTBR1 in the SMMU. TTBR0 + * is used for userspace objects, and is unique per msm_context/drm_file, while + * TTBR1 is the same for all processes. (The kernel controlled ringbuffer and + * a few other kernel controlled buffers live in TTBR1.) + * + * The GPU TTBR0 vm can be managed by userspace or by the kernel, depending on + * whether userspace supports VM_BIND. All other vm's are managed by the kernel. + * (Managed by kernel means the kernel is responsible for VA allocation.) + * + * Note that because VM_BIND allows a given BO to be mapped multiple times in + * a VM, and therefore have multiple VMA's in a VM, there is an extra object + * provided by drm_gpuvm infrastructure.. the drm_gpuvm_bo, which is not + * embedded in any larger driver structure. The GEM object holds a list of + * drm_gpuvm_bo, which in turn holds a list of msm_gem_vma. A linked vma + * holds a reference to the vm_bo, and drops it when the vma is unlinked. + * So we just need to call drm_gpuvm_bo_obtain() to return a ref to an + * existing vm_bo, or create a new one. Once the vma is linked, the ref + * to the vm_bo can be dropped (since the vma is holding one). + */ struct msm_gem_vm { - const char *name; - /* NOTE: mm managed at the page level, size is in # of pages - * and position mm_node->start is in # of pages: + /** @base: Inherit from drm_gpuvm. */ + struct drm_gpuvm base; + + /** + * @mm: Memory management for kernel managed VA allocations + * + * Only used for kernel managed VMs, unused for user managed VMs. + * + * Protected by @mm_lock. 
*/ struct drm_mm mm; - spinlock_t lock; /* Protects drm_mm node allocation/removal */ + + /** @mm_lock: protects @mm node allocation/removal */ + struct spinlock mm_lock; + + /** @vm_lock: protects gpuvm insert/remove/traverse */ + struct mutex vm_lock; + + /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; - struct kref kref; - /* For address spaces associated with a specific process, this + /** + * @pid: For address spaces associated with a specific process, this * will be non-NULL: */ struct pid *pid; - /* @faults: the number of GPU hangs associated with this address space */ + /** @faults: the number of GPU hangs associated with this address space */ int faults; - /** @va_start: lowest possible address to allocate */ - uint64_t va_start; - - /** @va_size: the size of the address space (in bytes) */ - uint64_t va_size; + /** @managed: is this a kernel managed VM? */ + bool managed; }; +#define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm); @@ -53,18 +91,33 @@ msm_gem_vm_get(struct msm_gem_vm *vm); void msm_gem_vm_put(struct msm_gem_vm *vm); struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size); +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed); struct msm_fence_context; +#define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0) + +/** + * struct msm_gem_vma - a VMA mapping + * + * Represents a combination of a GEM object plus a VM. + */ struct msm_gem_vma { + /** @base: inherit from drm_gpuva */ + struct drm_gpuva base; + + /** + * @node: mm node for VA allocation + * + * Only used by kernel managed VMs + */ struct drm_mm_node node; - uint64_t iova; - struct msm_gem_vm *vm; - struct list_head list; /* node in msm_gem_object::vmas */ + + /** @mapped: Is this VMA mapped? 
*/ bool mapped; }; +#define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, @@ -100,8 +153,6 @@ struct msm_gem_object { struct sg_table *sgt; void *vaddr; - struct list_head vmas; /* list of msm_gem_vma */ - char name[32]; /* Identifier to print for the debugfs files */ /* userspace metadata backchannel */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index c184b1a1f522..86791a854c42 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -321,7 +321,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->iova; + submit->bos[i].iova = vma->base.va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index ca29e81d79d2..d1621761ef36 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -5,14 +5,13 @@ */ #include "msm_drv.h" -#include "msm_fence.h" #include "msm_gem.h" #include "msm_mmu.h" static void -msm_gem_vm_destroy(struct kref *kref) +msm_gem_vm_free(struct drm_gpuvm *gpuvm) { - struct msm_gem_vm *vm = container_of(kref, struct msm_gem_vm, kref); + struct msm_gem_vm *vm = container_of(gpuvm, struct msm_gem_vm, base); drm_mm_takedown(&vm->mm); if (vm->mmu) @@ -25,14 +24,14 @@ msm_gem_vm_destroy(struct kref *kref) void msm_gem_vm_put(struct msm_gem_vm *vm) { if (vm) - kref_put(&vm->kref, msm_gem_vm_destroy); + drm_gpuvm_put(&vm->base); } struct msm_gem_vm * msm_gem_vm_get(struct msm_gem_vm *vm) { if (!IS_ERR_OR_NULL(vm)) - kref_get(&vm->kref); + drm_gpuvm_get(&vm->base); return vm; } @@ -40,14 +39,14 @@ msm_gem_vm_get(struct msm_gem_vm *vm) /* Actually unmap memory for the vma */ void msm_gem_vma_purge(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; - unsigned size = vma->node.size; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + unsigned size = vma->base.va.range; /* Don't do anything if the memory isn't mapped */ if (!vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->iova, size); + vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); vma->mapped = false; } @@ -57,10 +56,10 @@ int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); int ret; - if (GEM_WARN_ON(!vma->iova)) + if (GEM_WARN_ON(!vma->base.va.addr)) return -EINVAL; if (vma->mapped) @@ -68,9 +67,6 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, vma->mapped = true; - if (!vm) - return 0; - /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -80,7 +76,7 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->iova, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); if (ret) { vma->mapped = false; @@ -92,19 +88,20 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, /* Close an iova. 
Warn if it is still in use */ void msm_gem_vma_close(struct msm_gem_vma *vma) { - struct msm_gem_vm *vm = vma->vm; + struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); GEM_WARN_ON(vma->mapped); - spin_lock(&vm->lock); - if (vma->iova) + spin_lock(&vm->mm_lock); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->lock); + spin_unlock(&vm->mm_lock); - vma->iova = 0; - list_del(&vma->list); + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + drm_gpuva_unlink(&vma->base); + mutex_unlock(&vm->vm_lock); - msm_gem_vm_put(vm); kfree(vma); } @@ -113,6 +110,7 @@ struct msm_gem_vma * msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -120,36 +118,83 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, if (!vma) return ERR_PTR(-ENOMEM); - vma->vm = vm; + if (vm->managed) { + spin_lock(&vm->mm_lock); + ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, + obj->size, PAGE_SIZE, 0, + range_start, range_end, 0); + spin_unlock(&vm->mm_lock); - spin_lock(&vm->lock); - ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, - obj->size, PAGE_SIZE, 0, - range_start, range_end, 0); - spin_unlock(&vm->lock); + if (ret) + goto err_free_vma; - if (ret) - goto err_free_vma; + range_start = vma->node.start; + range_end = range_start + obj->size; + } - vma->iova = vma->node.start; + GEM_WARN_ON((range_end - range_start) > obj->size); + + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped = false; - INIT_LIST_HEAD(&vma->list); + mutex_lock(&vm->vm_lock); + ret = drm_gpuva_insert(&vm->base, &vma->base); + mutex_unlock(&vm->vm_lock); + if (ret) + goto err_free_range; - kref_get(&vm->kref); + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj); + if (IS_ERR(vm_bo)) { + ret = PTR_ERR(vm_bo); + goto err_va_remove; + } + + mutex_lock(&vm->vm_lock); + drm_gpuvm_bo_extobj_add(vm_bo); + drm_gpuva_link(&vma->base, vm_bo); + mutex_unlock(&vm->vm_lock); + GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); return vma; +err_va_remove: + mutex_lock(&vm->vm_lock); + drm_gpuva_remove(&vma->base); + mutex_unlock(&vm->vm_lock); +err_free_range: + if (vm->managed) + drm_mm_remove_node(&vma->node); err_free_vma: kfree(vma); return ERR_PTR(ret); } +static const struct drm_gpuvm_ops msm_gpuvm_ops = { + .vm_free = msm_gem_vm_free, +}; + +/** + * msm_gem_vm_create() - Create and initialize a &msm_gem_vm + * @drm: the drm device + * @mmu: the backing MMU objects handling mapping/unmapping + * @name: the name of the VM + * @va_start: the start offset of the VA space + * @va_size: the size of the VA space + * @managed: is it a kernel managed VM? + * + * In a kernel managed VM, the kernel handles address allocation, and only + * synchronous operations are supported. In a user managed VM, userspace + * handles virtual address allocation, and both async and sync operations + * are supported. + */ struct msm_gem_vm * -msm_gem_vm_create(struct msm_mmu *mmu, const char *name, - u64 va_start, u64 size) +msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, + u64 va_start, u64 va_size, bool managed) { + enum drm_gpuvm_flags flags = managed ? 
DRM_GPUVM_VA_WEAK_REF : 0; struct msm_gem_vm *vm; + struct drm_gem_object *dummy_gem; + int ret = 0; if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -158,15 +203,28 @@ msm_gem_vm_create(struct msm_mmu *mmu, const char *name, if (!vm) return ERR_PTR(-ENOMEM); - spin_lock_init(&vm->lock); - vm->name = name; - vm->mmu = mmu; - vm->va_start = va_start; - vm->va_size = size; + dummy_gem = drm_gpuvm_resv_object_alloc(drm); + if (!dummy_gem) { + ret = -ENOMEM; + goto err_free_vm; + } + + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem, + va_start, va_size, 0, 0, &msm_gpuvm_ops); + drm_gem_object_put(dummy_gem); + + spin_lock_init(&vm->mm_lock); + mutex_init(&vm->vm_lock); - drm_mm_init(&vm->mm, va_start, size); + vm->mmu = mmu; + vm->managed = managed; - kref_init(&vm->kref); + drm_mm_init(&vm->mm, va_start, va_size); return vm; + +err_free_vm: + kfree(vm); + return ERR_PTR(ret); + } diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 88504c4b842f..6458bd82a0cd 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -204,8 +204,8 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) return NULL; } - vm = msm_gem_vm_create(mmu, "mdp_kms", - 0x1000, 0x100000000 - 0x1000); + vm = msm_gem_vm_create(dev, mmu, "mdp_kms", + 0x1000, 0x100000000 - 0x1000, true); if (IS_ERR(vm)) { dev_err(mdp_dev, "vm create, error %pe\n", vm); mmu->funcs->destroy(mmu);
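
The net effect of this first patch: msm_gem_vm now embeds struct drm_gpuvm and lets the GPUVM core own the VM lifecycle, with the driver only supplying a vm_free callback. A minimal sketch of that embedding pattern, using hypothetical example_* names rather than code from the series:

	struct example_vm {
		struct drm_gpuvm base;	/* embedded, not a pointer */
		void *driver_state;	/* driver-private bits */
	};

	#define to_example_vm(x) container_of(x, struct example_vm, base)

	/* called by drm_gpuvm_put() when the embedded refcount drops to zero */
	static void example_vm_free(struct drm_gpuvm *gpuvm)
	{
		struct example_vm *vm = to_example_vm(gpuvm);

		/* tear down driver_state first, then free the container */
		kfree(vm);
	}

	static const struct drm_gpuvm_ops example_gpuvm_ops = {
		.vm_free = example_vm_free,
	};

With this in place, the driver's open-coded kref_get()/kref_put() calls become drm_gpuvm_get()/drm_gpuvm_put() on &vm->base, which is exactly what the msm_gem_vm_get()/msm_gem_vm_put() hunks above reduce to.
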
From patchwork Mon May 19 17:57:11 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891134
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org, linaro-mm-sig@lists.linaro.org
Subject: [PATCH v5 14/40] drm/msm: Convert vm locking
Date: Mon, 19 May 2025 10:57:11 -0700
Message-ID: <20250519175755.13037-2-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Convert to using the gpuvm's r_obj for serializing access to the VM. This way we can use the drm_exec helper for dealing with deadlock detection and backoff. This will let us deal with upcoming locking order conflicts with the VM_BIND implementation (ie. in some scenarios we need to acquire the obj lock first, for example to iterate all the VMs an obj is bound in, and in other scenarios we need to acquire the VM lock first).

Signed-off-by: Rob Clark
---
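A condensed view of the locking pattern this patch introduces, pulled out of the msm_gem_lock_vm_and_obj() hunk below and lightly annotated (an editorial sketch, not an addition to the patch):

	struct drm_exec exec;
	int ret = 0;

	/* no flags; hint that at most two objects get locked */
	drm_exec_init(&exec, 0, 2);
	drm_exec_until_all_locked (&exec) {
		/* always take the VM's reservation object first... */
		ret = drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(&vm->base));
		/* ...then the GEM object, unless it shares the VM's resv */
		if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base)))
			ret = drm_exec_lock_obj(&exec, obj);
		/* on ww-mutex contention, drop locks and rerun the block */
		drm_exec_retry_on_contention(&exec);
		if (ret)
			break;
	}

	/* ... modify the VM and/or obj ... */

	drm_exec_fini(&exec); /* drop locks */

Because drm_exec_retry_on_contention() restarts the whole drm_exec_until_all_locked() block after backing off, callers get deadlock avoidance without hand-rolled trylock loops, and the same pattern extends to the many-object VM_BIND case.
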
 drivers/gpu/drm/msm/msm_gem.c | 35 ++++++++--- drivers/gpu/drm/msm/msm_gem.h | 37 ++++++++++-- drivers/gpu/drm/msm/msm_gem_shrinker.c | 80 +++++++++++++++++++++++--- drivers/gpu/drm/msm/msm_gem_submit.c | 9 ++- drivers/gpu/drm/msm/msm_gem_vma.c | 27 ++++----- 5 files changed, 150 insertions(+), 38 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 3b7db3b3f763..b7055805a5dd 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -52,6 +52,7 @@ static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bo static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) { struct msm_context *ctx = file->driver_priv; + struct drm_exec exec; update_ctx_mem(file, -obj->size); @@ -70,9 +71,9 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false, msecs_to_jiffies(1000)); - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); put_iova_spaces(obj, &ctx->vm->base, true); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ } /* @@ -538,11 +539,12 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t *iova, u64 range_start, u64 range_end) { + struct drm_exec exec; int ret; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); ret = get_and_pin_iova_range_locked(obj, vm, iova, range_start, range_end); - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -562,16 +564,17 @@ int msm_gem_get_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t *iova) { struct msm_gem_vma *vma; + struct drm_exec exec; int ret = 0; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); vma = get_vma_locked(obj, vm, 0, U64_MAX); if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { *iova = vma->base.va.addr; } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -600,9 +603,10 @@ static int clear_iova(struct drm_gem_object *obj, int msm_gem_set_iova(struct drm_gem_object *obj, struct msm_gem_vm *vm, uint64_t iova) { + struct drm_exec exec; int ret = 0; - msm_gem_lock(obj); + msm_gem_lock_vm_and_obj(&exec, obj, vm); if (!iova) { ret = clear_iova(obj, vm); } else { @@ -615,7 +619,7 @@ int msm_gem_set_iova(struct drm_gem_object *obj, ret = -EBUSY; } } - msm_gem_unlock(obj); + drm_exec_fini(&exec); /* drop locks */ return ret; } @@ -1007,12 +1011,27 @@ static void msm_gem_free_object(struct drm_gem_object *obj) struct msm_gem_object *msm_obj = to_msm_bo(obj); struct drm_device *dev = obj->dev; struct msm_drm_private *priv = dev->dev_private; + struct drm_exec exec; mutex_lock(&priv->obj_lock); list_del(&msm_obj->node); mutex_unlock(&priv->obj_lock); + /* + * We need to lock any VMs the object is still attached to, but not + * the object itself (see explanation in
msm_gem_assert_locked()), + * so just open-code this special case: + */ + drm_exec_init(&exec, 0, 0); + drm_exec_until_all_locked (&exec) { + struct drm_gpuvm_bo *vm_bo; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm)); + drm_exec_retry_on_contention(&exec); + } + } put_iova_spaces(obj, NULL, true); + drm_exec_fini(&exec); /* drop locks */ if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index f7f7e7910754..36a846e9b943 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -62,12 +62,6 @@ struct msm_gem_vm { */ struct drm_mm mm; - /** @mm_lock: protects @mm node allocation/removal */ - struct spinlock mm_lock; - - /** @vm_lock: protects gpuvm insert/remove/traverse */ - struct mutex vm_lock; - /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; @@ -246,6 +240,37 @@ msm_gem_unlock(struct drm_gem_object *obj) dma_resv_unlock(obj->resv); } +/** + * msm_gem_lock_vm_and_obj() - Helper to lock an obj + VM + * @exec: the exec context helper which will be initialized + * @obj: the GEM object to lock + * @vm: the VM to lock + * + * Operations which modify a VM frequently need to lock both the VM and + * the object being mapped/unmapped/etc. This helper uses drm_exec to + * acquire both locks, dealing with potential deadlock/backoff scenarios + * which arise when multiple locks are involved. + */ +static inline int +msm_gem_lock_vm_and_obj(struct drm_exec *exec, + struct drm_gem_object *obj, + struct msm_gem_vm *vm) +{ + int ret = 0; + + drm_exec_init(exec, 0, 2); + drm_exec_until_all_locked (exec) { + ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base)); + if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base))) + ret = drm_exec_lock_obj(exec, obj); + drm_exec_retry_on_contention(exec); + if (GEM_WARN_ON(ret)) + break; + } + + return ret; +} + static inline void msm_gem_assert_locked(struct drm_gem_object *obj) { diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c index de185fc34084..5faf6227584a 100644 --- a/drivers/gpu/drm/msm/msm_gem_shrinker.c +++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c @@ -43,6 +43,75 @@ msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) return count; } +static bool +with_vm_locks(struct ww_acquire_ctx *ticket, + void (*fn)(struct drm_gem_object *obj), + struct drm_gem_object *obj) +{ + /* + * Track last locked entry for unwinding locks in error and + * success paths + */ + struct drm_gpuvm_bo *vm_bo, *last_locked = NULL; + int ret = 0; + + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm); + + if (resv == obj->resv) + continue; + + ret = dma_resv_lock(resv, ticket); + + /* + * Since we already skip the case when the VM and obj + * share a resv (ie. _NO_SHARE objs), we don't expect + * to hit a double-locking scenario... which the lock + * unwinding cannot really cope with. + */ + WARN_ON(ret == -EALREADY); + + /* + * Don't bother with slow-lock / backoff / retry sequence, + * if we can't get the lock just give up and move on to + * the next object.
+ */ + if (ret) + goto out_unlock; + + /* + * Hold a ref to prevent the vm_bo from being freed + * and removed from the obj's gpuva list, as that would + * result in missing the unlock below + */ + drm_gpuvm_bo_get(vm_bo); + + last_locked = vm_bo; + } + + fn(obj); + +out_unlock: + if (last_locked) { + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + struct dma_resv *resv = drm_gpuvm_resv(vm_bo->vm); + + if (resv == obj->resv) + continue; + + dma_resv_unlock(resv); + + /* Drop the ref taken while locking: */ + drm_gpuvm_bo_put(vm_bo); + + if (last_locked == vm_bo) + break; + } + } + + return ret == 0; +} + static bool purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) { @@ -52,9 +121,7 @@ purge(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) if (msm_gem_active(obj)) return false; - msm_gem_purge(obj); - - return true; + return with_vm_locks(ticket, msm_gem_purge, obj); } static bool @@ -66,9 +133,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket) if (msm_gem_active(obj)) return false; - msm_gem_evict(obj); - - return true; + return with_vm_locks(ticket, msm_gem_evict, obj); } static bool @@ -100,6 +165,7 @@ static unsigned long msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) { struct msm_drm_private *priv = shrinker->private_data; + struct ww_acquire_ctx ticket; struct { struct drm_gem_lru *lru; bool (*shrink)(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket); @@ -124,7 +190,7 @@ msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) drm_gem_lru_scan(stages[i].lru, nr, &stages[i].remaining, stages[i].shrink, - NULL); + &ticket); nr -= stages[i].freed; freed += stages[i].freed; remaining += stages[i].remaining; diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 86791a854c42..6924d03026ba 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -256,11 +256,18 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, /* This is where we make sure all the bo's are reserved and pin'd: */ static int submit_lock_objects(struct msm_gem_submit *submit) { + unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT; int ret; - drm_exec_init(&submit->exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos); +// TODO need to add vm_bind path which locks vm resv + external objs + drm_exec_init(&submit->exec, flags, submit->nr_bos); drm_exec_until_all_locked (&submit->exec) { + ret = drm_exec_lock_obj(&submit->exec, + drm_gpuvm_resv_obj(&submit->vm->base)); + drm_exec_retry_on_contention(&submit->exec); + if (ret) + goto error; for (unsigned i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; ret = drm_exec_prepare_obj(&submit->exec, obj, 1); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index d1621761ef36..e294e7f6e723 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -92,15 +92,13 @@ void msm_gem_vma_close(struct msm_gem_vma *vma) GEM_WARN_ON(vma->mapped); - spin_lock(&vm->mm_lock); + drm_gpuvm_resv_assert_held(&vm->base); + if (vma->base.va.addr) drm_mm_remove_node(&vma->node); - spin_unlock(&vm->mm_lock); - mutex_lock(&vm->vm_lock); drm_gpuva_remove(&vma->base); drm_gpuva_unlink(&vma->base); - mutex_unlock(&vm->vm_lock); kfree(vma); } @@ -114,16 +112,16 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, struct msm_gem_vma *vma; int ret; + drm_gpuvm_resv_assert_held(&vm->base); +
vma = kzalloc(sizeof(*vma), GFP_KERNEL); if (!vma) return ERR_PTR(-ENOMEM); if (vm->managed) { - spin_lock(&vm->mm_lock); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, range_start, range_end, 0); - spin_unlock(&vm->mm_lock); if (ret) goto err_free_vma; @@ -137,9 +135,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); vma->mapped = false; - mutex_lock(&vm->vm_lock); ret = drm_gpuva_insert(&vm->base, &vma->base); - mutex_unlock(&vm->vm_lock); if (ret) goto err_free_range; @@ -149,18 +145,14 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, goto err_va_remove; } - mutex_lock(&vm->vm_lock); drm_gpuvm_bo_extobj_add(vm_bo); drm_gpuva_link(&vma->base, vm_bo); - mutex_unlock(&vm->vm_lock); GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); return vma; err_va_remove: - mutex_lock(&vm->vm_lock); drm_gpuva_remove(&vma->base); - mutex_unlock(&vm->vm_lock); err_free_range: if (vm->managed) drm_mm_remove_node(&vma->node); @@ -191,7 +183,13 @@ struct msm_gem_vm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed) { - enum drm_gpuvm_flags flags = managed ? DRM_GPUVM_VA_WEAK_REF : 0; + /* + * We mostly want to use DRM_GPUVM_RESV_PROTECTED, except that + * makes drm_gpuvm_bo_evict() a no-op for extobjs (ie. we lose + * tracking that an extobj is evicted) :facepalm: + */ + enum drm_gpuvm_flags flags = + (managed ? DRM_GPUVM_VA_WEAK_REF : 0); struct msm_gem_vm *vm; struct drm_gem_object *dummy_gem; int ret = 0; @@ -213,9 +211,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, va_start, va_size, 0, 0, &msm_gpuvm_ops); drm_gem_object_put(dummy_gem); - spin_lock_init(&vm->mm_lock); - mutex_init(&vm->vm_lock); - vm->mmu = mmu; vm->managed = managed;
From patchwork Mon May 19 17:57:12 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892064
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Jessica Zhang, Barnabás Czémán, Arnd Bergmann, Jonathan Marek, Krzysztof Kozlowski, Eugene Lepshy, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 15/40] drm/msm: Use drm_gpuvm types more
Date: Mon, 19 May 2025 10:57:12 -0700
Message-ID: <20250519175755.13037-3-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Most of the driver code doesn't need to reach into msm-specific fields, so just use the drm_gpuvm/drm_gpuva types directly. This should hopefully improve commonality with other drivers and make the code easier to understand.

Signed-off-by: Rob Clark
---
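The shape of the conversion, condensed from the diffs below (an orientation sketch with hypothetical helper names, not code from the patch): generic VM state is read directly from drm_gpuva/drm_gpuvm, and only genuinely msm-private state goes through the container_of() wrappers the earlier patches introduced.

	/* generic GPUVM state: any gpuvm-aware code can read this */
	static u64 vma_iova(struct drm_gpuva *vma)
	{
		return vma->va.addr;
	}

	/* msm-private state: go through the container_of() wrapper */
	static bool vma_is_mapped(struct drm_gpuva *vma)
	{
		return to_msm_vma(vma)->mapped;
	}

	static struct msm_mmu *vm_mmu(struct drm_gpuvm *vm)
	{
		return to_msm_vm(vm)->mmu;
	}

This is why hunks like "submit->bos[i].iova = vma->va.addr" and "to_msm_vm(gpu->vm)->mmu" repeat throughout the patch.
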
 drivers/gpu/drm/msm/adreno/a2xx_gpu.c | 6 +- drivers/gpu/drm/msm/adreno/a5xx_gpu.c | 3 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 +- drivers/gpu/drm/msm/adreno/a6xx_gmu.h | 2 +- drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 11 +-- drivers/gpu/drm/msm/adreno/a6xx_preempt.c | 2 +- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 21 +++-- drivers/gpu/drm/msm/adreno/adreno_gpu.h | 4 +- .../drm/msm/disp/dpu1/dpu_encoder_phys_wb.c | 4 +- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c | 6 +- drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h | 2 +- drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c | 6 +- drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h | 2 +- drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 11 +-- drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c | 11 +-- drivers/gpu/drm/msm/dsi/dsi_host.c | 6 +- drivers/gpu/drm/msm/msm_drv.h | 19 ++--- drivers/gpu/drm/msm/msm_fb.c | 14 ++-- drivers/gpu/drm/msm/msm_gem.c | 82 +++++++++---------- drivers/gpu/drm/msm/msm_gem.h | 59 ++++++------- drivers/gpu/drm/msm/msm_gem_submit.c | 6 +- drivers/gpu/drm/msm/msm_gem_vma.c | 70 +++++++--------- drivers/gpu/drm/msm/msm_gpu.c | 21 +++-- drivers/gpu/drm/msm/msm_gpu.h | 10 +-- drivers/gpu/drm/msm/msm_kms.c | 6 +- drivers/gpu/drm/msm/msm_kms.h | 2 +- drivers/gpu/drm/msm/msm_submitqueue.c | 2 +- 27 files changed, 190 insertions(+), 204 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c index 889480aa13ba..ec38db45d8a3 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpu.c @@ -113,7 +113,7 @@ static int a2xx_hw_init(struct msm_gpu *gpu) uint32_t *ptr, len; int i, ret; - a2xx_gpummu_params(gpu->vm->mmu, &pt_base, &tran_error); + a2xx_gpummu_params(to_msm_vm(gpu->vm)->mmu, &pt_base, &tran_error); DBG("%s", gpu->name); @@ -466,11 +466,11 @@ static struct msm_gpu_state *a2xx_gpu_state_get(struct msm_gpu *gpu) return state; } -static struct msm_gem_vm * +static struct drm_gpuvm * a2xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct msm_mmu *mmu = a2xx_gpummu_new(&pdev->dev, gpu); - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_gem_vm_create(gpu->dev, mmu, "gpu", SZ_16M, 0xfff * SZ_64K, true); diff --git a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c index 04138a06724b..ee927d8cc0dc 100644 --- a/drivers/gpu/drm/msm/adreno/a5xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a5xx_gpu.c @@ -1786,7 +1786,8 @@ struct msm_gpu *a5xx_gpu_init(struct drm_device *dev) return ERR_PTR(ret); } - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a5xx_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a5xx_fault_handler); /* Set up the preemption specific bits and pieces for each ringbuffer */ a5xx_preempt_init(gpu); diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c index 77d9ff9632d1..28e6705c6da6 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c +++
b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c @@ -1259,6 +1259,8 @@ int a6xx_gmu_stop(struct a6xx_gpu *a6xx_gpu) static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) { + struct msm_mmu *mmu = to_msm_vm(gmu->vm)->mmu; + msm_gem_kernel_put(gmu->hfi.obj, gmu->vm); msm_gem_kernel_put(gmu->debug.obj, gmu->vm); msm_gem_kernel_put(gmu->icache.obj, gmu->vm); @@ -1266,8 +1268,8 @@ static void a6xx_gmu_memory_free(struct a6xx_gmu *gmu) msm_gem_kernel_put(gmu->dummy.obj, gmu->vm); msm_gem_kernel_put(gmu->log.obj, gmu->vm); - gmu->vm->mmu->funcs->detach(gmu->vm->mmu); - msm_gem_vm_put(gmu->vm); + mmu->funcs->detach(mmu); + drm_gpuvm_put(gmu->vm); } static int a6xx_gmu_memory_alloc(struct a6xx_gmu *gmu, struct a6xx_gmu_bo *bo, diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h index cceda7d9c33a..5da36226b93d 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.h +++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.h @@ -62,7 +62,7 @@ struct a6xx_gmu { /* For serializing communication with the GMU: */ struct mutex lock; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; void __iomem *mmio; void __iomem *rscc; diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index 26d0a863f38c..c43a443661e4 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -120,7 +120,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu, if (ctx->seqno == ring->cur_ctx_seqno) return; - if (msm_iommu_pagetable_params(ctx->vm->mmu, &ttbr, &asid)) + if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid)) return; if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) { @@ -2243,7 +2243,7 @@ static void a6xx_gpu_set_freq(struct msm_gpu *gpu, struct dev_pm_opp *opp, mutex_unlock(&a6xx_gpu->gmu.lock); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); @@ -2261,12 +2261,12 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) return adreno_iommu_create_vm(gpu, pdev, quirks); } -static struct msm_gem_vm * +static struct drm_gpuvm * a6xx_create_private_vm(struct msm_gpu *gpu) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(gpu->vm->mmu); + mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); if (IS_ERR(mmu)) return ERR_CAST(mmu); @@ -2546,7 +2546,8 @@ struct msm_gpu *a6xx_gpu_init(struct drm_device *dev) adreno_gpu->uche_trap_base = 0x1fffffffff000ull; - msm_mmu_set_fault_handler(gpu->vm->mmu, gpu, a6xx_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(gpu->vm)->mmu, gpu, + a6xx_fault_handler); a6xx_calc_ubwc_config(adreno_gpu); /* Set up the preemption specific bits and pieces for each ringbuffer */ diff --git a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c index b14a7c630bd0..7fd560a2c1ce 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_preempt.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_preempt.c @@ -376,7 +376,7 @@ static int preempt_init_ring(struct a6xx_gpu *a6xx_gpu, struct a7xx_cp_smmu_info *smmu_info_ptr = ptr; - msm_iommu_pagetable_params(gpu->vm->mmu, &ttbr, &asid); + msm_iommu_pagetable_params(to_msm_vm(gpu->vm)->mmu, &ttbr, &asid); smmu_info_ptr->magic = GEN7_CP_SMMU_INFO_MAGIC; smmu_info_ptr->ttbr0 = ttbr; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index 287b032fefe4..f6624a246694 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ 
b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -191,21 +191,21 @@ int adreno_zap_shader_load(struct msm_gpu *gpu, u32 pasid) return zap_shader_load_mdt(gpu, adreno_gpu->info->zapfw, pasid); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev) { return adreno_iommu_create_vm(gpu, pdev, 0); } -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks) { struct iommu_domain_geometry *geometry; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; u64 start, size; mmu = msm_iommu_gpu_new(&pdev->dev, gpu, quirks); @@ -274,9 +274,10 @@ void adreno_check_and_reenable_stall(struct adreno_gpu *adreno_gpu) if (!adreno_gpu->stall_enabled && ktime_after(ktime_get(), adreno_gpu->stall_reenable_time) && !READ_ONCE(gpu->crashstate)) { + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; adreno_gpu->stall_enabled = true; - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, true); + mmu->funcs->set_stall(mmu, true); } spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, flags); } @@ -290,6 +291,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, struct adreno_smmu_fault_info *info, const char *block, u32 scratch[4]) { + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); const char *type = "UNKNOWN"; bool do_devcoredump = info && (info->fsr & ARM_SMMU_FSR_SS) && @@ -302,9 +304,10 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, */ spin_lock_irqsave(&adreno_gpu->fault_stall_lock, irq_flags); if (adreno_gpu->stall_enabled) { + adreno_gpu->stall_enabled = false; - gpu->vm->mmu->funcs->set_stall(gpu->vm->mmu, false); + mmu->funcs->set_stall(mmu, false); } adreno_gpu->stall_reenable_time = ktime_add_ms(ktime_get(), 500); spin_unlock_irqrestore(&adreno_gpu->fault_stall_lock, irq_flags); @@ -314,7 +317,7 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, * it now. 
*/ if (!do_devcoredump) { - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); } /* @@ -409,7 +412,7 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, return 0; case MSM_PARAM_FAULTS: if (ctx->vm) - *value = gpu->global_faults + ctx->vm->faults; + *value = gpu->global_faults + to_msm_vm(ctx->vm)->faults; else *value = gpu->global_faults; return 0; @@ -419,12 +422,12 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_VA_START: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_start; + *value = ctx->vm->mm_start; return 0; case MSM_PARAM_VA_SIZE: if (ctx->vm == gpu->vm) return UERR(EINVAL, drm, "requires per-process pgtables"); - *value = ctx->vm->base.mm_range; + *value = ctx->vm->mm_range; return 0; case MSM_PARAM_HIGHEST_BANK_BIT: *value = adreno_gpu->ubwc_config.highest_bank_bit; diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.h b/drivers/gpu/drm/msm/adreno/adreno_gpu.h index bbd7e664286e..e9a63fbd131b 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.h +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.h @@ -644,11 +644,11 @@ void adreno_show_object(struct drm_printer *p, void **ptr, int len, * Common helper function to initialize the default address space for arm-smmu * attached targets */ -struct msm_gem_vm * +struct drm_gpuvm * adreno_create_vm(struct msm_gpu *gpu, struct platform_device *pdev); -struct msm_gem_vm * +struct drm_gpuvm * adreno_iommu_create_vm(struct msm_gpu *gpu, struct platform_device *pdev, unsigned long quirks); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c index 32e208ee946d..3b02f4d1a7a5 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder_phys_wb.c @@ -566,7 +566,7 @@ static void dpu_encoder_phys_wb_prepare_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { const struct msm_format *format; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct dpu_hw_wb_cfg *wb_cfg; int ret; struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); @@ -619,7 +619,7 @@ static void dpu_encoder_phys_wb_cleanup_wb_job(struct dpu_encoder_phys *phys_enc struct drm_writeback_job *job) { struct dpu_encoder_phys_wb *wb_enc = to_dpu_encoder_phys_wb(phys_enc); - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; if (!job->fb) return; diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c index d115b79af771..6aef29590a3d 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.c @@ -274,7 +274,7 @@ int dpu_format_populate_plane_sizes( return _dpu_format_populate_plane_sizes_linear(fmt, fb, layout); } -static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, +static void _dpu_format_populate_addrs_ubwc(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -355,7 +355,7 @@ static void _dpu_format_populate_addrs_ubwc(struct msm_gem_vm *vm, } } -static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, +static void _dpu_format_populate_addrs_linear(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { @@ -373,7 +373,7 @@ static void _dpu_format_populate_addrs_linear(struct msm_gem_vm *vm, * @fb: framebuffer pointer * @layout: format layout structure to populate */ -void 
dpu_format_populate_addrs(struct msm_gem_vm *vm, +void dpu_format_populate_addrs(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout) { diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h index 989f3e13c497..127bf4f586db 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_formats.h @@ -31,7 +31,7 @@ static inline bool dpu_find_format(u32 format, const u32 *supported_formats, return false; } -void dpu_format_populate_addrs(struct msm_gem_vm *vm, +void dpu_format_populate_addrs(struct drm_gpuvm *vm, struct drm_framebuffer *fb, struct dpu_hw_fmt_layout *layout); diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c index bb5db6da636a..a9cd215cfd33 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c @@ -1098,17 +1098,17 @@ static void _dpu_kms_mmu_destroy(struct dpu_kms *dpu_kms) if (!dpu_kms->base.vm) return; - mmu = dpu_kms->base.vm->mmu; + mmu = to_msm_vm(dpu_kms->base.vm)->mmu; mmu->funcs->detach(mmu); - msm_gem_vm_put(dpu_kms->base.vm); + drm_gpuvm_put(dpu_kms->base.vm); dpu_kms->base.vm = NULL; } static int _dpu_kms_mmu_init(struct dpu_kms *dpu_kms) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; vm = msm_kms_init_vm(dpu_kms->dev); if (IS_ERR(vm)) diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h index 3578f52048a5..fbf9c1fd6cfb 100644 --- a/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h +++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_plane.h @@ -34,7 +34,7 @@ */ struct dpu_plane_state { struct drm_plane_state base; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct dpu_sw_pipe pipe; struct dpu_sw_pipe r_pipe; struct dpu_sw_pipe_cfg pipe_cfg; diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c index d5b5628bee24..9326ed3aab04 100644 --- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c @@ -120,15 +120,16 @@ static void mdp4_destroy(struct msm_kms *kms) { struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms)); struct device *dev = mdp4_kms->dev->dev; - struct msm_gem_vm *vm = kms->vm; if (mdp4_kms->blank_cursor_iova) msm_gem_unpin_iova(mdp4_kms->blank_cursor_bo, kms->vm); drm_gem_object_put(mdp4_kms->blank_cursor_bo); - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } if (mdp4_kms->rpm_enabled) @@ -380,7 +381,7 @@ static int mdp4_kms_init(struct drm_device *dev) struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(priv->kms)); struct msm_kms *kms = NULL; struct msm_mmu *mmu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int ret; u32 major, minor; unsigned long max_clk; diff --git a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c index 9dca0385a42d..b6e6bd1f95ee 100644 --- a/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c +++ b/drivers/gpu/drm/msm/disp/mdp5/mdp5_kms.c @@ -198,11 +198,12 @@ static void mdp5_destroy(struct mdp5_kms *mdp5_kms); static void mdp5_kms_destroy(struct msm_kms *kms) { struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms)); - struct msm_gem_vm *vm = kms->vm; - if (vm) { - vm->mmu->funcs->detach(vm->mmu); - msm_gem_vm_put(vm); + if (kms->vm) { + struct msm_mmu *mmu = to_msm_vm(kms->vm)->mmu; + + mmu->funcs->detach(mmu); + drm_gpuvm_put(kms->vm); } 
mdp_kms_destroy(&mdp5_kms->base); @@ -500,7 +501,7 @@ static int mdp5_kms_init(struct drm_device *dev) struct mdp5_kms *mdp5_kms; struct mdp5_cfg *config; struct msm_kms *kms = priv->kms; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; int i, ret; ret = mdp5_init(to_platform_device(dev->dev), dev); diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c index 16335ebd21e4..2d1699b7dc93 100644 --- a/drivers/gpu/drm/msm/dsi/dsi_host.c +++ b/drivers/gpu/drm/msm/dsi/dsi_host.c @@ -143,7 +143,7 @@ struct msm_dsi_host { /* DSI 6G TX buffer*/ struct drm_gem_object *tx_gem_obj; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* DSI v2 TX buffer */ void *tx_buf; @@ -1146,7 +1146,7 @@ int dsi_tx_buf_alloc_6g(struct msm_dsi_host *msm_host, int size) uint64_t iova; u8 *data; - msm_host->vm = msm_gem_vm_get(priv->kms->vm); + msm_host->vm = drm_gpuvm_get(priv->kms->vm); data = msm_gem_kernel_new(dev, size, MSM_BO_WC, msm_host->vm, @@ -1194,7 +1194,7 @@ void msm_dsi_tx_buf_free(struct mipi_dsi_host *host) if (msm_host->tx_gem_obj) { msm_gem_kernel_put(msm_host->tx_gem_obj, msm_host->vm); - msm_gem_vm_put(msm_host->vm); + drm_gpuvm_put(msm_host->vm); msm_host->tx_gem_obj = NULL; msm_host->vm = NULL; } diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index ad509403f072..b77fd2c531c3 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -48,8 +48,6 @@ struct msm_rd_state; struct msm_perf_state; struct msm_gem_submit; struct msm_fence_context; -struct msm_gem_vm; -struct msm_gem_vma; struct msm_disp_state; #define MAX_CRTCS 8 @@ -230,7 +228,7 @@ void msm_crtc_disable_vblank(struct drm_crtc *crtc); int msm_register_mmu(struct drm_device *dev, struct msm_mmu *mmu); void msm_unregister_mmu(struct drm_device *dev, struct msm_mmu *mmu); -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev); +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, @@ -251,13 +249,14 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev, int msm_gem_prime_pin(struct drm_gem_object *obj); void msm_gem_prime_unpin(struct drm_gem_object *obj); -int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, bool needs_dirtyfb); -void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, bool needed_dirtyfb); -uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, int plane); -struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, int plane); +int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needs_dirtyfb); +void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needed_dirtyfb); +uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + int plane); +struct drm_gem_object *msm_framebuffer_bo(struct drm_framebuffer *fb, + int plane); const struct msm_format *msm_framebuffer_format(struct drm_framebuffer *fb); struct drm_framebuffer *msm_framebuffer_create(struct drm_device *dev, struct drm_file *file, const struct drm_mode_fb_cmd2 *mode_cmd); diff --git a/drivers/gpu/drm/msm/msm_fb.c b/drivers/gpu/drm/msm/msm_fb.c index 6df318b73534..d267aa1cb218 100644 --- a/drivers/gpu/drm/msm/msm_fb.c +++ b/drivers/gpu/drm/msm/msm_fb.c @@ -75,9 +75,8 @@ void msm_framebuffer_describe(struct drm_framebuffer *fb, struct seq_file *m) /* prepare/pin all the fb's bo's 
for scanout. */ -int msm_framebuffer_prepare(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, - bool needs_dirtyfb) +int msm_framebuffer_prepare(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needs_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int ret, i, n = fb->format->num_planes; @@ -98,9 +97,8 @@ int msm_framebuffer_prepare(struct drm_framebuffer *fb, return 0; } -void msm_framebuffer_cleanup(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, - bool needed_dirtyfb) +void msm_framebuffer_cleanup(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + bool needed_dirtyfb) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); int i, n = fb->format->num_planes; @@ -115,8 +113,8 @@ void msm_framebuffer_cleanup(struct drm_framebuffer *fb, memset(msm_fb->iova, 0, sizeof(msm_fb->iova)); } -uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, - struct msm_gem_vm *vm, int plane) +uint32_t msm_framebuffer_iova(struct drm_framebuffer *fb, struct drm_gpuvm *vm, + int plane) { struct msm_framebuffer *msm_fb = to_msm_framebuffer(fb); return msm_fb->iova[plane] + fb->offsets[plane]; diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index b7055805a5dd..81500066369f 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -72,7 +72,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) msecs_to_jiffies(1000)); msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); - put_iova_spaces(obj, &ctx->vm->base, true); + put_iova_spaces(obj, ctx->vm, true); drm_exec_fini(&exec); /* drop locks */ } @@ -368,8 +368,8 @@ uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj) return offset; } -static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { struct drm_gpuvm_bo *vm_bo; @@ -379,13 +379,13 @@ static struct msm_gem_vma *lookup_vma(struct drm_gem_object *obj, struct drm_gpuva *vma; drm_gpuvm_bo_for_each_va (vma, vm_bo) { - if (vma->vm == &vm->base) { + if (vma->vm == vm) { /* lookup_vma() should only be used in paths * with at most one vma per vm */ GEM_WARN_ON(!list_is_singular(&vm_bo->list.gpuva)); - return to_msm_vma(vma); + return vma; } } } @@ -415,22 +415,20 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) drm_gpuvm_bo_get(vm_bo); drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - struct msm_gem_vma *msm_vma = to_msm_vma(vma); - - msm_gem_vma_purge(msm_vma); + msm_gem_vma_purge(vma); if (close) - msm_gem_vma_close(msm_vma); + msm_gem_vma_close(vma); } drm_gpuvm_bo_put(vm_bo); } } -static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, - u64 range_start, u64 range_end) +static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm, u64 range_start, + u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; msm_gem_assert_locked(obj); @@ -439,14 +437,14 @@ static struct msm_gem_vma *get_vma_locked(struct drm_gem_object *obj, if (!vma) { vma = msm_gem_vma_new(vm, obj, range_start, range_end); } else { - GEM_WARN_ON(vma->base.va.addr < range_start); - GEM_WARN_ON((vma->base.va.addr + obj->size) > range_end); + GEM_WARN_ON(vma->va.addr < range_start); + GEM_WARN_ON((vma->va.addr + obj->size) > range_end); } return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma) +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct 
drm_gpuva *vma) { struct msm_gem_object *msm_obj = to_msm_bo(obj); struct page **pages; @@ -503,17 +501,17 @@ void msm_gem_unpin_active(struct drm_gem_object *obj) update_lru_active(obj); } -struct msm_gem_vma *msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm) { return get_vma_locked(obj, vm, 0, U64_MAX); } static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; int ret; msm_gem_assert_locked(obj); @@ -524,7 +522,7 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, ret = msm_gem_pin_vma_locked(obj, vma); if (!ret) { - *iova = vma->base.va.addr; + *iova = vma->va.addr; pin_obj_locked(obj); } @@ -536,8 +534,8 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj, * limits iova to specified range (in pages) */ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end) + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end) { struct drm_exec exec; int ret; @@ -550,8 +548,8 @@ int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, } /* get iova and pin it. Should have a matching put */ -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { return msm_gem_get_and_pin_iova_range(obj, vm, iova, 0, U64_MAX); } @@ -560,10 +558,10 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, * Get an iova but don't pin it. Doesn't need a put because iovas are currently * valid for the life of the object */ -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova) +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; struct drm_exec exec; int ret = 0; @@ -572,7 +570,7 @@ int msm_gem_get_iova(struct drm_gem_object *obj, if (IS_ERR(vma)) { ret = PTR_ERR(vma); } else { - *iova = vma->base.va.addr; + *iova = vma->va.addr; } drm_exec_fini(&exec); /* drop locks */ @@ -580,9 +578,9 @@ int msm_gem_get_iova(struct drm_gem_object *obj, } static int clear_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { - struct msm_gem_vma *vma = lookup_vma(obj, vm); + struct drm_gpuva *vma = lookup_vma(obj, vm); if (!vma) return 0; @@ -601,7 +599,7 @@ static int clear_iova(struct drm_gem_object *obj, * Setting an iova of zero will clear the vma. 
*/ int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova) + struct drm_gpuvm *vm, uint64_t iova) { struct drm_exec exec; int ret = 0; @@ -610,11 +608,11 @@ int msm_gem_set_iova(struct drm_gem_object *obj, if (!iova) { ret = clear_iova(obj, vm); } else { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; vma = get_vma_locked(obj, vm, iova, iova + obj->size); if (IS_ERR(vma)) { ret = PTR_ERR(vma); - } else if (GEM_WARN_ON(vma->base.va.addr != iova)) { + } else if (GEM_WARN_ON(vma->va.addr != iova)) { clear_iova(obj, vm); ret = -EBUSY; } @@ -629,10 +627,9 @@ int msm_gem_set_iova(struct drm_gem_object *obj, * purged until something else (shrinker, mm_notifier, destroy, etc) decides * to get rid of it */ -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm) +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm) { - struct msm_gem_vma *vma; + struct drm_gpuva *vma; msm_gem_lock(obj); vma = lookup_vma(obj, vm); @@ -1260,9 +1257,9 @@ struct drm_gem_object *msm_gem_import(struct drm_device *dev, return ERR_PTR(ret); } -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova) +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova) { void *vaddr; struct drm_gem_object *obj = msm_gem_new(dev, size, flags); @@ -1295,8 +1292,7 @@ void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, } -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm) +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm) { if (IS_ERR_OR_NULL(bo)) return; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 36a846e9b943..813e886bc43f 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -79,12 +79,7 @@ struct msm_gem_vm { }; #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base) -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm); - -void msm_gem_vm_put(struct msm_gem_vm *vm); - -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); @@ -113,12 +108,12 @@ struct msm_gem_vma { }; #define to_msm_vma(x) container_of(x, struct msm_gem_vma, base) -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, u64 range_start, u64 range_end); -void msm_gem_vma_purge(struct msm_gem_vma *vma); -int msm_gem_vma_map(struct msm_gem_vma *vma, int prot, struct sg_table *sgt, int size); -void msm_gem_vma_close(struct msm_gem_vma *vma); +void msm_gem_vma_purge(struct drm_gpuva *vma); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size); +void msm_gem_vma_close(struct drm_gpuva *vma); struct msm_gem_object { struct drm_gem_object base; @@ -163,22 +158,21 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct msm_gem_vma *vma); +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj); -struct msm_gem_vma 
*msm_gem_get_vma_locked(struct drm_gem_object *obj, - struct msm_gem_vm *vm); -int msm_gem_get_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -int msm_gem_set_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t iova); +struct drm_gpuva *msm_gem_get_vma_locked(struct drm_gem_object *obj, + struct drm_gpuvm *vm); +int msm_gem_get_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +int msm_gem_set_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t iova); int msm_gem_get_and_pin_iova_range(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova, - u64 range_start, u64 range_end); -int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm, uint64_t *iova); -void msm_gem_unpin_iova(struct drm_gem_object *obj, - struct msm_gem_vm *vm); + struct drm_gpuvm *vm, uint64_t *iova, + u64 range_start, u64 range_end); +int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, + uint64_t *iova); +void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); @@ -199,11 +193,10 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file, uint32_t size, uint32_t flags, uint32_t *handle, char *name); struct drm_gem_object *msm_gem_new(struct drm_device *dev, uint32_t size, uint32_t flags); -void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, - uint32_t flags, struct msm_gem_vm *vm, - struct drm_gem_object **bo, uint64_t *iova); -void msm_gem_kernel_put(struct drm_gem_object *bo, - struct msm_gem_vm *vm); +void *msm_gem_kernel_new(struct drm_device *dev, uint32_t size, uint32_t flags, + struct drm_gpuvm *vm, struct drm_gem_object **bo, + uint64_t *iova); +void msm_gem_kernel_put(struct drm_gem_object *bo, struct drm_gpuvm *vm); struct drm_gem_object *msm_gem_import(struct drm_device *dev, struct dma_buf *dmabuf, struct sg_table *sgt); __printf(2, 3) @@ -254,14 +247,14 @@ msm_gem_unlock(struct drm_gem_object *obj) static inline int msm_gem_lock_vm_and_obj(struct drm_exec *exec, struct drm_gem_object *obj, - struct msm_gem_vm *vm) + struct drm_gpuvm *vm) { int ret = 0; drm_exec_init(exec, 0, 2); drm_exec_until_all_locked (exec) { - ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(&vm->base)); - if (!ret && (obj->resv != drm_gpuvm_resv(&vm->base))) + ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(vm)); + if (!ret && (obj->resv != drm_gpuvm_resv(vm))) ret = drm_exec_lock_obj(exec, obj); drm_exec_retry_on_contention(exec); if (GEM_WARN_ON(ret)) @@ -328,7 +321,7 @@ struct msm_gem_submit { struct kref ref; struct drm_device *dev; struct msm_gpu *gpu; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct list_head node; /* node in ring submit list */ struct drm_exec exec; uint32_t seqno; /* Sequence number of the submit on the ring */ diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 6924d03026ba..c4569e7b5a02 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -264,7 +264,7 @@ static int submit_lock_objects(struct msm_gem_submit *submit) drm_exec_until_all_locked (&submit->exec) { ret = drm_exec_lock_obj(&submit->exec, - drm_gpuvm_resv_obj(&submit->vm->base)); + drm_gpuvm_resv_obj(submit->vm)); drm_exec_retry_on_contention(&submit->exec); if (ret) goto error; @@ -315,7 
+315,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) for (i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; - struct msm_gem_vma *vma; + struct drm_gpuva *vma; /* if locking succeeded, pin bo: */ vma = msm_gem_get_vma_locked(obj, submit->vm); @@ -328,7 +328,7 @@ static int submit_pin_objects(struct msm_gem_submit *submit) if (ret) break; - submit->bos[i].iova = vma->base.va.addr; + submit->bos[i].iova = vma->va.addr; } /* diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index e294e7f6e723..4963306e83de 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -20,52 +20,38 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } - -void msm_gem_vm_put(struct msm_gem_vm *vm) -{ - if (vm) - drm_gpuvm_put(&vm->base); -} - -struct msm_gem_vm * -msm_gem_vm_get(struct msm_gem_vm *vm) -{ - if (!IS_ERR_OR_NULL(vm)) - drm_gpuvm_get(&vm->base); - - return vm; -} - /* Actually unmap memory for the vma */ -void msm_gem_vma_purge(struct msm_gem_vma *vma) +void msm_gem_vma_purge(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); - unsigned size = vma->base.va.range; + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + unsigned size = vma->va.range; /* Don't do anything if the memory isn't mapped */ - if (!vma->mapped) + if (!msm_vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->base.va.addr, size); + vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); - vma->mapped = false; + msm_vma->mapped = false; } /* Map and pin vma: */ int -msm_gem_vma_map(struct msm_gem_vma *vma, int prot, +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); int ret; - if (GEM_WARN_ON(!vma->base.va.addr)) + if (GEM_WARN_ON(!vma->va.addr)) return -EINVAL; - if (vma->mapped) + if (msm_vma->mapped) return 0; - vma->mapped = true; + msm_vma->mapped = true; /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold @@ -76,38 +62,40 @@ msm_gem_vma_map(struct msm_gem_vma *vma, int prot, * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->base.va.addr, sgt, size, prot); + ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); if (ret) { - vma->mapped = false; + msm_vma->mapped = false; } return ret; } /* Close an iova. 
Warn if it is still in use */ -void msm_gem_vma_close(struct msm_gem_vma *vma) +void msm_gem_vma_close(struct drm_gpuva *vma) { - struct msm_gem_vm *vm = to_msm_vm(vma->base.vm); + struct msm_gem_vm *vm = to_msm_vm(vma->vm); + struct msm_gem_vma *msm_vma = to_msm_vma(vma); - GEM_WARN_ON(vma->mapped); + GEM_WARN_ON(msm_vma->mapped); drm_gpuvm_resv_assert_held(&vm->base); - if (vma->base.va.addr) - drm_mm_remove_node(&vma->node); + if (vma->va.addr && vm->managed) + drm_mm_remove_node(&msm_vma->node); - drm_gpuva_remove(&vma->base); - drm_gpuva_unlink(&vma->base); + drm_gpuva_remove(vma); + drm_gpuva_unlink(vma); kfree(vma); } /* Create a new vma and allocate an iova for it */ -struct msm_gem_vma * -msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, +struct drm_gpuva * +msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, u64 range_start, u64 range_end) { + struct msm_gem_vm *vm = to_msm_vm(gpuvm); struct drm_gpuvm_bo *vm_bo; struct msm_gem_vma *vma; int ret; @@ -149,7 +137,7 @@ msm_gem_vma_new(struct msm_gem_vm *vm, struct drm_gem_object *obj, drm_gpuva_link(&vma->base, vm_bo); GEM_WARN_ON(drm_gpuvm_bo_put(vm_bo)); - return vma; + return &vma->base; err_va_remove: drm_gpuva_remove(&vma->base); @@ -179,7 +167,7 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { * handles virtual address allocation, and both async and sync operations * are supported. */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed) { @@ -216,7 +204,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_mm_init(&vm->mm, va_start, va_size); - return vm; + return &vm->base; err_free_vm: kfree(vm); diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index b30800f80120..82e33aa1ccd0 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -283,7 +283,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, if (state->fault_info.ttbr0) { struct msm_gpu_fault_info *info = &state->fault_info; - struct msm_mmu *mmu = submit->vm->mmu; + struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu; msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, &info->asid); @@ -387,7 +387,7 @@ static void recover_worker(struct kthread_work *work) /* Increment the fault counts */ submit->queue->faults++; if (submit->vm) - submit->vm->faults++; + to_msm_vm(submit->vm)->faults++; get_comm_cmdline(submit, &comm, &cmd); @@ -463,6 +463,7 @@ static void fault_worker(struct kthread_work *work) { struct msm_gpu *gpu = container_of(work, struct msm_gpu, fault_work); struct msm_gem_submit *submit; + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; struct msm_ringbuffer *cur_ring = gpu->funcs->active_ring(gpu); char *comm = NULL, *cmd = NULL; @@ -492,7 +493,7 @@ static void fault_worker(struct kthread_work *work) resume_smmu: memset(&gpu->fault_info, 0, sizeof(gpu->fault_info)); - gpu->vm->mmu->funcs->resume_translation(gpu->vm->mmu); + mmu->funcs->resume_translation(mmu); mutex_unlock(&gpu->lock); } @@ -829,10 +830,11 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu) } /* Return a new address space for a msm_drm_private instance */ -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task) { - struct msm_gem_vm *vm = NULL; + struct drm_gpuvm *vm = NULL; + if (!gpu) return NULL; @@ -843,11 +845,11 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct 
task_struct *task) if (gpu->funcs->create_private_vm) { vm = gpu->funcs->create_private_vm(gpu); if (!IS_ERR(vm)) - vm->pid = get_pid(task_pid(task)); + to_msm_vm(vm)->pid = get_pid(task_pid(task)); } if (IS_ERR_OR_NULL(vm)) - vm = msm_gem_vm_get(gpu->vm); + vm = drm_gpuvm_get(gpu->vm); return vm; } @@ -1016,8 +1018,9 @@ void msm_gpu_cleanup(struct msm_gpu *gpu) msm_gem_kernel_put(gpu->memptrs_bo, gpu->vm); if (!IS_ERR_OR_NULL(gpu->vm)) { - gpu->vm->mmu->funcs->detach(gpu->vm->mmu); - msm_gem_vm_put(gpu->vm); + struct msm_mmu *mmu = to_msm_vm(gpu->vm)->mmu; + mmu->funcs->detach(mmu); + drm_gpuvm_put(gpu->vm); } if (gpu->worker) { diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 1f26ba00f773..d8425e6d7f5a 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -78,8 +78,8 @@ struct msm_gpu_funcs { /* note: gpu_set_freq() can assume that we have been pm_resumed */ void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp, bool suspended); - struct msm_gem_vm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); - struct msm_gem_vm *(*create_private_vm)(struct msm_gpu *gpu); + struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev); + struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu); uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring); /** @@ -234,7 +234,7 @@ struct msm_gpu { void __iomem *mmio; int irq; - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* Power Control: */ struct regulator *gpu_reg, *gpu_cx; @@ -363,7 +363,7 @@ struct msm_context { int queueid; /** @vm: the per-process GPU address-space */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /** @kref: the reference count */ struct kref ref; @@ -673,7 +673,7 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev, struct msm_gpu *gpu, const struct msm_gpu_funcs *funcs, const char *name, struct msm_gpu_config *config); -struct msm_gem_vm * +struct drm_gpuvm * msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task); void msm_gpu_cleanup(struct msm_gpu *gpu); diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c index 6458bd82a0cd..e82b8569a468 100644 --- a/drivers/gpu/drm/msm/msm_kms.c +++ b/drivers/gpu/drm/msm/msm_kms.c @@ -176,9 +176,9 @@ static int msm_kms_fault_handler(void *arg, unsigned long iova, int flags, void return -ENOSYS; } -struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) +struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev) { - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; struct msm_mmu *mmu; struct device *mdp_dev = dev->dev; struct device *mdss_dev = mdp_dev->parent; @@ -212,7 +212,7 @@ struct msm_gem_vm *msm_kms_init_vm(struct drm_device *dev) return vm; } - msm_mmu_set_fault_handler(vm->mmu, kms, msm_kms_fault_handler); + msm_mmu_set_fault_handler(to_msm_vm(vm)->mmu, kms, msm_kms_fault_handler); return vm; } diff --git a/drivers/gpu/drm/msm/msm_kms.h b/drivers/gpu/drm/msm/msm_kms.h index f45996a03e15..7cdb2eb67700 100644 --- a/drivers/gpu/drm/msm/msm_kms.h +++ b/drivers/gpu/drm/msm/msm_kms.h @@ -139,7 +139,7 @@ struct msm_kms { atomic_t fault_snapshot_capture; /* mapper-id used to request GEM buffer mapped for scanout: */ - struct msm_gem_vm *vm; + struct drm_gpuvm *vm; /* disp snapshot support */ struct kthread_worker *dump_worker; diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 6298233c3568..8ced49c7557b 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ 
b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -59,7 +59,7 @@ void __msm_context_destroy(struct kref *kref) kfree(ctx->entities[i]); } - msm_gem_vm_put(ctx->vm); + drm_gpuvm_put(ctx->vm); kfree(ctx->comm); kfree(ctx->cmdline); kfree(ctx);
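The heart of the conversion above is that generic struct drm_gpuvm / struct drm_gpuva pointers now flow through the driver, and msm-specific state is recovered via the to_msm_vm()/to_msm_vma() container_of() helpers. A minimal sketch of that pattern, using a hypothetical helper name that is not part of the patch:

static struct msm_mmu *example_vm_to_mmu(struct drm_gpuvm *vm)
{
	/* to_msm_vm() is container_of(vm, struct msm_gem_vm, base) */
	return to_msm_vm(vm)->mmu;
}

This mirrors what the patch does in, e.g., msm_gpu_cleanup() and fault_worker(), where gpu->vm->mmu becomes to_msm_vm(gpu->vm)->mmu.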
From patchwork Mon May 19 17:57:13 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891133 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 16/40] drm/msm: Split out helper to get iommu prot flags Date: Mon, 19 May 2025 10:57:13 -0700 Message-ID: <20250519175755.13037-4-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark We'll re-use this in the vm_bind path.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 12 ++++++++++-- drivers/gpu/drm/msm/msm_gem.h | 1 + 2 files changed, 11 insertions(+), 2 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 81500066369f..5b8b9c1d6c74 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -444,10 +444,9 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, return vma; } -int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +int msm_gem_prot(struct drm_gem_object *obj) { struct msm_gem_object *msm_obj = to_msm_bo(obj); - struct page **pages; int prot = IOMMU_READ; if (!(msm_obj->flags & MSM_BO_GPU_READONLY)) @@ -463,6 +462,15 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) else if (prot == 2) prot |= IOMMU_USE_LLC_NWA; + return prot; +} + +int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) +{ + struct msm_gem_object *msm_obj = to_msm_bo(obj); + struct page **pages; + int prot = msm_gem_prot(obj); + msm_gem_assert_locked(obj); pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 813e886bc43f..3a853fcb8944 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -158,6 +158,7 @@ struct msm_gem_object { #define to_msm_bo(x) container_of(x, struct msm_gem_object, base) uint64_t msm_gem_mmap_offset(struct drm_gem_object *obj); +int msm_gem_prot(struct drm_gem_object *obj); int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma); void msm_gem_unpin_locked(struct drm_gem_object *obj); void msm_gem_unpin_active(struct drm_gem_object *obj);
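To see the intended reuse: with the prot-flag computation split out, a VM_BIND-style caller could map a vma without going through the pin path. A hedged sketch, assuming the current msm_gem_vma_map() signature (which still takes a size argument at this point in the series); example_map_vma() is a hypothetical name, not part of the patch:

static int example_map_vma(struct drm_gem_object *obj,
			   struct drm_gpuva *vma, struct sg_table *sgt)
{
	/* shared MSM_BO_GPU_READONLY / cache-mode logic, no duplication */
	int prot = msm_gem_prot(obj);

	return msm_gem_vma_map(vma, prot, sgt, obj->size);
}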
header.i=@gmail.com header.b="MUQZXIo6" Received: by mail-pf1-f177.google.com with SMTP id d2e1a72fcca58-742c9563fafso1563352b3a.0; Mon, 19 May 2025 10:58:17 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20230601; t=1747677497; x=1748282297; darn=vger.kernel.org; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9BjU8/FFlF6LUN+rEjh2QVKSTUrbmNWhpuo4RChNOZU=; b=MUQZXIo6Dn6kJv+HHYm11jOPTIPBYuWlJ6n3T0+gJyrUrYAEYrV1/H/JqOu3NtL4Lr O3E7IFXHUiT1+s5kXFyueVfOS6MFzAWLls4gKqI9MMYOqzuUapXm5JTqU4j8CuYPqWuw lW7/IoWHmBAo+vrWpgUfeFoNdd17C2bDSKwmroexK3BzOfdxxj4HfTZaeWKBTo0znnqG GczpBs+cR6dkn+icvHEGpbtZ6iGHFL3SwNqhw3cddqt1SOtTz5X1B6l+F5HGOuyxoA3L LrW/dvenerDojMmf5KRpPyaWzv5MN8cfw8sVmdHX9bpzrf1HbrdC0EEFY7TMZNHHNDWb +h9Q== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1747677497; x=1748282297; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9BjU8/FFlF6LUN+rEjh2QVKSTUrbmNWhpuo4RChNOZU=; b=YwjWfwT673s9PZuyuM5wSQLxJgEPUkKUNo01vWL/JNc/2DyPR1s5G+c/UpxKv3xWAA q+ZaScXjFUaMTpOgvkw86pRw3y5ZpB1ZqVfB1eDlE3RcUSKO5E+FKHMqu9WQpJoYqVR/ yRlp2yw/YHEwR7v8gOMaGRtiDzQomj/MhEIOOueywqB+DflfIvZgxBi8cWD/NOMSTyEB yugtHLCkDUKwvh6wP+nXbp3W6dY/fdTAFW17xvsRcvlX5ih5qAlO7AXEjCftOg1nMqPW 01pN1e8Sx/JpymUKmebdLYnUdSH5ARgJ+6Wz/mw2iBdDyoDp0YIVGjzgPIW0VwFF9VIx wbvA== X-Forwarded-Encrypted: i=1; AJvYcCUX1oX5rTslMEAoj5hfjf5WK9oCMt4k5md3COAXMJC7w8kNLVzeV+d6d8CWMwJwgoADFrDnZX2DSW2DMA61@vger.kernel.org, AJvYcCXV7mJcwWprblIAKAtS4iTf12qFjVYfzmRjEwEBYVZ0t9D+YjJm9iPmslYLxG7LJ7RDhdbP75NrYY8p0EgV@vger.kernel.org X-Gm-Message-State: AOJu0YxjKfAzfD7T85GKCITqz4bb+4nrDNT6ShKbgOc/ZerQyAsCagKG Lt7/v3wExEaLvbu9r1iUcbZ4f5QvWfU6axJZ03Qv1FoXXdqhWuaiC9BK X-Gm-Gg: ASbGncu/ZtOSokhKy+/kbrdzT+uDDaaPINkki2ami01k5HRUsuqTsjsULGjfaM893q1 ui88kLK3tWjH7wjs/rt8f1JbzGG4jGfLTupHluOP6NBw/02R0vbG7Y6cVcCJaYkjFwekvP2XVmq IwAwOXEVDJe+wHe4DW/azB7gstfPKnP2rCoszwYtkWOwoSwzMf3SvF7u7rBUUFHvP66PKJ0668N ZV2FbRAzHz/7hLgOE89KMNJPaXNtwjyNHwW8A7OcquSXfDXgQqiin9rIIjB9NX7TMkuhwxoNGPr +PfVOA+0ATc4GGMt0OaAjQUzR6+VuJhOfBKq0vrtj2zwqlp3WkgPdlbQQShvUhQUZLYtgosu3ui J43z8f6Y83ygfNHrEupMrSz1YUA== X-Google-Smtp-Source: AGHT+IGH5bHi/k5GChcfJp8+Q5WeXGQ2NP9IwpNxqiWaxSfUorhIXwoC+t+SQc9S0+I9Ne8OznRyTA== X-Received: by 2002:a05:6a21:4ccc:b0:1f5:6d00:ba05 with SMTP id adf61e73a8af0-216219f8fc6mr22397262637.38.1747677496884; Mon, 19 May 2025 10:58:16 -0700 (PDT) Received: from localhost ([2a00:79e0:3e00:2601:3afc:446b:f0df:eadc]) by smtp.gmail.com with ESMTPSA id 41be03b00d2f7-b26eaf6f48esm6509449a12.27.2025.05.19.10.58.16 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 19 May 2025 10:58:16 -0700 (PDT) From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 17/40] drm/msm: Add mmu support for non-zero offset Date: Mon, 19 May 2025 10:57:14 -0700 Message-ID: <20250519175755.13037-5-robdclark@gmail.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: 
From: Rob Clark This only needs to be supported for the iopgtable MMU; the other cases are either used only for kernel-managed mappings (where the offset is always zero) or are devices which do not support sparse bindings. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/a2xx_gpummu.c | 5 ++++- drivers/gpu/drm/msm/msm_gem.c | 4 ++-- drivers/gpu/drm/msm/msm_gem.h | 4 ++-- drivers/gpu/drm/msm/msm_gem_vma.c | 13 +++++++------ drivers/gpu/drm/msm/msm_iommu.c | 22 ++++++++++++++++++++-- drivers/gpu/drm/msm/msm_mmu.h | 2 +- 6 files changed, 36 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c index 39641551eeb6..6124336af2ec 100644 --- a/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c +++ b/drivers/gpu/drm/msm/adreno/a2xx_gpummu.c @@ -29,13 +29,16 @@ static void a2xx_gpummu_detach(struct msm_mmu *mmu) } static int a2xx_gpummu_map(struct msm_mmu *mmu, uint64_t iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct a2xx_gpummu *gpummu = to_a2xx_gpummu(mmu); unsigned idx = (iova - GPUMMU_VA_START) / GPUMMU_PAGE_SIZE; struct sg_dma_page_iter dma_iter; unsigned prot_bits = 0; + WARN_ON(off != 0); + if (prot & IOMMU_WRITE) prot_bits |= 1; if (prot & IOMMU_READ) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 5b8b9c1d6c74..738620603d2c 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -435,7 +435,7 @@ static struct drm_gpuva *get_vma_locked(struct drm_gem_object *obj, vma = lookup_vma(obj, vm); if (!vma) { - vma = msm_gem_vma_new(vm, obj, range_start, range_end); + vma = msm_gem_vma_new(vm, obj, 0, range_start, range_end); } else { GEM_WARN_ON(vma->va.addr < range_start); GEM_WARN_ON((vma->va.addr + obj->size) > range_end); @@ -477,7 +477,7 @@ int msm_gem_pin_vma_locked(struct drm_gem_object *obj, struct drm_gpuva *vma) if (IS_ERR(pages)) return PTR_ERR(pages); - return msm_gem_vma_map(vma, prot, msm_obj->sgt, obj->size); + return msm_gem_vma_map(vma, prot, msm_obj->sgt); } void msm_gem_unpin_locked(struct drm_gem_object *obj) diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 3a853fcb8944..0d755b9d5f26 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -110,9 +110,9 @@ struct msm_gem_vma { struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, - u64 range_start, u64 range_end); + u64 offset, u64 range_start, u64 range_end); void msm_gem_vma_purge(struct drm_gpuva *vma); -int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt, int size); +int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt); void msm_gem_vma_close(struct drm_gpuva *vma); struct msm_gem_object { diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 4963306e83de..109b985e1d0f 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -38,8 +38,7 @@ void msm_gem_vma_purge(struct drm_gpuva *vma) /* Map and pin vma: */ int -msm_gem_vma_map(struct drm_gpuva *vma, int prot, - struct sg_table *sgt, int size) +msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); struct msm_gem_vm *vm = to_msm_vm(vma->vm); @@ -62,8 +61,9 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, * Revisit this if we can come up
with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. */ - ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, size, prot); - + ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, + vma->gem.offset, vma->va.range, + prot); if (ret) { msm_vma->mapped = false; } @@ -93,7 +93,7 @@ void msm_gem_vma_close(struct drm_gpuva *vma) /* Create a new vma and allocate an iova for it */ struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, - u64 range_start, u64 range_end) + u64 offset, u64 range_start, u64 range_end) { struct msm_gem_vm *vm = to_msm_vm(gpuvm); struct drm_gpuvm_bo *vm_bo; @@ -107,6 +107,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, return ERR_PTR(-ENOMEM); if (vm->managed) { + BUG_ON(offset != 0); ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, range_start, range_end, 0); @@ -120,7 +121,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, GEM_WARN_ON((range_end - range_start) > obj->size); - drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, 0); + drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset); vma->mapped = false; ret = drm_gpuva_insert(&vm->base, &vma->base); diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index e70088a91283..2fd48e66bc98 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -113,7 +113,8 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, } static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); struct io_pgtable_ops *ops = pagetable->pgtbl_ops; @@ -125,6 +126,19 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, size_t size = sg->length; phys_addr_t phys = sg_phys(sg); + if (!len) + break; + + if (size <= off) { + off -= size; + continue; + } + + phys += off; + size -= off; + size = min_t(size_t, size, len); + off = 0; + while (size) { size_t pgsize, count, mapped = 0; int ret; @@ -140,6 +154,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, phys += mapped; addr += mapped; size -= mapped; + len -= mapped; if (ret) { msm_iommu_pagetable_unmap(mmu, iova, addr - iova); @@ -400,11 +415,14 @@ static void msm_iommu_detach(struct msm_mmu *mmu) } static int msm_iommu_map(struct msm_mmu *mmu, uint64_t iova, - struct sg_table *sgt, size_t len, int prot) + struct sg_table *sgt, size_t off, size_t len, + int prot) { struct msm_iommu *iommu = to_msm_iommu(mmu); size_t ret; + WARN_ON(off != 0); + /* The arm-smmu driver expects the addresses to be sign extended */ if (iova & BIT_ULL(48)) iova |= GENMASK_ULL(63, 49); diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index c33247e459d6..c874852b7331 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -12,7 +12,7 @@ struct msm_mmu_funcs { void (*detach)(struct msm_mmu *mmu); int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt, - size_t len, int prot); + size_t off, size_t len, int prot); int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len); void (*destroy)(struct msm_mmu *mmu); void (*resume_translation)(struct msm_mmu *mmu);
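For intuition, the new off/len arguments let a caller map just a window of a buffer: the iopgtable walk skips off bytes into the sg_table and stops after len bytes. A hedged usage sketch (hypothetical function name and values; note that the a2xx gpummu and the non-pagetable msm_iommu_map() WARN on non-zero off, so this assumes the iopgtable path):

static int example_partial_map(struct msm_mmu *mmu, uint64_t iova,
			       struct sg_table *sgt, int prot)
{
	size_t off = PAGE_SIZE;		/* skip the BO's first page */
	size_t len = 2 * PAGE_SIZE;	/* map the next two pages */

	return mmu->funcs->map(mmu, iova, sgt, off, len, prot);
}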
From patchwork Mon May 19 17:57:15 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891132
From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 18/40] drm/msm: Add PRR support Date: Mon, 19 May 2025 10:57:15 -0700 Message-ID: <20250519175755.13037-6-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark Add PRR (Partially Resident Region) support. PRR is a bypass address which makes GPU writes go to /dev/null and reads return zero. This is used to implement Vulkan sparse residency. To support PRR/NULL mappings, we allocate a page to reserve a physical address which we know will not be used as part of a GEM object, and configure the SMMU to use this address for PRR/NULL mappings.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 10 ++++ drivers/gpu/drm/msm/msm_iommu.c | 62 ++++++++++++++++++++++++- include/uapi/drm/msm_drm.h | 2 + 3 files changed, 73 insertions(+), 1 deletion(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index f6624a246694..e24f627daf37 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -361,6 +361,13 @@ int adreno_fault_handler(struct msm_gpu *gpu, unsigned long iova, int flags, return 0; } +static bool +adreno_smmu_has_prr(struct msm_gpu *gpu) +{ + struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(&gpu->pdev->dev); + return adreno_smmu && adreno_smmu->set_prr_addr; +} + int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, uint32_t param, uint64_t *value, uint32_t *len) { @@ -444,6 +451,9 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx, case MSM_PARAM_UCHE_TRAP_BASE: *value = adreno_gpu->uche_trap_base; return 0; + case MSM_PARAM_HAS_PRR: + *value = adreno_smmu_has_prr(gpu); + return 0; default: return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param); } diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 2fd48e66bc98..756bd55ee94f 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -13,6 +13,7 @@ struct msm_iommu { struct msm_mmu base; struct iommu_domain *domain; atomic_t pagetables; + struct page *prr_page; }; #define to_msm_iommu(x) container_of(x, struct msm_iommu, base) @@ -112,6 +113,36 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, return (size == 0) ? 0 : -EINVAL; } +static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + struct msm_iommu *iommu = to_msm_iommu(pagetable->parent); + phys_addr_t phys = page_to_phys(iommu->prr_page); + u64 addr = iova; + + while (len) { + size_t mapped = 0; + size_t size = PAGE_SIZE; + int ret; + + ret = ops->map_pages(ops, addr, phys, size, 1, prot, GFP_KERNEL, &mapped); + + /* map_pages could fail after mapping some of the pages, + * so update the counters before error handling. 
+ */ + addr += mapped; + len -= mapped; + + if (ret) { + msm_iommu_pagetable_unmap(mmu, iova, addr - iova); + return -EINVAL; + } + } + + return 0; +} + static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, struct sg_table *sgt, size_t off, size_t len, int prot) @@ -122,6 +153,9 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, u64 addr = iova; unsigned int i; + if (!sgt) + return msm_iommu_pagetable_map_prr(mmu, iova, len, prot); + for_each_sgtable_sg(sgt, sg, i) { size_t size = sg->length; phys_addr_t phys = sg_phys(sg); @@ -177,9 +211,16 @@ static void msm_iommu_pagetable_destroy(struct msm_mmu *mmu) * If this is the last attached pagetable for the parent, * disable TTBR0 in the arm-smmu driver */ - if (atomic_dec_return(&iommu->pagetables) == 0) + if (atomic_dec_return(&iommu->pagetables) == 0) { adreno_smmu->set_ttbr0_cfg(adreno_smmu->cookie, NULL); + if (adreno_smmu->set_prr_bit) { + adreno_smmu->set_prr_bit(adreno_smmu->cookie, false); + __free_page(iommu->prr_page); + iommu->prr_page = NULL; + } + } + free_io_pgtable_ops(pagetable->pgtbl_ops); kfree(pagetable); } @@ -336,6 +377,25 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) kfree(pagetable); return ERR_PTR(ret); } + + BUG_ON(iommu->prr_page); + if (adreno_smmu->set_prr_bit) { + /* + * We need a zero'd page for two reasons: + * + * 1) Reserve a known physical address to use when + * mapping NULL / sparsely resident regions + * 2) Read back zero + * + * It appears the hw drops writes to the PRR region + * on the floor, but reads actually return whatever + * is in the PRR page. + */ + iommu->prr_page = alloc_page(GFP_KERNEL | __GFP_ZERO); + adreno_smmu->set_prr_addr(adreno_smmu->cookie, + page_to_phys(iommu->prr_page)); + adreno_smmu->set_prr_bit(adreno_smmu->cookie, true); + } } /* Needed later for TLB flush */ diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 2342cb90857e..5bc5e4526ccf 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -91,6 +91,8 @@ struct drm_msm_timespec { #define MSM_PARAM_UBWC_SWIZZLE 0x12 /* RO */ #define MSM_PARAM_MACROTILE_MODE 0x13 /* RO */ #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */ +/* PRR (Partially Resident Region) is required for sparse residency: */ +#define MSM_PARAM_HAS_PRR 0x15 /* RO */ /* For backwards compat. 
The original support for preemption was based on * a single ring per priority level so # of priority levels equals the #
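With the above in place, a sparse/NULL binding is expressed as a map with no backing sg_table: msm_iommu_pagetable_map() routes a NULL sgt to msm_iommu_pagetable_map_prr(), which points every page at the shared zero'd PRR page. A hedged caller sketch (hypothetical name and prot choice, not part of the patch):

static int example_bind_null_region(struct msm_mmu *mmu, u64 iova, size_t len)
{
	/* sgt == NULL selects the PRR path in msm_iommu_pagetable_map() */
	return mmu->funcs->map(mmu, iova, NULL, 0, len, IOMMU_READ);
}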
From patchwork Mon May 19 17:57:16 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 892062 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 19/40] drm/msm: Rename msm_gem_vma_purge() -> _unmap() Date: Mon, 19 May 2025 10:57:16 -0700 Message-ID: <20250519175755.13037-7-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark This is a more descriptive name.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 4 ++-- drivers/gpu/drm/msm/msm_gem.h | 2 +- drivers/gpu/drm/msm/msm_gem_vma.c | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 738620603d2c..bdcb90a295fc 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -415,7 +415,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) drm_gpuvm_bo_get(vm_bo); drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - msm_gem_vma_purge(vma); + msm_gem_vma_unmap(vma); if (close) msm_gem_vma_close(vma); } @@ -593,7 +593,7 @@ static int clear_iova(struct drm_gem_object *obj, if (!vma) return 0; - msm_gem_vma_purge(vma); + msm_gem_vma_unmap(vma); msm_gem_vma_close(vma); return 0; diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 0d755b9d5f26..da8f92911b7b 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -111,7 +111,7 @@ struct msm_gem_vma { struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, u64 offset, u64 range_start, u64 range_end); -void msm_gem_vma_purge(struct drm_gpuva *vma); +void msm_gem_vma_unmap(struct drm_gpuva *vma); int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt); void msm_gem_vma_close(struct drm_gpuva *vma); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 109b985e1d0f..72667316df51 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -21,7 +21,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) } /* Actually unmap memory for the vma */ -void msm_gem_vma_purge(struct drm_gpuva *vma) +void msm_gem_vma_unmap(struct drm_gpuva *vma) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); struct msm_gem_vm *vm = to_msm_vm(vma->vm);
From patchwork Mon May 19 17:57:17 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891131 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 20/40] drm/msm: Drop queued submits on lastclose() Date: Mon, 19 May 2025 10:57:17 -0700 Message-ID: <20250519175755.13037-8-robdclark@gmail.com> In-Reply-To:
<20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark If we haven't written the submit into the ringbuffer yet, then drop it. The submit still retires through the normal path, to preserve fence signalling order, but we can skip the IBs to the userspace cmdstream. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_gpu.h | 8 ++++++++ drivers/gpu/drm/msm/msm_ringbuffer.c | 6 ++++++ 3 files changed, 15 insertions(+) diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 6ef29bc48bb0..5909720be48d 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -250,6 +250,7 @@ static int msm_open(struct drm_device *dev, struct drm_file *file) static void context_close(struct msm_context *ctx) { + ctx->closed = true; msm_submitqueue_close(ctx); msm_context_put(ctx); } diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index d8425e6d7f5a..bfaec80e5f2d 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -362,6 +362,14 @@ struct msm_context { */ int queueid; + /** + * @closed: The device file associated with this context has been closed. + * + * Once the device is closed, any submits that have not been written + * to the ring buffer are no-op'd. + */ + bool closed; + /** @vm: the per-process GPU address-space */ struct drm_gpuvm *vm; diff --git a/drivers/gpu/drm/msm/msm_ringbuffer.c b/drivers/gpu/drm/msm/msm_ringbuffer.c index bbf8503f6bb5..b8bcd5d9690d 100644 --- a/drivers/gpu/drm/msm/msm_ringbuffer.c +++ b/drivers/gpu/drm/msm/msm_ringbuffer.c @@ -17,6 +17,7 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job) struct msm_fence_context *fctx = submit->ring->fctx; struct msm_gpu *gpu = submit->gpu; struct msm_drm_private *priv = gpu->dev->dev_private; + unsigned nr_cmds = submit->nr_cmds; int i; msm_fence_init(submit->hw_fence, fctx); @@ -36,8 +37,13 @@ static struct dma_fence *msm_job_run(struct drm_sched_job *job) /* TODO move submit path over to using a per-ring lock..
*/ mutex_lock(&gpu->lock); + if (submit->queue->ctx->closed) + submit->nr_cmds = 0; + msm_gpu_submit(gpu, submit); + submit->nr_cmds = nr_cmds; + mutex_unlock(&gpu->lock); return dma_fence_get(submit->hw_fence);
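The pattern worth noting above: a dropped submit is not cancelled. It still flows through msm_gpu_submit() so its hw_fence signals in submission order; only the userspace command emission is skipped by temporarily zeroing nr_cmds. Reduced to an illustrative sketch with hypothetical types and names (not driver code):

static void example_run_job(struct example_job *job)
{
	unsigned saved_nr_cmds = job->nr_cmds;

	if (job->ctx_closed)
		job->nr_cmds = 0;	/* emit no userspace IBs */

	example_hw_submit(job);		/* fence still written and signalled in order */

	job->nr_cmds = saved_nr_cmds;	/* restore for the retire path */
}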
From patchwork Mon May 19 17:57:18 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892061
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
    Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 21/40] drm/msm: Lazily create context VM
Date: Mon, 19 May 2025 10:57:18 -0700
Message-ID: <20250519175755.13037-9-robdclark@gmail.com>

From: Rob Clark

In the next commit, a way for userspace to opt in to a userspace-managed
VM is added. For this to work, we need to defer creation of the VM until
it is needed.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  3 ++-
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 14 +++++++-----
 drivers/gpu/drm/msm/msm_drv.c           | 29 ++++++++++++++++++++-----
 drivers/gpu/drm/msm/msm_gem_submit.c    |  2 +-
 drivers/gpu/drm/msm/msm_gpu.h           |  9 +++++++-
 5 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index c43a443661e4..0d7c2a2eeb8f 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -112,6 +112,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 {
 	bool sysprof = refcount_read(&a6xx_gpu->base.base.sysprof_active) > 1;
 	struct msm_context *ctx = submit->queue->ctx;
+	struct drm_gpuvm *vm = msm_context_vm(submit->dev, ctx);
 	struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
 	phys_addr_t ttbr;
 	u32 asid;
@@ -120,7 +121,7 @@ static void a6xx_set_pagetable(struct a6xx_gpu *a6xx_gpu,
 	if (ctx->seqno == ring->cur_ctx_seqno)
 		return;
 
-	if (msm_iommu_pagetable_params(to_msm_vm(ctx->vm)->mmu, &ttbr, &asid))
+	if (msm_iommu_pagetable_params(to_msm_vm(vm)->mmu, &ttbr, &asid))
 		return;
 
 	if (adreno_gpu->info->family >= ADRENO_7XX_GEN1) {
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index e24f627daf37..b70ed4bc0e0d 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -373,6 +373,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 {
 	struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu);
 	struct drm_device *drm = gpu->dev;
+	/* Note ctx can be NULL when called from rd_open(): */
+	struct drm_gpuvm *vm = ctx ? msm_context_vm(drm, ctx) : NULL;
 
 	/* No pointer params yet */
 	if (*len != 0)
@@ -418,8 +420,8 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = 0;
 		return 0;
 	case MSM_PARAM_FAULTS:
-		if (ctx->vm)
-			*value = gpu->global_faults + to_msm_vm(ctx->vm)->faults;
+		if (vm)
+			*value = gpu->global_faults + to_msm_vm(vm)->faults;
 		else
 			*value = gpu->global_faults;
 		return 0;
@@ -427,14 +429,14 @@ int adreno_get_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		*value = gpu->suspend_count;
 		return 0;
 	case MSM_PARAM_VA_START:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_start;
+		*value = vm->mm_start;
 		return 0;
 	case MSM_PARAM_VA_SIZE:
-		if (ctx->vm == gpu->vm)
+		if (vm == gpu->vm)
 			return UERR(EINVAL, drm, "requires per-process pgtables");
-		*value = ctx->vm->mm_range;
+		*value = vm->mm_range;
 		return 0;
 	case MSM_PARAM_HIGHEST_BANK_BIT:
 		*value = adreno_gpu->ubwc_config.highest_bank_bit;
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index 5909720be48d..ac8a5b072afe 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -214,10 +214,29 @@ static void load_gpu(struct drm_device *dev)
 	mutex_unlock(&init_lock);
 }
 
+/**
+ * msm_context_vm - lazily create the context's VM
+ *
+ * @dev: the drm device
+ * @ctx: the context
+ *
+ * The VM is lazily created, so that userspace has a chance to opt in to
+ * having a userspace managed VM before the VM is created.
+ *
+ * Note that this does not return a reference to the VM.  Once the VM is
+ * created, it exists for the lifetime of the context.
+ */
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	if (!ctx->vm)
+		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+	return ctx->vm;
+}
+
 static int context_init(struct drm_device *dev, struct drm_file *file)
 {
 	static atomic_t ident = ATOMIC_INIT(0);
-	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx;
 
 	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
@@ -230,7 +249,6 @@ static int context_init(struct drm_device *dev, struct drm_file *file)
 
 	kref_init(&ctx->ref);
 	msm_submitqueue_init(dev, ctx);
 
-	ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
 	file->driver_priv = ctx;
 
 	ctx->seqno = atomic_inc_return(&ident);
@@ -409,7 +427,7 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	 * Don't pin the memory here - just get an address so that userspace can
 	 * be productive
 	 */
-	return msm_gem_get_iova(obj, ctx->vm, iova);
+	return msm_gem_get_iova(obj, msm_context_vm(dev, ctx), iova);
 }
 
 static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
@@ -418,18 +436,19 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 {
 	struct msm_drm_private *priv = dev->dev_private;
 	struct msm_context *ctx = file->driver_priv;
+	struct drm_gpuvm *vm = msm_context_vm(dev, ctx);
 
 	if (!priv->gpu)
 		return -EINVAL;
 
 	/* Only supported if per-process address space is supported: */
-	if (priv->gpu->vm == ctx->vm)
+	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
 
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
-	return msm_gem_set_iova(obj, ctx->vm, iova);
+	return msm_gem_set_iova(obj, vm, iova);
 }
 
 static int msm_ioctl_gem_info_set_metadata(struct drm_gem_object *obj,
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index c4569e7b5a02..7a9bd20363dd 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -63,7 +63,7 @@ static struct msm_gem_submit *submit_create(struct drm_device *dev,
 
 	kref_init(&submit->ref);
 	submit->dev = dev;
-	submit->vm = queue->ctx->vm;
+	submit->vm = msm_context_vm(dev, queue->ctx);
 	submit->gpu = gpu;
 	submit->cmd = (void *)&submit->bos[nr_bos];
 	submit->queue = queue;
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index bfaec80e5f2d..d1530de96315 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -370,7 +370,12 @@ struct msm_context {
 	 */
 	bool closed;
 
-	/** @vm: the per-process GPU address-space */
+	/**
+	 * @vm:
+	 *
+	 * The per-process GPU address-space.  Do not access directly, use
+	 * msm_context_vm().
+	 */
 	struct drm_gpuvm *vm;
 
 	/** @kref: the reference count */
@@ -455,6 +460,8 @@ struct msm_context {
 	atomic64_t ctx_mem;
 };
 
+struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
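Taken together with the locking added in the next patch, the lazy-create
pattern looks like the following consolidated sketch (the mutex is not
part of this patch; it arrives with the VM_BIND opt-in):

	struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
	{
		static DEFINE_MUTEX(init_lock);
		struct msm_drm_private *priv = dev->dev_private;

		/* Fast path: once created, the VM lives as long as the context */
		if (ctx->vm)
			return ctx->vm;

		mutex_lock(&init_lock);
		if (!ctx->vm)
			ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
		mutex_unlock(&init_lock);

		return ctx->vm;
	}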
From patchwork Mon May 19 17:57:19 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891130
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
    Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 22/40] drm/msm: Add opt-in for VM_BIND
Date: Mon, 19 May 2025 10:57:19 -0700
Message-ID: <20250519175755.13037-10-robdclark@gmail.com>

From: Rob Clark

Add a SET_PARAM for userspace to request to manage the VM itself,
instead of getting a kernel-managed VM.

In order to transition to a userspace-managed VM, this param must be
set before any mappings are created.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c   |  4 ++--
 drivers/gpu/drm/msm/adreno/adreno_gpu.c | 15 +++++++++++++
 drivers/gpu/drm/msm/msm_drv.c           | 22 +++++++++++++++++--
 drivers/gpu/drm/msm/msm_gem.c           |  8 +++++++
 drivers/gpu/drm/msm/msm_gpu.c           |  5 +++--
 drivers/gpu/drm/msm/msm_gpu.h           | 29 +++++++++++++++++++++++--
 include/uapi/drm/msm_drm.h              | 24 ++++++++++++++++++++
 7 files changed, 99 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
index 0d7c2a2eeb8f..f0e37733c65d 100644
--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
@@ -2263,7 +2263,7 @@ a6xx_create_vm(struct msm_gpu *gpu, struct platform_device *pdev)
 }
 
 static struct drm_gpuvm *
-a6xx_create_private_vm(struct msm_gpu *gpu)
+a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed)
 {
 	struct msm_mmu *mmu;
 
@@ -2273,7 +2273,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu)
 		return ERR_CAST(mmu);
 
 	return msm_gem_vm_create(gpu->dev, mmu, "gpu", ADRENO_VM_START,
-				 adreno_private_vm_size(gpu), true);
+				 adreno_private_vm_size(gpu), kernel_managed);
 }
 
 static uint32_t a6xx_get_rptr(struct msm_gpu *gpu, struct msm_ringbuffer *ring)
diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
index b70ed4bc0e0d..efe03f3f42ba 100644
--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c
+++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c
@@ -508,6 +508,21 @@ int adreno_set_param(struct msm_gpu *gpu, struct msm_context *ctx,
 		if (!capable(CAP_SYS_ADMIN))
 			return UERR(EPERM, drm, "invalid permissions");
 		return msm_context_set_sysprof(ctx, gpu, value);
+	case MSM_PARAM_EN_VM_BIND:
+		/* We can only support VM_BIND with per-process pgtables: */
+		if (ctx->vm == gpu->vm)
+			return UERR(EINVAL, drm, "requires per-process pgtables");
+
+		/*
+		 * We can only switch to VM_BIND mode if the VM has not yet
+		 * been created:
+		 */
+		if (ctx->vm)
+			return UERR(EBUSY, drm, "VM already created");
+
+		ctx->userspace_managed_vm = value;
+
+		return 0;
 	default:
 		return UERR(EINVAL, drm, "%s: invalid param: %u", gpu->name, param);
 	}
diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index ac8a5b072afe..89cb7820064f 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -228,9 +228,21 @@ static void load_gpu(struct drm_device *dev)
  */
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx)
 {
+	static DEFINE_MUTEX(init_lock);
 	struct msm_drm_private *priv = dev->dev_private;
-	if (!ctx->vm)
-		ctx->vm = msm_gpu_create_private_vm(priv->gpu, current);
+
+	/* Once ctx->vm is created it is valid for the lifetime of the context: */
+	if (ctx->vm)
+		return ctx->vm;
+
+	mutex_lock(&init_lock);
+	if (!ctx->vm) {
+		ctx->vm = msm_gpu_create_private_vm(
+			priv->gpu, current, !ctx->userspace_managed_vm);
+	}
+	mutex_unlock(&init_lock);
+
 	return ctx->vm;
 }
 
@@ -420,6 +432,9 @@ static int msm_ioctl_gem_info_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	if (should_fail(&fail_gem_iova, obj->size))
 		return -ENOMEM;
 
@@ -441,6 +456,9 @@ static int msm_ioctl_gem_info_set_iova(struct drm_device *dev,
 	if (!priv->gpu)
 		return -EINVAL;
 
+	if (msm_context_is_vmbind(ctx))
+		return UERR(EINVAL, dev, "VM_BIND is enabled");
+
 	/* Only supported if per-process address space is supported: */
 	if (priv->gpu->vm == vm)
 		return UERR(EOPNOTSUPP, dev, "requires per-process pgtables");
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index bdcb90a295fc..36b9e9eefc3c 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -64,6 +64,14 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	if (!ctx->vm)
 		return;
 
+	/*
+	 * VM_BIND does not depend on implicit teardown of VMAs on handle
+	 * close, but instead on implicit teardown of the VM when the device
+	 * is closed (see msm_gem_vm_close())
+	 */
+	if (msm_context_is_vmbind(ctx))
+		return;
+
 	/*
 	 * TODO we might need to kick this to a queue to avoid blocking
 	 * in CLOSE ioctl
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 82e33aa1ccd0..0314e15d04c2 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -831,7 +831,8 @@ static int get_clocks(struct platform_device *pdev, struct msm_gpu *gpu)
 
 /* Return a new address space for a msm_drm_private instance */
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed)
 {
 	struct drm_gpuvm *vm = NULL;
 
@@ -843,7 +844,7 @@ msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task)
 	 * the global one
 	 */
 	if (gpu->funcs->create_private_vm) {
-		vm = gpu->funcs->create_private_vm(gpu);
+		vm = gpu->funcs->create_private_vm(gpu, kernel_managed);
 		if (!IS_ERR(vm))
 			to_msm_vm(vm)->pid = get_pid(task_pid(task));
 	}
diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h
index d1530de96315..448ebf721bd8 100644
--- a/drivers/gpu/drm/msm/msm_gpu.h
+++ b/drivers/gpu/drm/msm/msm_gpu.h
@@ -79,7 +79,7 @@ struct msm_gpu_funcs {
 	void (*gpu_set_freq)(struct msm_gpu *gpu, struct dev_pm_opp *opp,
 			     bool suspended);
 	struct drm_gpuvm *(*create_vm)(struct msm_gpu *gpu, struct platform_device *pdev);
-	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu);
+	struct drm_gpuvm *(*create_private_vm)(struct msm_gpu *gpu, bool kernel_managed);
 	uint32_t (*get_rptr)(struct msm_gpu *gpu, struct msm_ringbuffer *ring);
 
 	/**
@@ -370,6 +370,14 @@ struct msm_context {
 	 */
 	bool closed;
 
+	/**
+	 * @userspace_managed_vm:
+	 *
+	 * Has userspace opted in to userspace managed VM (ie. VM_BIND) via
+	 * MSM_PARAM_EN_VM_BIND?
+	 */
+	bool userspace_managed_vm;
+
 	/**
	 * @vm:
	 *
@@ -462,6 +470,22 @@ struct msm_context {
 
 struct drm_gpuvm *msm_context_vm(struct drm_device *dev, struct msm_context *ctx);
 
+/**
+ * msm_context_is_vmbind() - has userspace opted in to VM_BIND?
+ *
+ * @ctx: the drm_file context
+ *
+ * See MSM_PARAM_EN_VM_BIND.  If userspace is managing the VM, it can
+ * do sparse binding including having multiple, potentially partial,
+ * mappings in the VM.  Therefore certain legacy uabi (ie. GET_IOVA,
+ * SET_IOVA) are rejected because they don't have a sensible meaning.
+ */
+static inline bool
+msm_context_is_vmbind(struct msm_context *ctx)
+{
+	return ctx->userspace_managed_vm;
+}
+
 /**
  * msm_gpu_convert_priority - Map userspace priority to ring # and sched priority
  *
@@ -689,7 +713,8 @@ int msm_gpu_init(struct drm_device *drm, struct platform_device *pdev,
 		const char *name, struct msm_gpu_config *config);
 
 struct drm_gpuvm *
-msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task);
+msm_gpu_create_private_vm(struct msm_gpu *gpu, struct task_struct *task,
+			  bool kernel_managed);
 
 void msm_gpu_cleanup(struct msm_gpu *gpu);
 
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index 5bc5e4526ccf..b974f5a24dbc 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -93,6 +93,30 @@ struct drm_msm_timespec {
 #define MSM_PARAM_UCHE_TRAP_BASE 0x14 /* RO */
 /* PRR (Partially Resident Region) is required for sparse residency: */
 #define MSM_PARAM_HAS_PRR    0x15  /* RO */
+/* MSM_PARAM_EN_VM_BIND is set to 1 to enable VM_BIND ops.
+ *
+ * With VM_BIND enabled, userspace is required to allocate iova and use the
+ * VM_BIND ops for map/unmap ioctls.  MSM_INFO_SET_IOVA and MSM_INFO_GET_IOVA
+ * will be rejected.  (The latter does not have a sensible meaning when a BO
+ * can have multiple and/or partial mappings.)
+ *
+ * With VM_BIND enabled, userspace does not include a submit_bo table in the
+ * SUBMIT ioctl (this will be rejected), the resident set is determined by
+ * the VM_BIND ops.
+ *
+ * Enabling VM_BIND will fail on devices which do not have per-process pgtables.
+ * And it is not allowed to disable VM_BIND once it has been enabled.
+ *
+ * Enabling VM_BIND should be done (attempted) prior to allocating any BOs or
+ * submitqueues of type MSM_SUBMITQUEUE_VM_BIND.
+ *
+ * Relatedly, when VM_BIND mode is enabled, the kernel will not try to recover
+ * from GPU faults or failed async VM_BIND ops, in particular because it is
+ * difficult to communicate to userspace which op failed so that userspace
+ * could rewind and try again.  When the VM is marked unusable, the SUBMIT
+ * ioctl will throw -EPIPE.
+ */
+#define MSM_PARAM_EN_VM_BIND 0x16 /* WO, once */
 
 /* For backwards compat.  The original support for preemption was based on
  * a single ring per priority level so # of priority levels equals the #
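From the userspace side, the opt-in is a single SET_PARAM call made
right after opening the device, before any BO or submitqueue is
created. A hedged sketch (enable_vm_bind() is an invented helper name;
the ioctl and struct fields follow the msm uapi):

	#include <errno.h>
	#include <xf86drm.h>
	#include "msm_drm.h"

	static int enable_vm_bind(int fd)
	{
		struct drm_msm_param req = {
			.pipe  = MSM_PIPE_3D0,
			.param = MSM_PARAM_EN_VM_BIND,
			.value = 1,
		};

		/* EINVAL without per-process pgtables, EBUSY if the VM
		 * already exists: */
		return drmIoctl(fd, DRM_IOCTL_MSM_SET_PARAM, &req);
	}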
From patchwork Mon May 19 17:57:20 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892060
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 23/40] drm/msm: Mark VM as unusable on GPU hangs
Date: Mon, 19 May 2025 10:57:20 -0700
Message-ID: <20250519175755.13037-11-robdclark@gmail.com>

From: Rob Clark

If userspace has opted in to VM_BIND, then GPU hangs and VM_BIND errors
will mark the VM as unusable.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h        | 17 +++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_submit.c |  3 +++
 drivers/gpu/drm/msm/msm_gpu.c        | 16 ++++++++++++++--
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index da8f92911b7b..67f845213810 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -76,6 +76,23 @@ struct msm_gem_vm {
 
 	/** @managed: is this a kernel managed VM? */
 	bool managed;
+
+	/**
+	 * @unusable: True if the VM has turned unusable because something
+	 * bad happened during an asynchronous request.
+	 *
+	 * We don't try to recover from such failures, because this implies
+	 * informing userspace about the specific operation that failed, and
+	 * hoping the userspace driver can replay things from there.  This all
+	 * sounds very complicated for little gain.
+	 *
+	 * Instead, we should just flag the VM as unusable, and fail any
+	 * further request targeting this VM.
+	 *
+	 * As an analogy, this would be mapped to a VK_ERROR_DEVICE_LOST
+	 * situation, where the logical device needs to be re-created.
+	 */
+	bool unusable;
 };
 
 #define to_msm_vm(x) container_of(x, struct msm_gem_vm, base)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index 7a9bd20363dd..f282d691087f 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -676,6 +676,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
 	if (args->pad)
 		return -EINVAL;
 
+	if (to_msm_vm(ctx->vm)->unusable)
+		return UERR(EPIPE, dev, "context is unusable");
+
 	/* for now, we just have 3d pipe.. eventually this would need to
 	 * be more clever to dispatch to appropriate gpu module:
 	 */
diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 0314e15d04c2..6503ce655b10 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -386,8 +386,20 @@ static void recover_worker(struct kthread_work *work)
 
 	/* Increment the fault counts */
 	submit->queue->faults++;
-	if (submit->vm)
-		to_msm_vm(submit->vm)->faults++;
+	if (submit->vm) {
+		struct msm_gem_vm *vm = to_msm_vm(submit->vm);
+
+		vm->faults++;
+
+		/*
+		 * If userspace has opted-in to VM_BIND (and therefore userspace
+		 * management of the VM), faults mark the VM as unusable.  This
+		 * matches vulkan expectations (vulkan is the main target for
+		 * VM_BIND)
+		 */
+		if (!vm->managed)
+			vm->unusable = true;
+	}
 
 	get_comm_cmdline(submit, &comm, &cmd);
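On the userspace side, the unusable-VM state surfaces as -EPIPE from
the SUBMIT ioctl. A hedged sketch of how a Vulkan driver might map it
(do_submit() and the pre-filled req are invented for illustration):

	#include <errno.h>
	#include <xf86drm.h>
	#include "msm_drm.h"

	static int do_submit(int fd, struct drm_msm_gem_submit *req)
	{
		if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_SUBMIT, req) == 0)
			return 0;

		/* VM marked unusable after a hang or failed async VM_BIND:
		 * treat like VK_ERROR_DEVICE_LOST and re-create the logical
		 * device. */
		if (errno == EPIPE)
			return -EPIPE;

		return -errno;
	}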
From patchwork Mon May 19 17:57:21 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891129
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio,
    Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal,
    Christian König, linux-kernel@vger.kernel.org (open list),
    linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK),
    linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v5 24/40] drm/msm: Add _NO_SHARE flag
Date: Mon, 19 May 2025 10:57:21 -0700
Message-ID: <20250519175755.13037-12-robdclark@gmail.com>
From: Rob Clark

Buffers that are not shared between contexts can share a single resv
object. This way drm_gpuvm will not track them as external objects, and
submit-time validating overhead will be O(1) for all N non-shared BOs,
instead of O(n).

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.h       |  1 +
 drivers/gpu/drm/msm/msm_gem.c       | 23 +++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gem_prime.c | 15 +++++++++++++++
 include/uapi/drm/msm_drm.h          | 14 ++++++++++++++
 4 files changed, 53 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
index b77fd2c531c3..b0add236cbb3 100644
--- a/drivers/gpu/drm/msm/msm_drv.h
+++ b/drivers/gpu/drm/msm/msm_drv.h
@@ -246,6 +246,7 @@ int msm_gem_prime_vmap(struct drm_gem_object *obj, struct iosys_map *map);
 void msm_gem_prime_vunmap(struct drm_gem_object *obj, struct iosys_map *map);
 struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 		struct dma_buf_attachment *attach, struct sg_table *sg);
+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags);
 
 int msm_gem_prime_pin(struct drm_gem_object *obj);
 void msm_gem_prime_unpin(struct drm_gem_object *obj);
diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 36b9e9eefc3c..65ec99526f82 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -532,6 +532,9 @@ static int get_and_pin_iova_range_locked(struct drm_gem_object *obj,
 
 	msm_gem_assert_locked(obj);
 
+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return -EINVAL;
+
 	vma = get_vma_locked(obj, vm, range_start, range_end);
 	if (IS_ERR(vma))
 		return PTR_ERR(vma);
@@ -1060,6 +1063,16 @@ static void msm_gem_free_object(struct drm_gem_object *obj)
 		put_pages(obj);
 	}
 
+	if (obj->resv != &obj->_resv) {
+		struct drm_gem_object *r_obj =
+			container_of(obj->resv, struct drm_gem_object, _resv);
+
+		BUG_ON(!(msm_obj->flags & MSM_BO_NO_SHARE));
+
+		/* Drop reference we hold to shared resv obj: */
+		drm_gem_object_put(r_obj);
+	}
+
 	drm_gem_object_release(obj);
 
 	kfree(msm_obj->metadata);
@@ -1092,6 +1105,15 @@ int msm_gem_new_handle(struct drm_device *dev, struct drm_file *file,
 	if (name)
 		msm_gem_object_set_name(obj, "%s", name);
 
+	if (flags & MSM_BO_NO_SHARE) {
+		struct msm_context *ctx = file->driver_priv;
+		struct drm_gem_object *r_obj = drm_gpuvm_resv_obj(ctx->vm);
+
+		drm_gem_object_get(r_obj);
+
+		obj->resv = r_obj->resv;
+	}
+
 	ret = drm_gem_handle_create(file, obj, handle);
 
 	/* drop reference from allocate - handle holds it now */
@@ -1124,6 +1146,7 @@ static const struct drm_gem_object_funcs msm_gem_object_funcs = {
 	.free = msm_gem_free_object,
 	.open = msm_gem_open,
 	.close = msm_gem_close,
+	.export = msm_gem_prime_export,
 	.pin = msm_gem_prime_pin,
 	.unpin = msm_gem_prime_unpin,
 	.get_sg_table = msm_gem_prime_get_sg_table,
diff --git a/drivers/gpu/drm/msm/msm_gem_prime.c b/drivers/gpu/drm/msm/msm_gem_prime.c
index ee267490c935..1a6d8099196a 100644
--- a/drivers/gpu/drm/msm/msm_gem_prime.c
+++ b/drivers/gpu/drm/msm/msm_gem_prime.c
@@ -16,6 +16,9 @@ struct sg_table *msm_gem_prime_get_sg_table(struct drm_gem_object *obj)
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 	int npages = obj->size >> PAGE_SHIFT;
 
+	if (msm_obj->flags & MSM_BO_NO_SHARE)
+		return ERR_PTR(-EINVAL);
+
 	if (WARN_ON(!msm_obj->pages))  /* should have already pinned! */
 		return ERR_PTR(-ENOMEM);
 
@@ -45,6 +48,15 @@ struct drm_gem_object *msm_gem_prime_import_sg_table(struct drm_device *dev,
 
 	return msm_gem_import(dev, attach->dmabuf, sg);
 }
 
+struct dma_buf *msm_gem_prime_export(struct drm_gem_object *obj, int flags)
+{
+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return ERR_PTR(-EPERM);
+
+	return drm_gem_prime_export(obj, flags);
+}
+
 int msm_gem_prime_pin(struct drm_gem_object *obj)
 {
 	struct page **pages;
@@ -53,6 +65,9 @@ int msm_gem_prime_pin(struct drm_gem_object *obj)
 	if (obj->import_attach)
 		return 0;
 
+	if (to_msm_bo(obj)->flags & MSM_BO_NO_SHARE)
+		return -EINVAL;
+
 	pages = msm_gem_pin_pages_locked(obj);
 	if (IS_ERR(pages))
 		ret = PTR_ERR(pages);
diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h
index b974f5a24dbc..1bccc347945c 100644
--- a/include/uapi/drm/msm_drm.h
+++ b/include/uapi/drm/msm_drm.h
@@ -140,6 +140,19 @@ struct drm_msm_param {
 #define MSM_BO_SCANOUT       0x00000001 /* scanout capable */
 #define MSM_BO_GPU_READONLY  0x00000002
+/* Private buffers do not need to be explicitly listed in the SUBMIT
+ * ioctl, unless referenced by a drm_msm_gem_submit_cmd.  Private
+ * buffers may NOT be imported/exported or used for scanout (or any
+ * other situation where buffers can be indefinitely pinned, but
+ * cases other than scanout are all kernel owned BOs which are not
+ * visible to userspace).
+ *
+ * In exchange for those constraints, all private BOs associated with
+ * a single context (drm_file) share a single dma_resv, and if there
+ * has been no eviction since the last submit, there is no per-BO
+ * bookkeeping to do, significantly cutting the SUBMIT overhead.
+ */
+#define MSM_BO_NO_SHARE      0x00000004
 #define MSM_BO_CACHE_MASK    0x000f0000
 /* cache modes */
 #define MSM_BO_CACHED        0x00010000
@@ -149,6 +162,7 @@ struct drm_msm_param {
 
 #define MSM_BO_FLAGS         (MSM_BO_SCANOUT | \
                               MSM_BO_GPU_READONLY | \
+                              MSM_BO_NO_SHARE | \
                               MSM_BO_CACHE_MASK)
 
 struct drm_msm_gem_new {
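A hedged userspace sketch of allocating such a private BO
(new_private_bo() is an invented helper; MSM_BO_WC is the usual
write-combined caching mode from the same uapi header):

	#include <errno.h>
	#include <xf86drm.h>
	#include "msm_drm.h"

	static int new_private_bo(int fd, __u64 size, __u32 *handle)
	{
		struct drm_msm_gem_new req = {
			.size  = size,
			.flags = MSM_BO_NO_SHARE | MSM_BO_WC,
		};

		if (drmIoctl(fd, DRM_IOCTL_MSM_GEM_NEW, &req))
			return -errno;

		/* The BO shares the VM's dma_resv and cannot be exported
		 * via PRIME or used for scanout: */
		*handle = req.handle;
		return 0;
	}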
From patchwork Mon May 19 17:57:22 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892059
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar,
    Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 25/40] drm/msm: Crashdump prep for sparse mappings
Date: Mon, 19 May 2025 10:57:22 -0700
Message-ID: <20250519175755.13037-13-robdclark@gmail.com>
From: Rob Clark

In this case, userspace could request dumping partial GEM obj mappings.

Also drop use of should_dump() helper, which really only makes sense in
the old submit->bos[] table world.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c
index 6503ce655b10..2eaca2a22de9 100644
--- a/drivers/gpu/drm/msm/msm_gpu.c
+++ b/drivers/gpu/drm/msm/msm_gpu.c
@@ -219,13 +219,14 @@ static void msm_gpu_devcoredump_free(void *data)
 }
 
 static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
-		struct drm_gem_object *obj, u64 iova, bool full)
+		struct drm_gem_object *obj, u64 iova,
+		bool full, size_t offset, size_t size)
 {
 	struct msm_gpu_state_bo *state_bo = &state->bos[state->nr_bos];
 	struct msm_gem_object *msm_obj = to_msm_bo(obj);
 
 	/* Don't record write only objects */
-	state_bo->size = obj->size;
+	state_bo->size = size;
 	state_bo->flags = msm_obj->flags;
 	state_bo->iova = iova;
 
@@ -236,7 +237,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 	if (full) {
 		void *ptr;
 
-		state_bo->data = kvmalloc(obj->size, GFP_KERNEL);
+		state_bo->data = kvmalloc(size, GFP_KERNEL);
 		if (!state_bo->data)
 			goto out;
 
@@ -249,7 +250,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state,
 			goto out;
 		}
 
-		memcpy(state_bo->data, ptr, obj->size);
+		memcpy(state_bo->data, ptr + offset, size);
 		msm_gem_put_vaddr(obj);
 	}
 out:
@@ -279,6 +280,7 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 	state->fault_info = gpu->fault_info;
 
 	if (submit) {
+		extern bool rd_full;
 		int i;
 
 		if (state->fault_info.ttbr0) {
@@ -294,9 +296,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu,
 				sizeof(struct msm_gpu_state_bo), GFP_KERNEL);
 
 		for (i = 0; state->bos && i < submit->nr_bos; i++) {
-			msm_gpu_crashstate_get_bo(state, submit->bos[i].obj,
-						  submit->bos[i].iova,
-						  should_dump(submit, i));
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova,
+						  dump, 0, obj->size);
 		}
 	}
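With the explicit (offset, size) pair, a caller can now capture a
sub-range of a BO. A hypothetical example, assuming fault_iova lies
inside a mapping of obj that starts at vma_iova (both names invented
for illustration):

	size_t offset = ALIGN_DOWN(fault_iova - vma_iova, PAGE_SIZE);

	/* Dump just the page around the fault rather than the whole BO: */
	msm_gpu_crashstate_get_bo(state, obj, vma_iova + offset,
				  true, offset, PAGE_SIZE);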
From patchwork Mon May 19 17:57:23 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891128
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
    Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
    Marijn Suijten, David Airlie, Simona Vetter,
    linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 26/40] drm/msm: rd dumping prep for sparse mappings
Date: Mon, 19 May 2025 10:57:23 -0700
Message-ID: <20250519175755.13037-14-robdclark@gmail.com>

From: Rob Clark

Similar to the previous commit, add support for dumping partial
mappings.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h | 10 ---------
 drivers/gpu/drm/msm/msm_rd.c  | 38 ++++++++++++++++-------------------
 2 files changed, 17 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 67f845213810..f7b85084e228 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -402,14 +402,4 @@ static inline void msm_gem_submit_put(struct msm_gem_submit *submit)
 
 void msm_submit_retire(struct msm_gem_submit *submit);
 
-/* helper to determine of a buffer in submit should be dumped, used for both
- * devcoredump and debugfs cmdstream dumping:
- */
-static inline bool
-should_dump(struct msm_gem_submit *submit, int idx)
-{
-	extern bool rd_full;
-	return rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
-}
-
 #endif /* __MSM_GEM_H__ */
diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index 39138e190cb9..edbcb93410a9 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -308,21 +308,11 @@ void msm_rd_debugfs_cleanup(struct msm_drm_private *priv)
 	priv->hangrd = NULL;
 }
 
-static void snapshot_buf(struct msm_rd_state *rd,
-		struct msm_gem_submit *submit, int idx,
-		uint64_t iova, uint32_t size, bool full)
+static void snapshot_buf(struct msm_rd_state *rd, struct drm_gem_object *obj,
+		uint64_t iova, bool full, size_t offset, size_t size)
 {
-	struct drm_gem_object *obj = submit->bos[idx].obj;
-	unsigned offset = 0;
 	const char *buf;
 
-	if (iova) {
-		offset = iova - submit->bos[idx].iova;
-	} else {
-		iova = submit->bos[idx].iova;
-		size = obj->size;
-	}
-
 	/*
 	 * Always write the GPUADDR header so can get a complete list of all the
 	 * buffers in the cmd
@@ -333,10 +323,6 @@ static void snapshot_buf(struct msm_rd_state *rd,
 	if (!full)
 		return;
 
-	/* But only dump the contents of buffers marked READ */
-	if (!(submit->bos[idx].flags & MSM_SUBMIT_BO_READ))
-		return;
-
 	buf = msm_gem_get_vaddr_active(obj);
 	if (IS_ERR(buf))
 		return;
@@ -352,6 +338,7 @@ static void snapshot_buf(struct msm_rd_state *rd,
 void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 		const char *fmt, ...)
 {
+	extern bool rd_full;
 	struct task_struct *task;
 	char msg[256];
 	int i, n;
@@ -385,16 +372,25 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++)
-		snapshot_buf(rd, submit, i, 0, 0, should_dump(submit, i));
+	for (i = 0; i < submit->nr_bos; i++) {
+		struct drm_gem_object *obj = submit->bos[i].obj;
+		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+	}
 
 	for (i = 0; i < submit->nr_cmds; i++) {
 		uint32_t szd = submit->cmd[i].size; /* in dwords */
+		int idx = submit->cmd[i].idx;
+		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
 		/* snapshot cmdstream bo's (if we haven't already): */
-		if (!should_dump(submit, i)) {
-			snapshot_buf(rd, submit, submit->cmd[i].idx,
-				     submit->cmd[i].iova, szd * 4, true);
+		if (!dump) {
+			struct drm_gem_object *obj = submit->bos[idx].obj;
+			size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+
+			snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+				     offset, szd * 4);
 		}
 	}
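
For reference, the sub-range arithmetic introduced above is what makes
partial dumps possible: a cmdstream IOVA is converted into a byte offset
within the backing BO, and only szd*4 bytes are snapshotted. A minimal
standalone sketch of the same computation (hypothetical example values,
not driver code):

	#include <stdint.h>
	#include <stdio.h>

	int main(void)
	{
		/* Mirrors msm_rd_dump_submit() above; values are made up: */
		uint64_t bo_iova  = 0x100000;        /* submit->bos[idx].iova */
		uint64_t cmd_iova = 0x100200;        /* submit->cmd[i].iova   */
		uint32_t szd      = 64;              /* cmd size, in dwords   */

		size_t offset = cmd_iova - bo_iova;  /* byte offset into BO   */
		size_t size   = (size_t)szd * 4;     /* dwords -> bytes       */

		printf("snapshot %zu bytes at offset 0x%zx\n", size, offset);
		return 0;
	}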
From patchwork Mon May 19 17:57:24 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892058
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Sean Paul, Konrad Dybcio, Abhinav Kumar, Dmitry Baryshkov, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 27/40] drm/msm: Crashdec support for sparse
Date: Mon, 19 May 2025 10:57:24 -0700
Message-ID: <20250519175755.13037-15-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

In this case, we need to iterate the VMAs looking for ones with
MSM_VMA_DUMP flag.
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gpu.c | 96 ++++++++++++++++++++++++++--------- 1 file changed, 72 insertions(+), 24 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index 2eaca2a22de9..b70355fc8570 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -241,9 +241,7 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state, if (!state_bo->data) goto out; - msm_gem_lock(obj); ptr = msm_gem_get_vaddr_active(obj); - msm_gem_unlock(obj); if (IS_ERR(ptr)) { kvfree(state_bo->data); state_bo->data = NULL; @@ -251,12 +249,75 @@ static void msm_gpu_crashstate_get_bo(struct msm_gpu_state *state, } memcpy(state_bo->data, ptr + offset, size); - msm_gem_put_vaddr(obj); + msm_gem_put_vaddr_locked(obj); } out: state->nr_bos++; } +static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submit *submit) +{ + extern bool rd_full; + + if (!submit) + return; + + if (msm_context_is_vmbind(submit->queue->ctx)) { + struct drm_exec exec; + struct drm_gpuva *vma; + unsigned cnt = 0; + + drm_exec_init(&exec, DRM_EXEC_IGNORE_DUPLICATES, 0); + drm_exec_until_all_locked(&exec) { + cnt = 0; + + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(submit->vm)); + drm_exec_retry_on_contention(&exec); + + drm_gpuvm_for_each_va (vma, submit->vm) { + if (!vma->gem.obj) + continue; + + cnt++; + drm_exec_lock_obj(&exec, vma->gem.obj); + drm_exec_retry_on_contention(&exec); + } + + } + + drm_gpuvm_for_each_va (vma, submit->vm) + cnt++; + + state->bos = kcalloc(cnt, sizeof(struct msm_gpu_state_bo), GFP_KERNEL); + + drm_gpuvm_for_each_va (vma, submit->vm) { + bool dump = rd_full || (vma->flags & MSM_VMA_DUMP); + + /* Skip MAP_NULL/PRR VMAs: */ + if (!vma->gem.obj) + continue; + + msm_gpu_crashstate_get_bo(state, vma->gem.obj, vma->va.addr, + dump, vma->gem.offset, vma->va.range); + } + + drm_exec_fini(&exec); + } else { + state->bos = kcalloc(submit->nr_bos, + sizeof(struct msm_gpu_state_bo), GFP_KERNEL); + + for (int i = 0; state->bos && i < submit->nr_bos; i++) { + struct drm_gem_object *obj = submit->bos[i].obj; + bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP); + + msm_gem_lock(obj); + msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova, + dump, 0, obj->size); + msm_gem_unlock(obj); + } + } +} + static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, struct msm_gem_submit *submit, char *comm, char *cmd) { @@ -279,30 +340,17 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, state->cmd = kstrdup(cmd, GFP_KERNEL); state->fault_info = gpu->fault_info; - if (submit) { - extern bool rd_full; - int i; - - if (state->fault_info.ttbr0) { - struct msm_gpu_fault_info *info = &state->fault_info; - struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu; + if (submit && state->fault_info.ttbr0) { + struct msm_gpu_fault_info *info = &state->fault_info; + struct msm_mmu *mmu = to_msm_vm(submit->vm)->mmu; - msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, - &info->asid); - msm_iommu_pagetable_walk(mmu, info->iova, info->ptes); - } - - state->bos = kcalloc(submit->nr_bos, - sizeof(struct msm_gpu_state_bo), GFP_KERNEL); - - for (i = 0; state->bos && i < submit->nr_bos; i++) { - struct drm_gem_object *obj = submit->bos[i].obj; - bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP); - msm_gpu_crashstate_get_bo(state, obj, submit->bos[i].iova, - dump, 0, obj->size); - } + msm_iommu_pagetable_params(mmu, &info->pgtbl_ttbr0, + &info->asid); + msm_iommu_pagetable_walk(mmu, info->iova, 
info->ptes);
 	}
 
+	crashstate_get_bos(state, submit);
+
 	/* Set the active crash state to be dumped on failure */
 	gpu->crashstate = state;
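
The drm_exec loop in crashstate_get_bos() follows the usual lock-retry
idiom: on contention, everything locked so far is dropped and the block
re-runs from the top, which is why the VMA count is reset inside the
loop. Reduced to a skeleton (a kernel-side sketch of the pattern only;
lock_vm_objects() is a hypothetical helper name):

	#include <drm/drm_exec.h>
	#include <drm/drm_gpuvm.h>

	static void lock_vm_objects(struct drm_gpuvm *vm, struct drm_exec *exec)
	{
		struct drm_gpuva *vma;

		drm_exec_init(exec, DRM_EXEC_IGNORE_DUPLICATES, 0);
		drm_exec_until_all_locked(exec) {
			/* Lock the VM's common resv first... */
			drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(vm));
			drm_exec_retry_on_contention(exec);

			/* ...then every BO mapped in the VM: */
			drm_gpuvm_for_each_va(vma, vm) {
				if (!vma->gem.obj)
					continue; /* MAP_NULL/PRR, no BO */
				drm_exec_lock_obj(exec, vma->gem.obj);
				drm_exec_retry_on_contention(exec);
			}
		}
		/* caller does its work, then drm_exec_fini(exec) */
	}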
From patchwork Mon May 19 17:57:25 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891127
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 28/40] drm/msm: rd dumping support for sparse
Date: Mon, 19 May 2025 10:57:25 -0700
Message-ID: <20250519175755.13037-16-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

As with devcoredump, we need to iterate the VMAs to figure out what to
dump.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_rd.c | 48 +++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_rd.c b/drivers/gpu/drm/msm/msm_rd.c
index edbcb93410a9..54493a94dcb7 100644
--- a/drivers/gpu/drm/msm/msm_rd.c
+++ b/drivers/gpu/drm/msm/msm_rd.c
@@ -372,25 +372,43 @@ void msm_rd_dump_submit(struct msm_rd_state *rd, struct msm_gem_submit *submit,
 
 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));
 
-	for (i = 0; i < submit->nr_bos; i++) {
-		struct drm_gem_object *obj = submit->bos[i].obj;
-		bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		struct drm_gpuva *vma;
 
-		snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
-	}
+		drm_gpuvm_resv_assert_held(submit->vm);
 
-	for (i = 0; i < submit->nr_cmds; i++) {
-		uint32_t szd = submit->cmd[i].size; /* in dwords */
-		int idx = submit->cmd[i].idx;
-		bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
+		drm_gpuvm_for_each_va (vma, submit->vm) {
+			bool dump = rd_full || (vma->flags & MSM_VMA_DUMP);
+
+			/* Skip MAP_NULL/PRR VMAs: */
+			if (!vma->gem.obj)
+				continue;
+
+			snapshot_buf(rd, vma->gem.obj, vma->va.addr, dump,
+				     vma->gem.offset, vma->va.range);
+		}
+
+	} else {
+		for (i = 0; i < submit->nr_bos; i++) {
+			struct drm_gem_object *obj = submit->bos[i].obj;
+			bool dump = rd_full || (submit->bos[i].flags & MSM_SUBMIT_BO_DUMP);
+
+			snapshot_buf(rd, obj, submit->bos[i].iova, dump, 0, obj->size);
+		}
+
+		for (i = 0; i < submit->nr_cmds; i++) {
+			uint32_t szd = submit->cmd[i].size; /* in dwords */
+			int idx = submit->cmd[i].idx;
+			bool dump = rd_full || (submit->bos[idx].flags & MSM_SUBMIT_BO_DUMP);
 
-	/* snapshot cmdstream bo's (if we haven't already): */
-	if (!dump) {
-		struct drm_gem_object *obj = submit->bos[idx].obj;
-		size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
+			/* snapshot cmdstream bo's (if we haven't already): */
+			if (!dump) {
+				struct drm_gem_object *obj = submit->bos[idx].obj;
+				size_t offset = submit->cmd[i].iova - submit->bos[idx].iova;
 
-		snapshot_buf(rd, obj, submit->cmd[i].iova, true,
-			     offset, szd * 4);
+				snapshot_buf(rd, obj, submit->cmd[i].iova, true,
+					     offset, szd * 4);
+			}
 		}
 	}
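
For the VM_BIND path above, each drm_gpuva already carries exactly the
triple that snapshot_buf() needs, so the correspondence could be spelled
out as a trivial wrapper (hypothetical helper, just to make the mapping
of fields to arguments explicit):

	static void snapshot_vma(struct msm_rd_state *rd,
				 struct drm_gpuva *vma, bool full)
	{
		/* va.addr/va.range give the GPU VA window of the mapping;
		 * gem.offset is where that window starts inside the BO:
		 */
		snapshot_buf(rd, vma->gem.obj, vma->va.addr, full,
			     vma->gem.offset, vma->va.range);
	}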
From patchwork Mon May 19 17:57:26 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892057
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org,
linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , Konrad Dybcio , Maarten Lankhorst , Maxime Ripard , Thomas Zimmermann , Sumit Semwal , =?utf-8?q?Christian_K=C3=B6nig?= , linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK:Keyword:\bdma_(?:buf|fence|resv)\b) Subject: [PATCH v5 29/40] drm/msm: Extract out syncobj helpers Date: Mon, 19 May 2025 10:57:26 -0700 Message-ID: <20250519175755.13037-17-robdclark@gmail.com> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> Precedence: bulk X-Mailing-List: linux-arm-msm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Rob Clark We'll be re-using these for the VM_BIND ioctl. Also, rename a few things in the uapi header to reflect that syncobj use is not specific to the submit ioctl. Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/Makefile | 1 + drivers/gpu/drm/msm/msm_gem_submit.c | 192 ++------------------------- drivers/gpu/drm/msm/msm_syncobj.c | 172 ++++++++++++++++++++++++ drivers/gpu/drm/msm/msm_syncobj.h | 37 ++++++ include/uapi/drm/msm_drm.h | 26 ++-- 5 files changed, 235 insertions(+), 193 deletions(-) create mode 100644 drivers/gpu/drm/msm/msm_syncobj.c create mode 100644 drivers/gpu/drm/msm/msm_syncobj.h diff --git a/drivers/gpu/drm/msm/Makefile b/drivers/gpu/drm/msm/Makefile index 5df20cbeafb8..8af34f87e0c8 100644 --- a/drivers/gpu/drm/msm/Makefile +++ b/drivers/gpu/drm/msm/Makefile @@ -128,6 +128,7 @@ msm-y += \ msm_rd.o \ msm_ringbuffer.o \ msm_submitqueue.o \ + msm_syncobj.o \ msm_gpu_tracepoints.o \ msm-$(CONFIG_DRM_FBDEV_EMULATION) += msm_fbdev.o diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index f282d691087f..bfb8c5ac1f1e 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -16,6 +16,7 @@ #include "msm_gpu.h" #include "msm_gem.h" #include "msm_gpu_trace.h" +#include "msm_syncobj.h" /* For userspace errors, use DRM_UT_DRIVER.. 
so that userspace can enable * error msgs for debugging, but we don't spam dmesg by default @@ -486,173 +487,6 @@ void msm_submit_retire(struct msm_gem_submit *submit) } } -struct msm_submit_post_dep { - struct drm_syncobj *syncobj; - uint64_t point; - struct dma_fence_chain *chain; -}; - -static struct drm_syncobj **msm_parse_deps(struct msm_gem_submit *submit, - struct drm_file *file, - uint64_t in_syncobjs_addr, - uint32_t nr_in_syncobjs, - size_t syncobj_stride) -{ - struct drm_syncobj **syncobjs = NULL; - struct drm_msm_gem_submit_syncobj syncobj_desc = {0}; - int ret = 0; - uint32_t i, j; - - syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs), - GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); - if (!syncobjs) - return ERR_PTR(-ENOMEM); - - for (i = 0; i < nr_in_syncobjs; ++i) { - uint64_t address = in_syncobjs_addr + i * syncobj_stride; - - if (copy_from_user(&syncobj_desc, - u64_to_user_ptr(address), - min(syncobj_stride, sizeof(syncobj_desc)))) { - ret = -EFAULT; - break; - } - - if (syncobj_desc.point && - !drm_core_check_feature(submit->dev, DRIVER_SYNCOBJ_TIMELINE)) { - ret = SUBMIT_ERROR(EOPNOTSUPP, submit, "syncobj timeline unsupported"); - break; - } - - if (syncobj_desc.flags & ~MSM_SUBMIT_SYNCOBJ_FLAGS) { - ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj flags: %x", syncobj_desc.flags); - break; - } - - ret = drm_sched_job_add_syncobj_dependency(&submit->base, file, - syncobj_desc.handle, syncobj_desc.point); - if (ret) - break; - - if (syncobj_desc.flags & MSM_SUBMIT_SYNCOBJ_RESET) { - syncobjs[i] = - drm_syncobj_find(file, syncobj_desc.handle); - if (!syncobjs[i]) { - ret = SUBMIT_ERROR(EINVAL, submit, "invalid syncobj handle: %u", i); - break; - } - } - } - - if (ret) { - for (j = 0; j <= i; ++j) { - if (syncobjs[j]) - drm_syncobj_put(syncobjs[j]); - } - kfree(syncobjs); - return ERR_PTR(ret); - } - return syncobjs; -} - -static void msm_reset_syncobjs(struct drm_syncobj **syncobjs, - uint32_t nr_syncobjs) -{ - uint32_t i; - - for (i = 0; syncobjs && i < nr_syncobjs; ++i) { - if (syncobjs[i]) - drm_syncobj_replace_fence(syncobjs[i], NULL); - } -} - -static struct msm_submit_post_dep *msm_parse_post_deps(struct drm_device *dev, - struct drm_file *file, - uint64_t syncobjs_addr, - uint32_t nr_syncobjs, - size_t syncobj_stride) -{ - struct msm_submit_post_dep *post_deps; - struct drm_msm_gem_submit_syncobj syncobj_desc = {0}; - int ret = 0; - uint32_t i, j; - - post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps), - GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); - if (!post_deps) - return ERR_PTR(-ENOMEM); - - for (i = 0; i < nr_syncobjs; ++i) { - uint64_t address = syncobjs_addr + i * syncobj_stride; - - if (copy_from_user(&syncobj_desc, - u64_to_user_ptr(address), - min(syncobj_stride, sizeof(syncobj_desc)))) { - ret = -EFAULT; - break; - } - - post_deps[i].point = syncobj_desc.point; - - if (syncobj_desc.flags) { - ret = UERR(EINVAL, dev, "invalid syncobj flags"); - break; - } - - if (syncobj_desc.point) { - if (!drm_core_check_feature(dev, - DRIVER_SYNCOBJ_TIMELINE)) { - ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported"); - break; - } - - post_deps[i].chain = dma_fence_chain_alloc(); - if (!post_deps[i].chain) { - ret = -ENOMEM; - break; - } - } - - post_deps[i].syncobj = - drm_syncobj_find(file, syncobj_desc.handle); - if (!post_deps[i].syncobj) { - ret = UERR(EINVAL, dev, "invalid syncobj handle"); - break; - } - } - - if (ret) { - for (j = 0; j <= i; ++j) { - dma_fence_chain_free(post_deps[j].chain); - if (post_deps[j].syncobj) - 
drm_syncobj_put(post_deps[j].syncobj); - } - - kfree(post_deps); - return ERR_PTR(ret); - } - - return post_deps; -} - -static void msm_process_post_deps(struct msm_submit_post_dep *post_deps, - uint32_t count, struct dma_fence *fence) -{ - uint32_t i; - - for (i = 0; post_deps && i < count; ++i) { - if (post_deps[i].chain) { - drm_syncobj_add_point(post_deps[i].syncobj, - post_deps[i].chain, - fence, post_deps[i].point); - post_deps[i].chain = NULL; - } else { - drm_syncobj_replace_fence(post_deps[i].syncobj, - fence); - } - } -} - int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct drm_file *file) { @@ -663,7 +497,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct msm_gpu *gpu = priv->gpu; struct msm_gpu_submitqueue *queue; struct msm_ringbuffer *ring; - struct msm_submit_post_dep *post_deps = NULL; + struct msm_syncobj_post_dep *post_deps = NULL; struct drm_syncobj **syncobjs_to_reset = NULL; struct sync_file *sync_file = NULL; int out_fence_fd = -1; @@ -740,10 +574,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, } if (args->flags & MSM_SUBMIT_SYNCOBJ_IN) { - syncobjs_to_reset = msm_parse_deps(submit, file, - args->in_syncobjs, - args->nr_in_syncobjs, - args->syncobj_stride); + syncobjs_to_reset = msm_syncobj_parse_deps(dev, &submit->base, + file, args->in_syncobjs, + args->nr_in_syncobjs, + args->syncobj_stride); if (IS_ERR(syncobjs_to_reset)) { ret = PTR_ERR(syncobjs_to_reset); goto out_unlock; @@ -751,10 +585,10 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, } if (args->flags & MSM_SUBMIT_SYNCOBJ_OUT) { - post_deps = msm_parse_post_deps(dev, file, - args->out_syncobjs, - args->nr_out_syncobjs, - args->syncobj_stride); + post_deps = msm_syncobj_parse_post_deps(dev, file, + args->out_syncobjs, + args->nr_out_syncobjs, + args->syncobj_stride); if (IS_ERR(post_deps)) { ret = PTR_ERR(post_deps); goto out_unlock; @@ -897,10 +731,8 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, args->fence = submit->fence_id; queue->last_fence = submit->fence_id; - msm_reset_syncobjs(syncobjs_to_reset, args->nr_in_syncobjs); - msm_process_post_deps(post_deps, args->nr_out_syncobjs, - submit->user_fence); - + msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs); + msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, submit->user_fence); out: submit_cleanup(submit, !!ret); diff --git a/drivers/gpu/drm/msm/msm_syncobj.c b/drivers/gpu/drm/msm/msm_syncobj.c new file mode 100644 index 000000000000..4baa9f522c54 --- /dev/null +++ b/drivers/gpu/drm/msm/msm_syncobj.c @@ -0,0 +1,172 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Google, Inc */ + +#include "drm/drm_drv.h" + +#include "msm_drv.h" +#include "msm_syncobj.h" + +struct drm_syncobj ** +msm_syncobj_parse_deps(struct drm_device *dev, + struct drm_sched_job *job, + struct drm_file *file, + uint64_t in_syncobjs_addr, + uint32_t nr_in_syncobjs, + size_t syncobj_stride) +{ + struct drm_syncobj **syncobjs = NULL; + struct drm_msm_syncobj syncobj_desc = {0}; + int ret = 0; + uint32_t i, j; + + syncobjs = kcalloc(nr_in_syncobjs, sizeof(*syncobjs), + GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); + if (!syncobjs) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < nr_in_syncobjs; ++i) { + uint64_t address = in_syncobjs_addr + i * syncobj_stride; + + if (copy_from_user(&syncobj_desc, + u64_to_user_ptr(address), + min(syncobj_stride, sizeof(syncobj_desc)))) { + ret = -EFAULT; + break; + } + + if (syncobj_desc.point && + 
!drm_core_check_feature(dev, DRIVER_SYNCOBJ_TIMELINE)) { + ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported"); + break; + } + + if (syncobj_desc.flags & ~MSM_SYNCOBJ_FLAGS) { + ret = UERR(EINVAL, dev, "invalid syncobj flags: %x", syncobj_desc.flags); + break; + } + + ret = drm_sched_job_add_syncobj_dependency(job, file, + syncobj_desc.handle, + syncobj_desc.point); + if (ret) + break; + + if (syncobj_desc.flags & MSM_SYNCOBJ_RESET) { + syncobjs[i] = drm_syncobj_find(file, syncobj_desc.handle); + if (!syncobjs[i]) { + ret = UERR(EINVAL, dev, "invalid syncobj handle: %u", i); + break; + } + } + } + + if (ret) { + for (j = 0; j <= i; ++j) { + if (syncobjs[j]) + drm_syncobj_put(syncobjs[j]); + } + kfree(syncobjs); + return ERR_PTR(ret); + } + return syncobjs; +} + +void +msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs) +{ + uint32_t i; + + for (i = 0; syncobjs && i < nr_syncobjs; ++i) { + if (syncobjs[i]) + drm_syncobj_replace_fence(syncobjs[i], NULL); + } +} + +struct msm_syncobj_post_dep * +msm_syncobj_parse_post_deps(struct drm_device *dev, + struct drm_file *file, + uint64_t syncobjs_addr, + uint32_t nr_syncobjs, + size_t syncobj_stride) +{ + struct msm_syncobj_post_dep *post_deps; + struct drm_msm_syncobj syncobj_desc = {0}; + int ret = 0; + uint32_t i, j; + + post_deps = kcalloc(nr_syncobjs, sizeof(*post_deps), + GFP_KERNEL | __GFP_NOWARN | __GFP_NORETRY); + if (!post_deps) + return ERR_PTR(-ENOMEM); + + for (i = 0; i < nr_syncobjs; ++i) { + uint64_t address = syncobjs_addr + i * syncobj_stride; + + if (copy_from_user(&syncobj_desc, + u64_to_user_ptr(address), + min(syncobj_stride, sizeof(syncobj_desc)))) { + ret = -EFAULT; + break; + } + + post_deps[i].point = syncobj_desc.point; + + if (syncobj_desc.flags) { + ret = UERR(EINVAL, dev, "invalid syncobj flags"); + break; + } + + if (syncobj_desc.point) { + if (!drm_core_check_feature(dev, + DRIVER_SYNCOBJ_TIMELINE)) { + ret = UERR(EOPNOTSUPP, dev, "syncobj timeline unsupported"); + break; + } + + post_deps[i].chain = dma_fence_chain_alloc(); + if (!post_deps[i].chain) { + ret = -ENOMEM; + break; + } + } + + post_deps[i].syncobj = + drm_syncobj_find(file, syncobj_desc.handle); + if (!post_deps[i].syncobj) { + ret = UERR(EINVAL, dev, "invalid syncobj handle"); + break; + } + } + + if (ret) { + for (j = 0; j <= i; ++j) { + dma_fence_chain_free(post_deps[j].chain); + if (post_deps[j].syncobj) + drm_syncobj_put(post_deps[j].syncobj); + } + + kfree(post_deps); + return ERR_PTR(ret); + } + + return post_deps; +} + +void +msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps, + uint32_t count, struct dma_fence *fence) +{ + uint32_t i; + + for (i = 0; post_deps && i < count; ++i) { + if (post_deps[i].chain) { + drm_syncobj_add_point(post_deps[i].syncobj, + post_deps[i].chain, + fence, post_deps[i].point); + post_deps[i].chain = NULL; + } else { + drm_syncobj_replace_fence(post_deps[i].syncobj, + fence); + } + } +} diff --git a/drivers/gpu/drm/msm/msm_syncobj.h b/drivers/gpu/drm/msm/msm_syncobj.h new file mode 100644 index 000000000000..bcaa15d01da0 --- /dev/null +++ b/drivers/gpu/drm/msm/msm_syncobj.h @@ -0,0 +1,37 @@ +/* SPDX-License-Identifier: GPL-2.0-only */ +/* Copyright (C) 2020 Google, Inc */ + +#ifndef __MSM_GEM_SYNCOBJ_H__ +#define __MSM_GEM_SYNCOBJ_H__ + +#include "drm/drm_device.h" +#include "drm/drm_syncobj.h" +#include "drm/gpu_scheduler.h" + +struct msm_syncobj_post_dep { + struct drm_syncobj *syncobj; + uint64_t point; + struct dma_fence_chain *chain; +}; + +struct drm_syncobj ** 
+msm_syncobj_parse_deps(struct drm_device *dev, + struct drm_sched_job *job, + struct drm_file *file, + uint64_t in_syncobjs_addr, + uint32_t nr_in_syncobjs, + size_t syncobj_stride); + +void msm_syncobj_reset(struct drm_syncobj **syncobjs, uint32_t nr_syncobjs); + +struct msm_syncobj_post_dep * +msm_syncobj_parse_post_deps(struct drm_device *dev, + struct drm_file *file, + uint64_t syncobjs_addr, + uint32_t nr_syncobjs, + size_t syncobj_stride); + +void msm_syncobj_process_post_deps(struct msm_syncobj_post_dep *post_deps, + uint32_t count, struct dma_fence *fence); + +#endif /* __MSM_GEM_SYNCOBJ_H__ */ diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 1bccc347945c..2c2fc4b284d0 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -220,6 +220,17 @@ struct drm_msm_gem_cpu_fini { * Cmdstream Submission: */ +#define MSM_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */ +#define MSM_SYNCOBJ_FLAGS ( \ + MSM_SYNCOBJ_RESET | \ + 0) + +struct drm_msm_syncobj { + __u32 handle; /* in, syncobj handle. */ + __u32 flags; /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */ + __u64 point; /* in, timepoint for timeline syncobjs. */ +}; + /* The value written into the cmdstream is logically: * * ((relocbuf->gpuaddr + reloc_offset) << shift) | or @@ -309,17 +320,6 @@ struct drm_msm_gem_submit_bo { MSM_SUBMIT_FENCE_SN_IN | \ 0) -#define MSM_SUBMIT_SYNCOBJ_RESET 0x00000001 /* Reset syncobj after wait. */ -#define MSM_SUBMIT_SYNCOBJ_FLAGS ( \ - MSM_SUBMIT_SYNCOBJ_RESET | \ - 0) - -struct drm_msm_gem_submit_syncobj { - __u32 handle; /* in, syncobj handle. */ - __u32 flags; /* in, from MSM_SUBMIT_SYNCOBJ_FLAGS */ - __u64 point; /* in, timepoint for timeline syncobjs. */ -}; - /* Each cmdstream submit consists of a table of buffers involved, and * one or more cmdstream buffers. This allows for conditional execution * (context-restore), and IB buffers needed for per tile/bin draw cmds. @@ -333,8 +333,8 @@ struct drm_msm_gem_submit { __u64 cmds; /* in, ptr to array of submit_cmd's */ __s32 fence_fd; /* in/out fence fd (see MSM_SUBMIT_FENCE_FD_IN/OUT) */ __u32 queueid; /* in, submitqueue id */ - __u64 in_syncobjs; /* in, ptr to array of drm_msm_gem_submit_syncobj */ - __u64 out_syncobjs; /* in, ptr to array of drm_msm_gem_submit_syncobj */ + __u64 in_syncobjs; /* in, ptr to array of drm_msm_syncobj */ + __u64 out_syncobjs; /* in, ptr to array of drm_msm_syncobj */ __u32 nr_in_syncobjs; /* in, number of entries in in_syncobj */ __u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */ __u32 syncobj_stride; /* in, stride of syncobj arrays. 
*/
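
With the rename, userspace fills the same descriptor for both the in-
and out-fence arrays. A sketch of how a submit might populate them
using the new names (userspace-side, hypothetical helper, assuming the
kernel uapi header is on the include path; error handling omitted):

	#include <stdint.h>
	#include <string.h>
	#include <drm/msm_drm.h>

	static void fill_syncobjs(struct drm_msm_syncobj *in,
				  struct drm_msm_syncobj *out,
				  uint32_t in_handle,
				  uint32_t out_handle, uint64_t out_point)
	{
		memset(in, 0, sizeof(*in));
		in->handle = in_handle;
		in->flags  = MSM_SYNCOBJ_RESET;	/* reset after the wait */

		memset(out, 0, sizeof(*out));
		out->handle = out_handle;
		out->point  = out_point;	/* 0 for a binary syncobj */
	}

The arrays are then referenced from drm_msm_gem_submit's in_syncobjs /
out_syncobjs pointers, with syncobj_stride = sizeof(struct drm_msm_syncobj).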
From patchwork Mon May 19 17:57:27 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891126
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 30/40] drm/msm: Use DMA_RESV_USAGE_BOOKKEEP/KERNEL
Date: Mon, 19 May 2025 10:57:27 -0700
Message-ID: <20250519175755.13037-18-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Any place we wait for a BO to become idle, we should use BOOKKEEP usage,
to ensure that it waits for _any_ activity.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c          | 6 +++---
 drivers/gpu/drm/msm/msm_gem_shrinker.c | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index 65ec99526f82..cf509ca42da0 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -76,8 +76,8 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file)
 	 * TODO we might need to kick this to a queue to avoid blocking
 	 * in CLOSE ioctl
 	 */
-	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_READ, false,
-			      msecs_to_jiffies(1000));
+	dma_resv_wait_timeout(obj->resv, DMA_RESV_USAGE_BOOKKEEP, false,
+			      MAX_SCHEDULE_TIMEOUT);
 
 	msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm);
 	put_iova_spaces(obj, ctx->vm, true);
@@ -879,7 +879,7 @@ bool msm_gem_active(struct drm_gem_object *obj)
 	if (to_msm_bo(obj)->pin_count)
 		return true;
 
-	return !dma_resv_test_signaled(obj->resv, dma_resv_usage_rw(true));
+	return !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_BOOKKEEP);
 }
 
 int msm_gem_cpu_prep(struct drm_gem_object *obj, uint32_t op, ktime_t *timeout)
diff --git a/drivers/gpu/drm/msm/msm_gem_shrinker.c b/drivers/gpu/drm/msm/msm_gem_shrinker.c
index 5faf6227584a..1039e3c0a47b 100644
--- a/drivers/gpu/drm/msm/msm_gem_shrinker.c
+++ b/drivers/gpu/drm/msm/msm_gem_shrinker.c
@@ -139,7 +139,7 @@ evict(struct drm_gem_object *obj, struct ww_acquire_ctx *ticket)
 static bool
 wait_for_idle(struct drm_gem_object *obj)
 {
-	enum dma_resv_usage usage = dma_resv_usage_rw(true);
+	enum dma_resv_usage usage = DMA_RESV_USAGE_BOOKKEEP;
 	return dma_resv_wait_timeout(obj->resv, usage, false, 10) > 0;
 }
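
The dma_resv usage classes nest (KERNEL < WRITE < READ < BOOKKEEP), and
a query at a given class also covers all more-important classes, so
BOOKKEEP is the one that sees every fence on the object, including
fences added for bookkeeping-only work such as async VM_BIND jobs,
which dma_resv_usage_rw() would miss. A minimal kernel-side sketch of
an idle check in that style (hypothetical helper):

	#include <linux/dma-resv.h>

	/* True only once *all* activity on the object has signalled,
	 * not just implicit-sync readers and writers.
	 */
	static bool bo_fully_idle(struct dma_resv *resv)
	{
		return dma_resv_test_signaled(resv, DMA_RESV_USAGE_BOOKKEEP);
	}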
From patchwork Mon May 19 17:57:28 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892056
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul, Marijn Suijten, David Airlie, Simona Vetter, Konrad Dybcio, Maarten Lankhorst, Maxime Ripard, Thomas Zimmermann, Sumit Semwal, Christian König, linux-kernel@vger.kernel.org (open list), linux-media@vger.kernel.org (open list:DMA BUFFER SHARING FRAMEWORK), linaro-mm-sig@lists.linaro.org (moderated list:DMA BUFFER SHARING FRAMEWORK)
Subject: [PATCH v5 31/40] drm/msm: Add VM_BIND submitqueue
Date: Mon, 19 May 2025 10:57:28 -0700
Message-ID: <20250519175755.13037-19-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

This submitqueue type isn't tied to a hw ringbuffer, but instead
executes on the CPU for performing async VM_BIND ops.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h         | 12 +++++
 drivers/gpu/drm/msm/msm_gem_submit.c  | 60 +++++++++++++++++++---
 drivers/gpu/drm/msm/msm_gem_vma.c     | 71 +++++++++++++++++++++++++++
 drivers/gpu/drm/msm/msm_gpu.h         |  3 ++
 drivers/gpu/drm/msm/msm_submitqueue.c | 67 +++++++++++++++++++------
 include/uapi/drm/msm_drm.h            |  9 +++-
 6 files changed, 197 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index f7b85084e228..c1581bd4b5fd 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -53,6 +53,13 @@ struct msm_gem_vm {
 	/** @base: Inherit from drm_gpuvm. */
 	struct drm_gpuvm base;
 
+	/**
+	 * @sched: Scheduler used for asynchronous VM_BIND request.
+	 *
+	 * Unused for kernel managed VMs (where all operations are synchronous).
+	 */
+	struct drm_gpu_scheduler sched;
+
 	/**
 	 * @mm: Memory management for kernel managed VA allocations
 	 *
@@ -71,6 +78,9 @@ struct msm_gem_vm {
 	 */
 	struct pid *pid;
 
+	/** @last_fence: Fence for last pending work scheduled on the VM */
+	struct dma_fence *last_fence;
+
 	/** @faults: the number of GPU hangs associated with this address space */
 	int faults;
 
@@ -100,6 +110,8 @@ struct drm_gpuvm *
 msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name,
 		  u64 va_start, u64 va_size, bool managed);
 
+void msm_gem_vm_close(struct drm_gpuvm *gpuvm);
+
 struct msm_fence_context;
 
 #define MSM_VMA_DUMP (DRM_GPUVA_USERBITS << 0)
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index bfb8c5ac1f1e..053e6c65780f 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -4,6 +4,7 @@
  * Author: Rob Clark
  */
 
+#include <linux/dma-fence-unwrap.h>
 #include <linux/file.h>
 #include <linux/sync_file.h>
 #include <linux/uaccess.h>
@@ -258,30 +259,43 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit,
 static int submit_lock_objects(struct msm_gem_submit *submit)
 {
 	unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT;
+	struct drm_exec *exec = &submit->exec;
 	int ret;
 
-// TODO need to add vm_bind path which locks vm resv + external objs
 	drm_exec_init(&submit->exec, flags, submit->nr_bos);
 
+	if (msm_context_is_vmbind(submit->queue->ctx)) {
+		drm_exec_until_all_locked (&submit->exec) {
+			ret = drm_gpuvm_prepare_vm(submit->vm, exec, 1);
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+
+			ret = drm_gpuvm_prepare_objects(submit->vm, exec, 1);
+			drm_exec_retry_on_contention(exec);
+			if (ret)
+				return ret;
+		}
+
+		return 0;
+	}
+
 	drm_exec_until_all_locked (&submit->exec) {
 		ret = drm_exec_lock_obj(&submit->exec,
 					drm_gpuvm_resv_obj(submit->vm));
 		drm_exec_retry_on_contention(&submit->exec);
 		if (ret)
-			goto error;
+			return ret;
 		for (unsigned i = 0; i < submit->nr_bos; i++) {
 			struct drm_gem_object *obj = submit->bos[i].obj;
 			ret = drm_exec_prepare_obj(&submit->exec, obj, 1);
 			drm_exec_retry_on_contention(&submit->exec);
 			if (ret)
-				goto error;
+				return ret;
 		}
 	}
 
 	return 0;
-
-error:
-	return ret;
 }
 
 static int submit_fence_sync(struct
msm_gem_submit *submit) @@ -366,9 +380,18 @@ static void submit_unpin_objects(struct msm_gem_submit *submit) static void submit_attach_object_fences(struct msm_gem_submit *submit) { - int i; + struct msm_gem_vm *vm = to_msm_vm(submit->vm); + struct dma_fence *last_fence; + + if (msm_context_is_vmbind(submit->queue->ctx)) { + drm_gpuvm_resv_add_fence(submit->vm, &submit->exec, + submit->user_fence, + DMA_RESV_USAGE_BOOKKEEP, + DMA_RESV_USAGE_BOOKKEEP); + return; + } - for (i = 0; i < submit->nr_bos; i++) { + for (unsigned i = 0; i < submit->nr_bos; i++) { struct drm_gem_object *obj = submit->bos[i].obj; if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE) @@ -378,6 +401,10 @@ static void submit_attach_object_fences(struct msm_gem_submit *submit) dma_resv_add_fence(obj->resv, submit->user_fence, DMA_RESV_USAGE_READ); } + + last_fence = vm->last_fence; + vm->last_fence = dma_fence_unwrap_merge(submit->user_fence, last_fence); + dma_fence_put(last_fence); } static int submit_bo(struct msm_gem_submit *submit, uint32_t idx, @@ -532,6 +559,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (!queue) return -ENOENT; + if (queue->flags & MSM_SUBMITQUEUE_VM_BIND) { + ret = UERR(EINVAL, dev, "Invalid queue type"); + goto out_post_unlock; + } + ring = gpu->rb[queue->ring_nr]; if (args->flags & MSM_SUBMIT_FENCE_FD_OUT) { @@ -721,6 +753,18 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, submit_attach_object_fences(submit); + if (msm_context_is_vmbind(ctx)) { + /* + * If we are not using VM_BIND, submit_pin_vmas() will validate + * just the BOs attached to the submit. In that case we don't + * need to validate the _entire_ vm, because userspace tracked + * what BOs are associated with the submit. + */ + ret = drm_gpuvm_validate(submit->vm, &submit->exec); + if (ret) + goto out; + } + /* The scheduler owns a ref now: */ msm_gem_submit_get(submit); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 72667316df51..73baa9451ada 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -16,6 +16,7 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) drm_mm_takedown(&vm->mm); if (vm->mmu) vm->mmu->funcs->destroy(vm->mmu); + dma_fence_put(vm->last_fence); put_pid(vm->pid); kfree(vm); } @@ -154,6 +155,9 @@ static const struct drm_gpuvm_ops msm_gpuvm_ops = { .vm_free = msm_gem_vm_free, }; +static const struct drm_sched_backend_ops msm_vm_bind_ops = { +}; + /** * msm_gem_vm_create() - Create and initialize a &msm_gem_vm * @drm: the drm device @@ -196,6 +200,21 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, goto err_free_vm; } + if (!managed) { + struct drm_sched_init_args args = { + .ops = &msm_vm_bind_ops, + .num_rqs = 1, + .credit_limit = 1, + .timeout = MAX_SCHEDULE_TIMEOUT, + .name = "msm-vm-bind", + .dev = drm->dev, + }; + + ret = drm_sched_init(&vm->sched, &args); + if (ret) + goto err_free_dummy; + } + drm_gpuvm_init(&vm->base, name, flags, drm, dummy_gem, va_start, va_size, 0, 0, &msm_gpuvm_ops); drm_gem_object_put(dummy_gem); @@ -207,8 +226,60 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, return &vm->base; +err_free_dummy: + drm_gem_object_put(dummy_gem); + err_free_vm: kfree(vm); return ERR_PTR(ret); } + +/** + * msm_gem_vm_close() - Close a VM + * @gpuvm: The VM to close + * + * Called when the drm device file is closed, to tear down VM related resources + * (which will drop refcounts to GEM objects that were still mapped into the + * VM at 
the time). + */ +void +msm_gem_vm_close(struct drm_gpuvm *gpuvm) +{ + struct msm_gem_vm *vm = to_msm_vm(gpuvm); + struct drm_gpuva *vma, *tmp; + + /* + * For kernel managed VMs, the VMAs are torn down when the handle is + * closed, so nothing more to do. + */ + if (vm->managed) + return; + + if (vm->last_fence) + dma_fence_wait(vm->last_fence, false); + + /* Kill the scheduler now, so we aren't racing with it for cleanup: */ + drm_sched_stop(&vm->sched, NULL); + drm_sched_fini(&vm->sched); + + /* Tear down any remaining mappings: */ + dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL); + drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { + struct drm_gem_object *obj = vma->gem.obj; + + if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { + drm_gem_object_get(obj); + msm_gem_lock(obj); + } + + msm_gem_vma_unmap(vma); + msm_gem_vma_close(vma); + + if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { + msm_gem_unlock(obj); + drm_gem_object_put(obj); + } + } + dma_resv_unlock(drm_gpuvm_resv(gpuvm)); +} diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 448ebf721bd8..9cbf155ff222 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -570,6 +570,9 @@ struct msm_gpu_submitqueue { struct mutex lock; struct kref ref; struct drm_sched_entity *entity; + + /** @_vm_bind_entity: used for @entity pointer for VM_BIND queues */ + struct drm_sched_entity _vm_bind_entity[0]; }; struct msm_gpu_state_bo { diff --git a/drivers/gpu/drm/msm/msm_submitqueue.c b/drivers/gpu/drm/msm/msm_submitqueue.c index 8ced49c7557b..8617a82cd6b3 100644 --- a/drivers/gpu/drm/msm/msm_submitqueue.c +++ b/drivers/gpu/drm/msm/msm_submitqueue.c @@ -72,6 +72,9 @@ void msm_submitqueue_destroy(struct kref *kref) idr_destroy(&queue->fence_idr); + if (queue->entity == &queue->_vm_bind_entity[0]) + drm_sched_entity_destroy(queue->entity); + msm_context_put(queue->ctx); kfree(queue); @@ -102,7 +105,7 @@ struct msm_gpu_submitqueue *msm_submitqueue_get(struct msm_context *ctx, void msm_submitqueue_close(struct msm_context *ctx) { - struct msm_gpu_submitqueue *entry, *tmp; + struct msm_gpu_submitqueue *queue, *tmp; if (!ctx) return; @@ -111,10 +114,17 @@ void msm_submitqueue_close(struct msm_context *ctx) * No lock needed in close and there won't * be any more user ioctls coming our way */ - list_for_each_entry_safe(entry, tmp, &ctx->submitqueues, node) { - list_del(&entry->node); - msm_submitqueue_put(entry); + list_for_each_entry_safe(queue, tmp, &ctx->submitqueues, node) { + if (queue->entity == &queue->_vm_bind_entity[0]) + drm_sched_entity_flush(queue->entity, MAX_WAIT_SCHED_ENTITY_Q_EMPTY); + list_del(&queue->node); + msm_submitqueue_put(queue); } + + if (!ctx->vm) + return; + + msm_gem_vm_close(ctx->vm); } static struct drm_sched_entity * @@ -160,8 +170,6 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, struct msm_drm_private *priv = drm->dev_private; struct msm_gpu_submitqueue *queue; enum drm_sched_priority sched_prio; - extern int enable_preemption; - bool preemption_supported; unsigned ring_nr; int ret; @@ -171,26 +179,53 @@ int msm_submitqueue_create(struct drm_device *drm, struct msm_context *ctx, if (!priv->gpu) return -ENODEV; - preemption_supported = priv->gpu->nr_rings == 1 && enable_preemption != 0; + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + unsigned sz; - if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) - return -EINVAL; + /* Not allowed for kernel managed VMs (ie. 
kernel allocs VA) */ + if (!msm_context_is_vmbind(ctx)) + return -EINVAL; - ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); - if (ret) - return ret; + if (prio) + return -EINVAL; + + sz = struct_size(queue, _vm_bind_entity, 1); + queue = kzalloc(sz, GFP_KERNEL); + } else { + extern int enable_preemption; + bool preemption_supported = + priv->gpu->nr_rings == 1 && enable_preemption != 0; + + if (flags & MSM_SUBMITQUEUE_ALLOW_PREEMPT && preemption_supported) + return -EINVAL; - queue = kzalloc(sizeof(*queue), GFP_KERNEL); + ret = msm_gpu_convert_priority(priv->gpu, prio, &ring_nr, &sched_prio); + if (ret) + return ret; + + queue = kzalloc(sizeof(*queue), GFP_KERNEL); + } if (!queue) return -ENOMEM; kref_init(&queue->ref); queue->flags = flags; - queue->ring_nr = ring_nr; - queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], - ring_nr, sched_prio); + if (flags & MSM_SUBMITQUEUE_VM_BIND) { + struct drm_gpu_scheduler *sched = &to_msm_vm(msm_context_vm(drm, ctx))->sched; + + queue->entity = &queue->_vm_bind_entity[0]; + + drm_sched_entity_init(queue->entity, DRM_SCHED_PRIORITY_KERNEL, + &sched, 1, NULL); + } else { + queue->ring_nr = ring_nr; + + queue->entity = get_sched_entity(ctx, priv->gpu->rb[ring_nr], + ring_nr, sched_prio); + } + if (IS_ERR(queue->entity)) { ret = PTR_ERR(queue->entity); kfree(queue); diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 2c2fc4b284d0..6d6cd1219926 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -385,12 +385,19 @@ struct drm_msm_gem_madvise { /* * Draw queues allow the user to set specific submission parameter. Command * submissions specify a specific submitqueue to use. ID 0 is reserved for - * backwards compatibility as a "default" submitqueue + * backwards compatibility as a "default" submitqueue. + * + * Because VM_BIND async updates happen on the CPU, they must run on a + * virtual queue created with the flag MSM_SUBMITQUEUE_VM_BIND. If we had + * a way to do pgtable updates on the GPU, we could drop this restriction. 
*/ #define MSM_SUBMITQUEUE_ALLOW_PREEMPT 0x00000001 +#define MSM_SUBMITQUEUE_VM_BIND 0x00000002 /* virtual queue for VM_BIND ops */ + #define MSM_SUBMITQUEUE_FLAGS ( \ MSM_SUBMITQUEUE_ALLOW_PREEMPT | \ + MSM_SUBMITQUEUE_VM_BIND | \ 0) /*

From patchwork Mon May 19 17:57:29 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891125
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v5 32/40] drm/msm: Support IO_PGTABLE_QUIRK_NO_WARN_ON
Date: Mon, 19 May 2025 10:57:29 -0700
Message-ID: <20250519175755.13037-20-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

With user managed VMs and multiple queues, it is in theory possible to trigger map/unmap errors. These will (in a later patch) mark the VM as unusable. But we want to tell the io-pgtable helpers not to spam the log. In addition, in the unmap path, we don't want to bail early from the unmap, to ensure we don't leave some dangling pages mapped.
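The resulting unmap loop, reduced to a standalone sketch (an illustration of the hunk below, not the driver code itself: it assumes a fixed single-page granule instead of the calc_pgsize() logic the real loop uses, unmap_range_sketch() is a hypothetical name, and the signed return value mirrors this series' handling of unmap_pages() failures):

	#include <linux/io-pgtable.h>

	static int unmap_range_sketch(struct io_pgtable_ops *ops, u64 iova, size_t size)
	{
		int ret = 0;

		while (size) {
			/* <= 0 means this page could not be unmapped */
			ssize_t unmapped = ops->unmap_pages(ops, iova, PAGE_SIZE, 1, NULL);

			if (unmapped <= 0) {
				/* Remember that something failed... */
				ret = -EINVAL;
				/* ...but keep walking, so no mapped pages are left dangling */
				unmapped = PAGE_SIZE;
			}

			iova += unmapped;
			size -= unmapped;
		}

		return ret;
	}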
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/adreno/a6xx_gpu.c | 2 +- drivers/gpu/drm/msm/msm_iommu.c | 23 ++++++++++++++++++----- drivers/gpu/drm/msm/msm_mmu.h | 2 +- 3 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c index f0e37733c65d..83fba02ca1df 100644 --- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c +++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c @@ -2267,7 +2267,7 @@ a6xx_create_private_vm(struct msm_gpu *gpu, bool kernel_managed) { struct msm_mmu *mmu; - mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu); + mmu = msm_iommu_pagetable_create(to_msm_vm(gpu->vm)->mmu, kernel_managed); if (IS_ERR(mmu)) return ERR_CAST(mmu);

diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 756bd55ee94f..237d298d0eeb 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -94,15 +94,24 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, { struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); struct io_pgtable_ops *ops = pagetable->pgtbl_ops; + int ret = 0; while (size) { - size_t unmapped, pgsize, count; + size_t pgsize, count; + ssize_t unmapped; pgsize = calc_pgsize(pagetable, iova, iova, size, &count); unmapped = ops->unmap_pages(ops, iova, pgsize, count, NULL); - if (!unmapped) - break; + if (unmapped <= 0) { + ret = -EINVAL; + /* + * Continue attempting to unmap the remainder of the + * range, so we don't end up with some dangling + * mapped pages + */ + unmapped = PAGE_SIZE; + } iova += unmapped; size -= unmapped; @@ -110,7 +119,7 @@ static int msm_iommu_pagetable_unmap(struct msm_mmu *mmu, u64 iova, iommu_flush_iotlb_all(to_msm_iommu(pagetable->parent)->domain); - return (size == 0) ?
0 : -EINVAL; + return ret; } static int msm_iommu_pagetable_map_prr(struct msm_mmu *mmu, u64 iova, size_t len, int prot) @@ -324,7 +333,7 @@ static const struct iommu_flush_ops tlb_ops = { static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags, void *arg); -struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed) { struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev); struct msm_iommu *iommu = to_msm_iommu(parent); @@ -358,6 +367,10 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent) ttbr0_cfg.quirks &= ~IO_PGTABLE_QUIRK_ARM_TTBR1; ttbr0_cfg.tlb = &tlb_ops; + if (!kernel_managed) { + ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN_ON; + } + pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &ttbr0_cfg, pagetable);

diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index c874852b7331..c70c71fb1a4a 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -52,7 +52,7 @@ static inline void msm_mmu_set_fault_handler(struct msm_mmu *mmu, void *arg, mmu->handler = handler; } -struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent); +struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed); int msm_iommu_pagetable_params(struct msm_mmu *mmu, phys_addr_t *ttbr, int *asid);

From patchwork Mon May 19 17:57:30 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892055
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v5 33/40] drm/msm: Support pgtable preallocation
Date: Mon, 19 May 2025 10:57:30 -0700
Message-ID: <20250519175755.13037-21-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Introduce a mechanism to count the worst case # of pages required in a VM_BIND op. Note that previously we would have had to somehow account for allocations in unmap, when splitting a block.
This behavior was removed in commit 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on unmap behavior").

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.h | 1 + drivers/gpu/drm/msm/msm_iommu.c | 191 +++++++++++++++++++++++++++++++- drivers/gpu/drm/msm/msm_mmu.h | 34 ++++++ 3 files changed, 225 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index c1581bd4b5fd..8ad25927c604 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -7,6 +7,7 @@ #ifndef __MSM_GEM_H__ #define __MSM_GEM_H__ +#include "msm_mmu.h" #include #include "drm/drm_exec.h"

diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c index 237d298d0eeb..d04837461c3d 100644 --- a/drivers/gpu/drm/msm/msm_iommu.c +++ b/drivers/gpu/drm/msm/msm_iommu.c @@ -6,6 +6,7 @@ #include <linux/adreno-smmu-priv.h> #include <linux/io-pgtable.h> +#include <linux/kmemleak.h> #include "msm_drv.h" #include "msm_mmu.h" @@ -14,6 +15,8 @@ struct msm_iommu { struct iommu_domain *domain; atomic_t pagetables; struct page *prr_page; + + struct kmem_cache *pt_cache; }; #define to_msm_iommu(x) container_of(x, struct msm_iommu, base) @@ -27,6 +30,9 @@ struct msm_iommu_pagetable { unsigned long pgsize_bitmap; /* Bitmap of page sizes in use */ phys_addr_t ttbr; u32 asid; + + /** @root_page_table: Stores the root page table pointer. */ + void *root_page_table; }; static struct msm_iommu_pagetable *to_pagetable(struct msm_mmu *mmu) { @@ -282,7 +288,145 @@ msm_iommu_pagetable_walk(struct msm_mmu *mmu, unsigned long iova, uint64_t ptes[ return 0; } +static void +msm_iommu_pagetable_prealloc_count(struct msm_mmu *mmu, struct msm_mmu_prealloc *p, + uint64_t iova, size_t len) +{ + u64 pt_count; + + /* + * L1, L2 and L3 page tables. + * + * We could optimize L3 allocation by iterating over the sgt and merging + * 2M contiguous blocks, but it's simpler to over-provision and return + * the pages if they're not used. + * + * The first level descriptor (v8 / v7-lpae page table format) encodes + * 30 bits of address. The second level encodes 29. For the 3rd it is + * 39.
+ * + * https://developer.arm.com/documentation/ddi0406/c/System-Level-Architecture/Virtual-Memory-System-Architecture--VMSA-/Long-descriptor-translation-table-format/Long-descriptor-translation-table-format-descriptors?lang=en#BEIHEFFB + */ + pt_count = ((ALIGN(iova + len, 1ull << 39) - ALIGN_DOWN(iova, 1ull << 39)) >> 39) + + ((ALIGN(iova + len, 1ull << 30) - ALIGN_DOWN(iova, 1ull << 30)) >> 30) + + ((ALIGN(iova + len, 1ull << 21) - ALIGN_DOWN(iova, 1ull << 21)) >> 21); + + p->count += pt_count; +} + +static struct kmem_cache * +get_pt_cache(struct msm_mmu *mmu) +{ + struct msm_iommu_pagetable *pagetable = to_pagetable(mmu); + return to_msm_iommu(pagetable->parent)->pt_cache; +} + +static int +msm_iommu_pagetable_prealloc_allocate(struct msm_mmu *mmu, struct msm_mmu_prealloc *p) +{ + struct kmem_cache *pt_cache = get_pt_cache(mmu); + int ret; + + p->pages = kvmalloc_array(p->count, sizeof(p->pages), GFP_KERNEL); + if (!p->pages) + return -ENOMEM; + + ret = kmem_cache_alloc_bulk(pt_cache, GFP_KERNEL, p->count, p->pages); + if (ret != p->count) { + p->count = ret; + return -ENOMEM; + } + + return 0; +} + +static void +msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_prealloc *p) +{ + struct kmem_cache *pt_cache = get_pt_cache(mmu); + uint32_t remaining_pt_count = p->count - p->ptr; + + kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]); + kvfree(p->pages); +} + +/** + * alloc_pt() - Custom page table allocator + * @cookie: Cookie passed at page table allocation time. + * @size: Size of the page table. This size should be fixed, + * and determined at creation time based on the granule size. + * @gfp: GFP flags. + * + * We want a custom allocator so we can use a cache for page table + * allocations and amortize the cost of the over-reservation that's + * done to allow asynchronous VM operations. + * + * Return: non-NULL on success, NULL if the allocation failed for any + * reason. + */ +static void * +msm_iommu_pagetable_alloc_pt(void *cookie, size_t size, gfp_t gfp) +{ + struct msm_iommu_pagetable *pagetable = cookie; + struct msm_mmu_prealloc *p = pagetable->base.prealloc; + void *page; + + /* Allocation of the root page table happening during init. */ + if (unlikely(!pagetable->root_page_table)) { + struct page *p; + + p = alloc_pages_node(dev_to_node(pagetable->iommu_dev), + gfp | __GFP_ZERO, get_order(size)); + page = p ? page_address(p) : NULL; + pagetable->root_page_table = page; + return page; + } + + if (WARN_ON(!p) || WARN_ON(p->ptr >= p->count)) + return NULL; + + page = p->pages[p->ptr++]; + memset(page, 0, size); + + /* + * Page table entries don't use virtual addresses, which trips out + * kmemleak. kmemleak_alloc_phys() might work, but physical addresses + * are mixed with other fields, and I fear kmemleak won't detect that + * either. + * + * Let's just ignore memory passed to the page-table driver for now. + */ + kmemleak_ignore(page); + + return page; +} + + +/** + * free_pt() - Custom page table free function + * @cookie: Cookie passed at page table allocation time. + * @data: Page table to free. + * @size: Size of the page table. This size should be fixed, + * and determined at creation time based on the granule size. 
+ */ +static void +msm_iommu_pagetable_free_pt(void *cookie, void *data, size_t size) +{ + struct msm_iommu_pagetable *pagetable = cookie; + + if (unlikely(pagetable->root_page_table == data)) { + free_pages((unsigned long)data, get_order(size)); + pagetable->root_page_table = NULL; + return; + } + + kmem_cache_free(get_pt_cache(&pagetable->base), data); +} + static const struct msm_mmu_funcs pagetable_funcs = { + .prealloc_count = msm_iommu_pagetable_prealloc_count, + .prealloc_allocate = msm_iommu_pagetable_prealloc_allocate, + .prealloc_cleanup = msm_iommu_pagetable_prealloc_cleanup, .map = msm_iommu_pagetable_map, .unmap = msm_iommu_pagetable_unmap, .destroy = msm_iommu_pagetable_destroy, @@ -333,6 +477,17 @@ static const struct iommu_flush_ops tlb_ops = { static int msm_gpu_fault_handler(struct iommu_domain *domain, struct device *dev, unsigned long iova, int flags, void *arg); +static size_t get_tblsz(const struct io_pgtable_cfg *cfg) +{ + int pg_shift, bits_per_level; + + pg_shift = __ffs(cfg->pgsize_bitmap); + /* arm_lpae_iopte is u64: */ + bits_per_level = pg_shift - ilog2(sizeof(u64)); + + return sizeof(u64) << bits_per_level; +} + struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_managed) { struct adreno_smmu_priv *adreno_smmu = dev_get_drvdata(parent->dev); @@ -369,8 +524,34 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m if (!kernel_managed) { ttbr0_cfg.quirks |= IO_PGTABLE_QUIRK_NO_WARN_ON; + + /* + * With userspace managed VM (aka VM_BIND), we need to pre- + * allocate pages ahead of time for map/unmap operations, + * handing them to io-pgtable via custom alloc/free ops as + * needed: + */ + ttbr0_cfg.alloc = msm_iommu_pagetable_alloc_pt; + ttbr0_cfg.free = msm_iommu_pagetable_free_pt; + + /* + * Restrict to single page granules. Otherwise we may run + * into a situation where userspace wants to unmap/remap + * only a part of a larger block mapping, which is not + * possible without unmapping the entire block. Which in + * turn could cause faults if the GPU is accessing other + * parts of the block mapping. + * + * Note that prior to commit 33729a5fc0ca ("iommu/io-pgtable-arm: + * Remove split on unmap behavior") this was handled in + * io-pgtable-arm. But this apparently does not work + * correctly on SMMUv3.
+ */ + WARN_ON(!(ttbr0_cfg.pgsize_bitmap & PAGE_SIZE)); + ttbr0_cfg.pgsize_bitmap = PAGE_SIZE; } + pagetable->iommu_dev = ttbr1_cfg->iommu_dev; pagetable->pgtbl_ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &ttbr0_cfg, pagetable); @@ -414,7 +595,6 @@ struct msm_mmu *msm_iommu_pagetable_create(struct msm_mmu *parent, bool kernel_m /* Needed later for TLB flush */ pagetable->parent = parent; pagetable->tlb = ttbr1_cfg->tlb; - pagetable->iommu_dev = ttbr1_cfg->iommu_dev; pagetable->pgsize_bitmap = ttbr0_cfg.pgsize_bitmap; pagetable->ttbr = ttbr0_cfg.arm_lpae_s1_cfg.ttbr; @@ -522,6 +702,7 @@ static void msm_iommu_destroy(struct msm_mmu *mmu) { struct msm_iommu *iommu = to_msm_iommu(mmu); iommu_domain_free(iommu->domain); + kmem_cache_destroy(iommu->pt_cache); kfree(iommu); } @@ -596,6 +777,14 @@ struct msm_mmu *msm_iommu_gpu_new(struct device *dev, struct msm_gpu *gpu, unsig return mmu; iommu = to_msm_iommu(mmu); + if (adreno_smmu && adreno_smmu->cookie) { + const struct io_pgtable_cfg *cfg = + adreno_smmu->get_ttbr1_cfg(adreno_smmu->cookie); + size_t tblsz = get_tblsz(cfg); + + iommu->pt_cache = + kmem_cache_create("msm-mmu-pt", tblsz, tblsz, 0, NULL); + } iommu_set_fault_handler(iommu->domain, msm_gpu_fault_handler, iommu); /* Enable stall on iommu fault: */ diff --git a/drivers/gpu/drm/msm/msm_mmu.h b/drivers/gpu/drm/msm/msm_mmu.h index c70c71fb1a4a..76d7dcc1c977 100644 --- a/drivers/gpu/drm/msm/msm_mmu.h +++ b/drivers/gpu/drm/msm/msm_mmu.h @@ -9,8 +9,16 @@ #include +struct msm_mmu_prealloc; +struct msm_mmu; +struct msm_gpu; + struct msm_mmu_funcs { void (*detach)(struct msm_mmu *mmu); + void (*prealloc_count)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p, + uint64_t iova, size_t len); + int (*prealloc_allocate)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p); + void (*prealloc_cleanup)(struct msm_mmu *mmu, struct msm_mmu_prealloc *p); int (*map)(struct msm_mmu *mmu, uint64_t iova, struct sg_table *sgt, size_t off, size_t len, int prot); int (*unmap)(struct msm_mmu *mmu, uint64_t iova, size_t len); @@ -25,12 +33,38 @@ enum msm_mmu_type { MSM_MMU_IOMMU_PAGETABLE, }; +/** + * struct msm_mmu_prealloc - Tracking for pre-allocated pages for MMU updates. + */ +struct msm_mmu_prealloc { + /** @count: Number of pages reserved. */ + uint32_t count; + /** @ptr: Index of first unused page in @pages */ + uint32_t ptr; + /** + * @pages: Array of pages preallocated for MMU table updates. + * + * After a VM operation, there might be free pages remaining in this + * array (since the amount allocated is a worst-case). These are + * returned to the pt_cache at mmu->prealloc_cleanup(). + */ + void **pages; +}; + struct msm_mmu { const struct msm_mmu_funcs *funcs; struct device *dev; int (*handler)(void *arg, unsigned long iova, int flags, void *data); void *arg; enum msm_mmu_type type; + + /** + * @prealloc: pre-allocated pages for pgtable + * + * Set while a VM_BIND job is running, serialized under + * msm_gem_vm::mmu_lock. 
+ */ + struct msm_mmu_prealloc *prealloc; }; static inline void msm_mmu_init(struct msm_mmu *mmu, struct device *dev,

From patchwork Mon May 19 17:57:31 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891124
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v5 34/40] drm/msm: Split out map/unmap ops
Date: Mon, 19 May 2025 10:57:31 -0700
Message-ID: <20250519175755.13037-22-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

With async VM_BIND, the actual pgtable updates are deferred. Synchronously, a list of map/unmap ops is generated, and the pgtable changes are applied later. To support that, split out op handlers and change the existing non-VM_BIND paths to use them. Note in particular, the vma itself may already be destroyed/freed by the time an UNMAP op runs (or even a MAP op if there is a later queued UNMAP). For this reason, the op handlers cannot reference the vma pointer.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem_vma.c | 63 +++++++++++++++++++++++++++---- 1 file changed, 56 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index 73baa9451ada..a105aed82cae 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -8,6 +8,34 @@ #include "msm_gem.h" #include "msm_mmu.h" +#define vm_dbg(fmt, ...)
pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) + +/** + * struct msm_vm_map_op - create new pgtable mapping + */ +struct msm_vm_map_op { + /** @iova: start address for mapping */ + uint64_t iova; + /** @range: size of the region to map */ + uint64_t range; + /** @offset: offset into @sgt to map */ + uint64_t offset; + /** @sgt: pages to map, or NULL for a PRR mapping */ + struct sg_table *sgt; + /** @prot: the mapping protection flags */ + int prot; +}; + +/** + * struct msm_vm_unmap_op - unmap a range of pages from pgtable + */ +struct msm_vm_unmap_op { + /** @iova: start address for unmap */ + uint64_t iova; + /** @range: size of region to unmap */ + uint64_t range; +}; + static void msm_gem_vm_free(struct drm_gpuvm *gpuvm) { @@ -21,18 +49,36 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) kfree(vm); } +static void +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +{ + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + + vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); +} + +static int +vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) +{ + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + + return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, + op->range, op->prot); +} + /* Actually unmap memory for the vma */ void msm_gem_vma_unmap(struct drm_gpuva *vma) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); - struct msm_gem_vm *vm = to_msm_vm(vma->vm); - unsigned size = vma->va.range; /* Don't do anything if the memory isn't mapped */ if (!msm_vma->mapped) return; - vm->mmu->funcs->unmap(vm->mmu, vma->va.addr, size); + vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){ + .iova = vma->va.addr, + .range = vma->va.range, + }); msm_vma->mapped = false; } @@ -42,7 +88,6 @@ int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { struct msm_gem_vma *msm_vma = to_msm_vma(vma); - struct msm_gem_vm *vm = to_msm_vm(vma->vm); int ret; if (GEM_WARN_ON(!vma->va.addr)) @@ -62,9 +107,13 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret = vm->mmu->funcs->map(vm->mmu, vma->va.addr, sgt, - vma->gem.offset, vma->va.range, - prot); + ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){ + .iova = vma->va.addr, + .range = vma->va.range, + .offset = vma->gem.offset, + .sgt = sgt, + .prot = prot, + }); if (ret) { msm_vma->mapped = false; }

From patchwork Mon May 19 17:57:32 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892054
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Subject: [PATCH v5 35/40] drm/msm: Add VM_BIND ioctl
Date: Mon, 19 May 2025 10:57:32 -0700
Message-ID: <20250519175755.13037-23-robdclark@gmail.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Add a VM_BIND ioctl for binding/unbinding buffers into a VM. This is only supported if userspace has opted in to MSM_PARAM_EN_VM_BIND.
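For context, the userspace side of this opt-in looks roughly as follows (an illustrative sketch, not part of the patch: create_vm_bind_queue() is a hypothetical helper, MSM_PARAM_EN_VM_BIND is added earlier in this series, and error handling is reduced to pass-through):

	#include <stdint.h>
	#include <xf86drm.h>
	#include <drm/msm_drm.h>

	static int create_vm_bind_queue(int fd, uint32_t *queue_id)
	{
		struct drm_msm_param req = {
			.pipe  = MSM_PIPE_3D0,
			.param = MSM_PARAM_EN_VM_BIND,
			.value = 1,
		};
		struct drm_msm_submitqueue qreq = {
			.flags = MSM_SUBMITQUEUE_VM_BIND,
			.prio  = 0,	/* must be zero for VM_BIND queues */
		};
		int ret;

		/* Opt in to a userspace managed VM, before the VM is created: */
		ret = drmCommandWrite(fd, DRM_MSM_SET_PARAM, &req, sizeof(req));
		if (ret)
			return ret;

		/* Create the virtual queue that VM_BIND ops will be submitted to: */
		ret = drmCommandWriteRead(fd, DRM_MSM_SUBMITQUEUE_NEW, &qreq, sizeof(qreq));
		if (ret)
			return ret;

		*queue_id = qreq.id;
		return 0;
	}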
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 1 + drivers/gpu/drm/msm/msm_drv.h | 4 +- drivers/gpu/drm/msm/msm_gem.c | 40 +- drivers/gpu/drm/msm/msm_gem.h | 4 + drivers/gpu/drm/msm/msm_gem_submit.c | 22 +- drivers/gpu/drm/msm/msm_gem_vma.c | 1073 +++++++++++++++++++++++++- include/uapi/drm/msm_drm.h | 74 +- 7 files changed, 1185 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c index 89cb7820064f..bdf775897de8 100644 --- a/drivers/gpu/drm/msm/msm_drv.c +++ b/drivers/gpu/drm/msm/msm_drv.c @@ -791,6 +791,7 @@ static const struct drm_ioctl_desc msm_ioctls[] = { DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_NEW, msm_ioctl_submitqueue_new, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_CLOSE, msm_ioctl_submitqueue_close, DRM_RENDER_ALLOW), DRM_IOCTL_DEF_DRV(MSM_SUBMITQUEUE_QUERY, msm_ioctl_submitqueue_query, DRM_RENDER_ALLOW), + DRM_IOCTL_DEF_DRV(MSM_VM_BIND, msm_ioctl_vm_bind, DRM_RENDER_ALLOW), }; static void msm_show_fdinfo(struct drm_printer *p, struct drm_file *file)

diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h index b0add236cbb3..33240afc6365 100644 --- a/drivers/gpu/drm/msm/msm_drv.h +++ b/drivers/gpu/drm/msm/msm_drv.h @@ -232,7 +232,9 @@ struct drm_gpuvm *msm_kms_init_vm(struct drm_device *dev); bool msm_use_mmu(struct drm_device *dev); int msm_ioctl_gem_submit(struct drm_device *dev, void *data, - struct drm_file *file); + struct drm_file *file); +int msm_ioctl_vm_bind(struct drm_device *dev, void *data, + struct drm_file *file); #ifdef CONFIG_DEBUG_FS unsigned long msm_gem_shrinker_shrink(struct drm_device *dev, unsigned long nr_to_scan);

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index cf509ca42da0..040f0539baa5 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -233,8 +233,7 @@ static void put_pages(struct drm_gem_object *obj) } } -static struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, - unsigned madv) +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv) { struct msm_gem_object *msm_obj = to_msm_bo(obj); @@ -1036,18 +1035,37 @@ static void msm_gem_free_object(struct drm_gem_object *obj) /* * We need to lock any VMs the object is still attached to, but not * the object itself (see explaination in msm_gem_assert_locked()), - * so just open-code this special case: + * so just open-code this special case. + * + * Note that we skip the dance if we aren't attached to any VM. This + * is load bearing. The driver needs to support two usage models: + * + * 1. Legacy kernel managed VM: Userspace expects the VMAs to be + * implicitly torn down when the object is freed, the VMAs do + * not hold a hard reference to the BO. + * + * 2. VM_BIND, userspace managed VM: The VMA holds a reference to the + * BO. This can be dropped when the VM is closed and its associated + * VMAs are torn down. (See msm_gem_vm_close()). + * + * In the latter case the last reference to a BO can be dropped while + * we already have the VM locked. It would have already been removed + * from the gpuva list, but lockdep doesn't know that. Or understand + * the differences between the two usage models.
*/ - drm_exec_init(&exec, 0, 0); - drm_exec_until_all_locked (&exec) { - struct drm_gpuvm_bo *vm_bo; - drm_gem_for_each_gpuvm_bo (vm_bo, obj) { - drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(vm_bo->vm)); - drm_exec_retry_on_contention(&exec); + if (!list_empty(&obj->gpuva.list)) { + drm_exec_init(&exec, 0, 0); + drm_exec_until_all_locked (&exec) { + struct drm_gpuvm_bo *vm_bo; + drm_gem_for_each_gpuvm_bo (vm_bo, obj) { + drm_exec_lock_obj(&exec, + drm_gpuvm_resv_obj(vm_bo->vm)); + drm_exec_retry_on_contention(&exec); + } } + put_iova_spaces(obj, NULL, true); + drm_exec_fini(&exec); /* drop locks */ } - put_iova_spaces(obj, NULL, true); - drm_exec_fini(&exec); /* drop locks */ if (obj->import_attach) { GEM_WARN_ON(msm_obj->vaddr); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 8ad25927c604..bfeb0f584ae5 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -73,6 +73,9 @@ struct msm_gem_vm { /** @mmu: The mmu object which manages the pgtables */ struct msm_mmu *mmu; + /** @mmu_lock: Protects access to the mmu */ + struct mutex mmu_lock; + /** * @pid: For address spaces associated with a specific process, this * will be non-NULL: @@ -205,6 +208,7 @@ int msm_gem_get_and_pin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm, uint64_t *iova); void msm_gem_unpin_iova(struct drm_gem_object *obj, struct drm_gpuvm *vm); void msm_gem_pin_obj_locked(struct drm_gem_object *obj); +struct page **msm_gem_get_pages_locked(struct drm_gem_object *obj, unsigned madv); struct page **msm_gem_pin_pages_locked(struct drm_gem_object *obj); void msm_gem_unpin_pages_locked(struct drm_gem_object *obj); int msm_gem_dumb_create(struct drm_file *file, struct drm_device *dev, diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c index 053e6c65780f..9809918d8eb4 100644 --- a/drivers/gpu/drm/msm/msm_gem_submit.c +++ b/drivers/gpu/drm/msm/msm_gem_submit.c @@ -193,6 +193,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit, static int submit_lookup_cmds(struct msm_gem_submit *submit, struct drm_msm_gem_submit *args, struct drm_file *file) { + struct msm_context *ctx = file->driver_priv; unsigned i; size_t sz; int ret = 0; @@ -224,6 +225,20 @@ static int submit_lookup_cmds(struct msm_gem_submit *submit, goto out; } + if (msm_context_is_vmbind(ctx)) { + if (submit_cmd.nr_relocs) { + ret = SUBMIT_ERROR(EINVAL, submit, "nr_relocs must be zero"); + goto out; + } + + if (submit_cmd.submit_idx || submit_cmd.submit_offset) { + ret = SUBMIT_ERROR(EINVAL, submit, "submit_idx/offset must be zero"); + goto out; + } + + submit->cmd[i].iova = submit_cmd.iova; + } + submit->cmd[i].type = submit_cmd.type; submit->cmd[i].size = submit_cmd.size / 4; submit->cmd[i].offset = submit_cmd.submit_offset / 4; @@ -527,6 +542,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, struct msm_syncobj_post_dep *post_deps = NULL; struct drm_syncobj **syncobjs_to_reset = NULL; struct sync_file *sync_file = NULL; + unsigned cmds_to_parse; int out_fence_fd = -1; unsigned i; int ret; @@ -650,7 +666,9 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, if (ret) goto out; - for (i = 0; i < args->nr_cmds; i++) { + cmds_to_parse = msm_context_is_vmbind(ctx) ? 
0 : args->nr_cmds; + + for (i = 0; i < cmds_to_parse; i++) { struct drm_gem_object *obj; uint64_t iova; @@ -681,7 +699,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, goto out; } - submit->nr_cmds = i; + submit->nr_cmds = args->nr_cmds; idr_preload(GFP_KERNEL);

diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index a105aed82cae..fe41b7a042c3 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -4,9 +4,16 @@ * Author: Rob Clark */ +#include "drm/drm_file.h" +#include "drm/msm_drm.h" +#include "linux/file.h" +#include "linux/sync_file.h" + #include "msm_drv.h" #include "msm_gem.h" +#include "msm_gpu.h" #include "msm_mmu.h" +#include "msm_syncobj.h" #define vm_dbg(fmt, ...) pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) @@ -36,6 +43,97 @@ struct msm_vm_unmap_op { uint64_t range; }; +/** + * struct msm_vm_op - A MAP or UNMAP operation + */ +struct msm_vm_op { + /** @op: The operation type */ + enum { + MSM_VM_OP_MAP = 1, + MSM_VM_OP_UNMAP, + } op; + union { + /** @map: Parameters used if op == MSM_VM_OP_MAP */ + struct msm_vm_map_op map; + /** @unmap: Parameters used if op == MSM_VM_OP_UNMAP */ + struct msm_vm_unmap_op unmap; + }; + /** @node: list head in msm_vm_bind_job::vm_ops */ + struct list_head node; + + /** + * @obj: backing object for pages to be mapped/unmapped + * + * Async unmap ops, in particular, must hold a reference to the + * original GEM object backing the mapping that will be unmapped. + * But the same can be required in the map path, for example if + * there is not a corresponding unmap op, such as process exit. + * + * This ensures that the pages backing the mapping are not freed + * before the mapping is torn down. + */ + struct drm_gem_object *obj; +}; + +/** + * struct msm_vm_bind_job - Tracking for a VM_BIND ioctl + * + * A table of userspace requested VM updates (MSM_VM_BIND_OP_UNMAP/MAP/MAP_NULL) + * gets applied to the vm, generating a list of VM ops (MSM_VM_OP_MAP/UNMAP) + * which are applied to the pgtables asynchronously. For example a userspace + * requested MSM_VM_BIND_OP_MAP could end up generating both an MSM_VM_OP_UNMAP + * to unmap an existing mapping, and a MSM_VM_OP_MAP to apply the new mapping. + */ +struct msm_vm_bind_job { + /** @base: base class for drm_sched jobs */ + struct drm_sched_job base; + /** @vm: The VM being operated on */ + struct drm_gpuvm *vm; + /** @fence: The fence that is signaled when job completes */ + struct dma_fence *fence; + /** @queue: The queue that the job runs on */ + struct msm_gpu_submitqueue *queue; + /** @prealloc: Tracking for pre-allocated MMU pgtable pages */ + struct msm_mmu_prealloc prealloc; + /** @vm_ops: a list of struct msm_vm_op */ + struct list_head vm_ops; + /** @bos_pinned: are the GEM objects being bound pinned? */ + bool bos_pinned; + /** @nr_ops: the number of userspace requested ops */ + unsigned int nr_ops; + /** + * @ops: the userspace requested ops + * + * The userspace requested ops are copied/parsed and validated + * before we start applying the updates to try to do as much up- + * front error checking as possible, to avoid the VM being in an + * undefined state due to partially executed VM_BIND. + * + * This table also serves to hold a reference to the backing GEM + * objects.
+ */ + struct msm_vm_bind_op { + uint32_t op; + uint32_t flags; + union { + struct drm_gem_object *obj; + uint32_t handle; + }; + uint64_t obj_offset; + uint64_t iova; + uint64_t range; + } ops[]; +}; + +#define job_foreach_bo(obj, _job) \ + for (unsigned i = 0; i < (_job)->nr_ops; i++) \ + if ((obj = (_job)->ops[i].obj)) + +static inline struct msm_vm_bind_job *to_msm_vm_bind_job(struct drm_sched_job *job) +{ + return container_of(job, struct msm_vm_bind_job, base); +} + static void msm_gem_vm_free(struct drm_gpuvm *gpuvm) { @@ -52,6 +150,9 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) static void vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); @@ -60,6 +161,9 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { + if (!vm->managed) + lockdep_assert_held(&vm->mmu_lock); + vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, @@ -69,17 +173,29 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) /* Actually unmap memory for the vma */ void msm_gem_vma_unmap(struct drm_gpuva *vma) { + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma = to_msm_vma(vma); /* Don't do anything if the memory isn't mapped */ if (!msm_vma->mapped) return; - vm_unmap_op(to_msm_vm(vma->vm), &(struct msm_vm_unmap_op){ + /* + * The mmu_lock is only needed when preallocation is used. But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + + vm_unmap_op(vm, &(struct msm_vm_unmap_op){ .iova = vma->va.addr, .range = vma->va.range, }); + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + msm_vma->mapped = false; } @@ -87,6 +203,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma) int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) { + struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma = to_msm_vma(vma); int ret; @@ -98,6 +215,14 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) msm_vma->mapped = true; + /* + * The mmu_lock is only needed when preallocation is used. But + * in that case we don't need to worry about recursion into + * shrinker + */ + if (!vm->managed) + mutex_lock(&vm->mmu_lock); + /* * NOTE: iommu/io-pgtable can allocate pages, so we cannot hold * a lock across map/unmap which is also used in the job_run() @@ -107,16 +232,19 @@ msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt) * Revisit this if we can come up with a scheme to pre-alloc pages * for the pgtable in map/unmap ops. 
*/ - ret = vm_map_op(to_msm_vm(vma->vm), &(struct msm_vm_map_op){ + ret = vm_map_op(vm, &(struct msm_vm_map_op){ .iova = vma->va.addr, .range = vma->va.range, .offset = vma->gem.offset, .sgt = sgt, .prot = prot, }); - if (ret) { + + if (!vm->managed) + mutex_unlock(&vm->mmu_lock); + + if (ret) msm_vma->mapped = false; - } return ret; } @@ -131,6 +259,9 @@ void msm_gem_vma_close(struct drm_gpuva *vma) drm_gpuvm_resv_assert_held(&vm->base); + if (vma->gem.obj) + msm_gem_assert_locked(vma->gem.obj); + if (vma->va.addr && vm->managed) drm_mm_remove_node(&msm_vma->node); @@ -158,6 +289,7 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, if (vm->managed) { BUG_ON(offset != 0); + BUG_ON(!obj); /* NULL mappings not valid for kernel managed VM */ ret = drm_mm_insert_node_in_range(&vm->mm, &vma->node, obj->size, PAGE_SIZE, 0, range_start, range_end, 0); @@ -169,7 +301,8 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, range_end = range_start + obj->size; } - GEM_WARN_ON((range_end - range_start) > obj->size); + if (obj) + GEM_WARN_ON((range_end - range_start) > obj->size); drm_gpuva_init(&vma->base, range_start, range_end - range_start, obj, offset); vma->mapped = false; @@ -178,6 +311,9 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, if (ret) goto err_free_range; + if (!obj) + return &vma->base; + vm_bo = drm_gpuvm_bo_obtain(&vm->base, obj); if (IS_ERR(vm_bo)) { ret = PTR_ERR(vm_bo); @@ -200,11 +336,297 @@ msm_gem_vma_new(struct drm_gpuvm *gpuvm, struct drm_gem_object *obj, return ERR_PTR(ret); } +static int +msm_gem_vm_bo_validate(struct drm_gpuvm_bo *vm_bo, struct drm_exec *exec) +{ + struct drm_gem_object *obj = vm_bo->obj; + struct drm_gpuva *vma; + int ret; + + vm_dbg("validate: %p", obj); + + msm_gem_assert_locked(obj); + + drm_gpuvm_bo_for_each_va (vma, vm_bo) { + ret = msm_gem_pin_vma_locked(obj, vma); + if (ret) + return ret; + } + + return 0; +} + +struct op_arg { + unsigned flags; + struct msm_vm_bind_job *job; +}; + +static void +vm_op_enqueue(struct op_arg *arg, struct msm_vm_op _op) +{ + struct msm_vm_op *op = kmalloc(sizeof(*op), GFP_KERNEL); + *op = _op; + list_add_tail(&op->node, &arg->job->vm_ops); + + if (op->obj) + drm_gem_object_get(op->obj); +} + +static struct drm_gpuva * +vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op) +{ + return msm_gem_vma_new(arg->job->vm, op->gem.obj, op->gem.offset, + op->va.addr, op->va.addr + op->va.range); +} + +static int +msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gem_object *obj = op->map.gem.obj; + struct drm_gpuva *vma; + struct sg_table *sgt; + unsigned prot; + + vma = vma_from_op(arg, &op->map); + if (WARN_ON(IS_ERR(vma))) + return PTR_ERR(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, vma->va.range); + + vma->flags = ((struct op_arg *)arg)->flags; + + if (obj) { + sgt = to_msm_bo(obj)->sgt; + prot = msm_gem_prot(obj); + } else { + sgt = NULL; + prot = IOMMU_READ | IOMMU_WRITE; + } + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op = MSM_VM_OP_MAP, + .map = { + .sgt = sgt, + .iova = vma->va.addr, + .range = vma->va.range, + .offset = vma->gem.offset, + .prot = prot, + }, + .obj = vma->gem.obj, + }); + + to_msm_vma(vma)->mapped = true; + + return 0; +} + +static int +msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) +{ + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; + struct drm_gpuvm *vm = job->vm; + struct drm_gpuva *orig_vma = op->remap.unmap->va; + struct 
drm_gpuva *prev_vma = NULL, *next_vma = NULL; + struct drm_gpuvm_bo *vm_bo = orig_vma->vm_bo; + bool mapped = to_msm_vma(orig_vma)->mapped; + unsigned flags; + + vm_dbg("orig_vma: %p:%p:%p: %016llx %016llx", vm, orig_vma, + orig_vma->gem.obj, orig_vma->va.addr, orig_vma->va.range); + + if (mapped) { + uint64_t unmap_start, unmap_range; + + drm_gpuva_op_remap_to_unmap_range(&op->remap, &unmap_start, &unmap_range); + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op = MSM_VM_OP_UNMAP, + .unmap = { + .iova = unmap_start, + .range = unmap_range, + }, + .obj = orig_vma->gem.obj, + }); + + /* + * Part of this GEM obj is still mapped, but we're going to kill the + * existing VMA and replace it with one or two new ones (ie. two if + * the unmapped range is in the middle of the existing (unmap) VMA). + * So just set the state to unmapped: + */ + to_msm_vma(orig_vma)->mapped = false; + } + + /* + * Hold a ref to the vm_bo between the msm_gem_vma_close() and the + * creation of the new prev/next vma's, in case the vm_bo is tracked + * in the VM's evict list: + */ + if (vm_bo) + drm_gpuvm_bo_get(vm_bo); + + /* + * The prev_vma and/or next_vma are replacing the unmapped vma, and + * therefore should preserve its flags: + */ + flags = orig_vma->flags; + + msm_gem_vma_close(orig_vma); + + if (op->remap.prev) { + prev_vma = vma_from_op(arg, op->remap.prev); + if (WARN_ON(IS_ERR(prev_vma))) + return PTR_ERR(prev_vma); + + vm_dbg("prev_vma: %p:%p: %016llx %016llx", vm, prev_vma, prev_vma->va.addr, prev_vma->va.range); + to_msm_vma(prev_vma)->mapped = mapped; + prev_vma->flags = flags; + } + + if (op->remap.next) { + next_vma = vma_from_op(arg, op->remap.next); + if (WARN_ON(IS_ERR(next_vma))) + return PTR_ERR(next_vma); + + vm_dbg("next_vma: %p:%p: %016llx %016llx", vm, next_vma, next_vma->va.addr, next_vma->va.range); + to_msm_vma(next_vma)->mapped = mapped; + next_vma->flags = flags; + } + + if (!mapped) + drm_gpuvm_bo_evict(vm_bo, true); + + /* Drop the previous ref: */ + drm_gpuvm_bo_put(vm_bo); + + return 0; +} + +static int +msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) +{ + struct drm_gpuva *vma = op->unmap.va; + struct msm_gem_vma *msm_vma = to_msm_vma(vma); + + vm_dbg("%p:%p:%p: %016llx %016llx", vma->vm, vma, vma->gem.obj, + vma->va.addr, vma->va.range); + + if (!msm_vma->mapped) + goto out_close; + + vm_op_enqueue(arg, (struct msm_vm_op){ + .op = MSM_VM_OP_UNMAP, + .unmap = { + .iova = vma->va.addr, + .range = vma->va.range, + }, + .obj = vma->gem.obj, + }); + + msm_vma->mapped = false; + +out_close: + msm_gem_vma_close(vma); + + return 0; +} + static const struct drm_gpuvm_ops msm_gpuvm_ops = { .vm_free = msm_gem_vm_free, + .vm_bo_validate = msm_gem_vm_bo_validate, + .sm_step_map = msm_gem_vm_sm_step_map, + .sm_step_remap = msm_gem_vm_sm_step_remap, + .sm_step_unmap = msm_gem_vm_sm_step_unmap, }; +static struct dma_fence * +msm_vma_job_run(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job); + struct msm_gem_vm *vm = to_msm_vm(job->vm); + struct drm_gem_object *obj; + int ret = vm->unusable ? -EINVAL : 0; + + vm_dbg(""); + + mutex_lock(&vm->mmu_lock); + vm->mmu->prealloc = &job->prealloc; + + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op = + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + + switch (op->op) { + case MSM_VM_OP_MAP: + /* + * On error, stop trying to map new things..
but we + * still want to process the unmaps (or in particular, + * the drm_gem_object_put()s) + */ + if (!ret) + ret = vm_map_op(vm, &op->map); + break; + case MSM_VM_OP_UNMAP: + vm_unmap_op(vm, &op->unmap); + break; + } + drm_gem_object_put(op->obj); + list_del(&op->node); + kfree(op); + } + + vm->mmu->prealloc = NULL; + mutex_unlock(&vm->mmu_lock); + + /* + * We failed to perform at least _some_ of the pgtable updates, so + * now the VM is in an undefined state. Game over! + */ + if (ret) + vm->unusable = true; + + job_foreach_bo (obj, job) { + msm_gem_lock(obj); + msm_gem_unpin_locked(obj); + msm_gem_unlock(obj); + } + + /* VM_BIND ops are synchronous, so no fence to wait on: */ + return NULL; +} + +static void +msm_vma_job_free(struct drm_sched_job *_job) +{ + struct msm_vm_bind_job *job = to_msm_vm_bind_job(_job); + struct msm_mmu *mmu = to_msm_vm(job->vm)->mmu; + struct drm_gem_object *obj; + + mmu->funcs->prealloc_cleanup(mmu, &job->prealloc); + + drm_sched_job_cleanup(_job); + + job_foreach_bo (obj, job) + drm_gem_object_put(obj); + + msm_submitqueue_put(job->queue); + dma_fence_put(job->fence); + + /* In error paths, we could have unexecuted ops: */ + while (!list_empty(&job->vm_ops)) { + struct msm_vm_op *op = + list_first_entry(&job->vm_ops, struct msm_vm_op, node); + list_del(&op->node); + kfree(op); + } + + kfree(job); +} + static const struct drm_sched_backend_ops msm_vm_bind_ops = { + .run_job = msm_vma_job_run, + .free_job = msm_vma_job_free }; /** @@ -254,6 +676,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, .ops = &msm_vm_bind_ops, .num_rqs = 1, .credit_limit = 1, + .enqueue_credit_limit = 1024, .timeout = MAX_SCHEDULE_TIMEOUT, .name = "msm-vm-bind", .dev = drm->dev, @@ -269,6 +692,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_gem_object_put(dummy_gem); vm->mmu = mmu; + mutex_init(&vm->mmu_lock); vm->managed = managed; drm_mm_init(&vm->mm, va_start, va_size); @@ -281,7 +705,6 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, err_free_vm: kfree(vm); return ERR_PTR(ret); - } /** @@ -297,6 +720,7 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) { struct msm_gem_vm *vm = to_msm_vm(gpuvm); struct drm_gpuva *vma, *tmp; + struct drm_exec exec; /* * For kernel managed VMs, the VMAs are torn down when the handle is @@ -313,22 +737,635 @@ msm_gem_vm_close(struct drm_gpuvm *gpuvm) drm_sched_fini(&vm->sched); /* Tear down any remaining mappings: */ - dma_resv_lock(drm_gpuvm_resv(gpuvm), NULL); - drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { - struct drm_gem_object *obj = vma->gem.obj; + drm_exec_init(&exec, 0, 2); + drm_exec_until_all_locked (&exec) { + drm_exec_lock_obj(&exec, drm_gpuvm_resv_obj(gpuvm)); + drm_exec_retry_on_contention(&exec); + + drm_gpuvm_for_each_va_safe (vma, tmp, gpuvm) { + struct drm_gem_object *obj = vma->gem.obj; + + /* + * MSM_BO_NO_SHARE objects share the same resv as the + * VM, in which case the obj is already locked: + */ + if (obj && (obj->resv == drm_gpuvm_resv(gpuvm))) + obj = NULL; + + if (obj) { + drm_exec_lock_obj(&exec, obj); + drm_exec_retry_on_contention(&exec); + } + + msm_gem_vma_unmap(vma); + msm_gem_vma_close(vma); + + if (obj) { + drm_exec_unlock_obj(&exec, obj); + } + } + } + drm_exec_fini(&exec); +} + + +static struct msm_vm_bind_job * +vm_bind_job_create(struct drm_device *dev, struct msm_gpu *gpu, + struct msm_gpu_submitqueue *queue, uint32_t nr_ops) +{ + struct msm_vm_bind_job *job; + uint64_t sz; + int ret; + + sz = 
struct_size(job, ops, nr_ops); + + if (sz >= SIZE_MAX) + return ERR_PTR(-ENOMEM); + + job = kzalloc(sz, GFP_KERNEL | __GFP_NOWARN); + if (!job) + return ERR_PTR(-ENOMEM); + + ret = drm_sched_job_init(&job->base, queue->entity, 1, queue); + if (ret) { + kfree(job); + return ERR_PTR(ret); + } + + job->vm = msm_context_vm(dev, queue->ctx); + job->queue = queue; + INIT_LIST_HEAD(&job->vm_ops); + + return job; +} + +static bool invalid_alignment(uint64_t addr) +{ + /* + * Technically this is about GPU alignment, not CPU alignment. But + * I've not seen any qcom SoC where the SMMU does not support the + * CPU's smallest page size. + */ + return !PAGE_ALIGNED(addr); +} + +static int +lookup_op(struct msm_vm_bind_job *job, const struct drm_msm_vm_bind_op *op) +{ + struct drm_device *dev = job->vm->drm; + int i = job->nr_ops++; + int ret = 0; + + job->ops[i].op = op->op; + job->ops[i].handle = op->handle; + job->ops[i].obj_offset = op->obj_offset; + job->ops[i].iova = op->iova; + job->ops[i].range = op->range; + job->ops[i].flags = op->flags; + + if (op->flags & ~MSM_VM_BIND_OP_FLAGS) + ret = UERR(EINVAL, dev, "invalid flags: %x\n", op->flags); + + if (invalid_alignment(op->iova)) + ret = UERR(EINVAL, dev, "invalid address: %016llx\n", op->iova); + + if (invalid_alignment(op->obj_offset)) + ret = UERR(EINVAL, dev, "invalid bo_offset: %016llx\n", op->obj_offset); + + if (invalid_alignment(op->range)) + ret = UERR(EINVAL, dev, "invalid range: %016llx\n", op->range); + + /* + * MAP must specify a valid handle. But the handle MBZ for + * UNMAP or MAP_NULL. + */ + if (op->op == MSM_VM_BIND_OP_MAP) { + if (!op->handle) + ret = UERR(EINVAL, dev, "invalid handle\n"); + } else if (op->handle) { + ret = UERR(EINVAL, dev, "handle must be zero\n"); + } + + switch (op->op) { + case MSM_VM_BIND_OP_MAP: + case MSM_VM_BIND_OP_MAP_NULL: + case MSM_VM_BIND_OP_UNMAP: + break; + default: + ret = UERR(EINVAL, dev, "invalid op: %u\n", op->op); + break; + } + + return ret; +} + +/* + * ioctl parsing, parameter validation, and GEM handle lookup + */ +static int +vm_bind_job_lookup_ops(struct msm_vm_bind_job *job, struct drm_msm_vm_bind *args, + struct drm_file *file, int *nr_bos) +{ + struct drm_device *dev = job->vm->drm; + int ret = 0; + int cnt = 0; + + if (args->nr_ops == 1) { + /* Single op case, the op is inlined: */ + ret = lookup_op(job, &args->op); + } else { + for (unsigned i = 0; i < args->nr_ops; i++) { + struct drm_msm_vm_bind_op op; + void __user *userptr = + u64_to_user_ptr(args->ops + (i * sizeof(op))); + + /* make sure we don't have garbage flags, in case we hit + * error path before flags is initialized: + */ + job->ops[i].flags = 0; + + if (copy_from_user(&op, userptr, sizeof(op))) { + ret = -EFAULT; + break; + } + + ret = lookup_op(job, &op); + if (ret) + break; + } + } + + if (ret) { + job->nr_ops = 0; + goto out; + } + + spin_lock(&file->table_lock); + + for (unsigned i = 0; i < args->nr_ops; i++) { + struct drm_gem_object *obj; + + if (!job->ops[i].handle) { + job->ops[i].obj = NULL; + continue; + } + + /* + * normally use drm_gem_object_lookup(), but for bulk lookup + * all under single table_lock just hit object_idr directly: + */ + obj = idr_find(&file->object_idr, job->ops[i].handle); + if (!obj) { + ret = UERR(EINVAL, dev, "invalid handle %u at index %u\n", job->ops[i].handle, i); + goto out_unlock; + } + + drm_gem_object_get(obj); + + job->ops[i].obj = obj; + cnt++; + } + + *nr_bos = cnt; + +out_unlock: + spin_unlock(&file->table_lock); + +out: + return ret; +} + +static void
+prealloc_count(struct msm_vm_bind_job *job, + struct msm_vm_bind_op *first, + struct msm_vm_bind_op *last) +{ + struct msm_mmu *mmu = to_msm_vm(job->vm)->mmu; + + if (!first) + return; + + uint64_t start_iova = first->iova; + uint64_t end_iova = last->iova + last->range; + + mmu->funcs->prealloc_count(mmu, &job->prealloc, start_iova, end_iova - start_iova); +} + +static bool +ops_are_same_pte(struct msm_vm_bind_op *first, struct msm_vm_bind_op *next) +{ + /* + * A last-level page table covers 2MB of VA.. so, from the PoV of + * figuring out how many pgtable pages to pre-allocate, we should + * merge two ops if they land in the same 2MB range (for example, + * two 4KB maps at iova 0x1000000 and 0x1001000 share the same + * last-level table): + */ + uint64_t pte_mask = ~(SZ_2M - 1); + return ((first->iova + first->range) & pte_mask) == (next->iova & pte_mask); +} + +/* + * Determine the amount of memory to prealloc for pgtables. For sparse images, + * in particular, userspace plays some tricks with the order of page mappings + * to get the desired swizzle pattern, resulting in a large # of tiny MAP ops. + * So detect when multiple MAP operations are physically contiguous, and count + * them as a single mapping. Otherwise the prealloc_count() will not realize + * they can share pagetable pages and vastly overcount. + */ +static void +vm_bind_prealloc_count(struct msm_vm_bind_job *job) +{ + struct msm_vm_bind_op *first = NULL, *last = NULL; + + for (int i = 0; i < job->nr_ops; i++) { + struct msm_vm_bind_op *op = &job->ops[i]; + + /* We only care about MAP/MAP_NULL: */ + if (op->op == MSM_VM_BIND_OP_UNMAP) + continue; + + /* + * If op is contiguous with last in the current range, then + * it becomes the new last in the range and we continue + * looping: + */ + if (last && ops_are_same_pte(last, op)) { + last = op; + continue; + } + + /* + * If op is not contiguous with the current range, flush + * the current range and start anew: + */ + prealloc_count(job, first, last); + first = last = op; + } + + /* Flush the remaining range: */ + prealloc_count(job, first, last); + + job->base.enqueue_credits = job->prealloc.count; +} + +/* + * Lock VM and GEM objects + */ +static int +vm_bind_job_lock_objects(struct msm_vm_bind_job *job, struct drm_exec *exec) +{ + struct drm_gem_object *obj; + int ret; + + /* Lock VM and objects: */ + drm_exec_until_all_locked(exec) { + ret = drm_exec_lock_obj(exec, drm_gpuvm_resv_obj(job->vm)); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + + job_foreach_bo (obj, job) { + ret = drm_exec_prepare_obj(exec, obj, 1); + drm_exec_retry_on_contention(exec); + if (ret) + return ret; + } + } + + return 0; +} + +/* + * Pin GEM objects, ensuring that we have backing pages. Pinning will move + * the object to the pinned LRU so that the shrinker knows to first consider + * other objects for evicting. + */ +static int +vm_bind_job_pin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + /* + * First loop, before holding the LRU lock, avoids holding the + * LRU lock while calling msm_gem_pin_vma_locked (which could + * trigger get_pages()) + */ + job_foreach_bo (obj, job) { + struct page **pages; + + pages = msm_gem_get_pages_locked(obj, MSM_MADV_WILLNEED); + if (IS_ERR(pages)) + return PTR_ERR(pages); + } + + struct msm_drm_private *priv = job->vm->drm->dev_private; + + /* + * A second loop while holding the LRU lock (a) avoids acquiring/dropping + * the LRU lock for each individual bo, while (b) avoiding holding the + * LRU lock while calling msm_gem_pin_vma_locked() (which could trigger + * get_pages() which could trigger reclaim..
and if we held the LRU lock + * could trigger deadlock with the shrinker). + */ + mutex_lock(&priv->lru.lock); + job_foreach_bo (obj, job) + msm_gem_pin_obj_locked(obj); + mutex_unlock(&priv->lru.lock); + + job->bos_pinned = true; + + return 0; +} + +/* + * Unpin GEM objects. Normally this is done after the bind job is run. + */ +static void +vm_bind_job_unpin_objects(struct msm_vm_bind_job *job) +{ + struct drm_gem_object *obj; + + if (!job->bos_pinned) + return; + + job_foreach_bo (obj, job) + msm_gem_unpin_locked(obj); - if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { - drm_gem_object_get(obj); - msm_gem_lock(obj); + job->bos_pinned = false; +} + +/* + * Pre-allocate pgtable memory, and translate the VM bind requests into a + * sequence of pgtable updates to be applied asynchronously. + */ +static int +vm_bind_job_prepare(struct msm_vm_bind_job *job) +{ + struct msm_gem_vm *vm = to_msm_vm(job->vm); + struct msm_mmu *mmu = vm->mmu; + int ret; + + ret = mmu->funcs->prealloc_allocate(mmu, &job->prealloc); + if (ret) + return ret; + + for (unsigned i = 0; i < job->nr_ops; i++) { + const struct msm_vm_bind_op *op = &job->ops[i]; + struct op_arg arg = { + .job = job, + }; + + switch (op->op) { + case MSM_VM_BIND_OP_UNMAP: + ret = drm_gpuvm_sm_unmap(job->vm, &arg, op->iova, + op->range); + break; + case MSM_VM_BIND_OP_MAP: + if (op->flags & MSM_VM_BIND_OP_DUMP) + arg.flags |= MSM_VMA_DUMP; + fallthrough; + case MSM_VM_BIND_OP_MAP_NULL: + ret = drm_gpuvm_sm_map(job->vm, &arg, op->iova, + op->range, op->obj, op->obj_offset); + break; + default: + /* + * lookup_op() should have already thrown an error for + * invalid ops + */ + BUG_ON("unreachable"); } - msm_gem_vma_unmap(vma); - msm_gem_vma_close(vma); + if (ret) { + /* + * If we've already started modifying the vm, we can't + * adequately describe to userspace the intermediate + * state the vm is in. So throw up our hands! + */ + if (i > 0) + vm->unusable = true; + return ret; + } + } + + return 0; +} + +/* + * Attach fences to the GEM objects being bound. This will signify to + * the shrinker that they are busy even after dropping the locks (ie. + * drm_exec_fini()) + */ +static void +vm_bind_job_attach_fences(struct msm_vm_bind_job *job) +{ + for (unsigned i = 0; i < job->nr_ops; i++) { + struct drm_gem_object *obj = job->ops[i].obj; + + if (!obj) + continue; + + dma_resv_add_fence(obj->resv, job->fence, + DMA_RESV_USAGE_KERNEL); + } +} + +int +msm_ioctl_vm_bind(struct drm_device *dev, void *data, struct drm_file *file) +{ + struct msm_drm_private *priv = dev->dev_private; + struct drm_msm_vm_bind *args = data; + struct msm_context *ctx = file->driver_priv; + struct msm_vm_bind_job *job = NULL; + struct msm_gpu *gpu = priv->gpu; + struct msm_gpu_submitqueue *queue; + struct msm_syncobj_post_dep *post_deps = NULL; + struct drm_syncobj **syncobjs_to_reset = NULL; + struct sync_file *sync_file = NULL; + struct dma_fence *fence; + int out_fence_fd = -1; + int ret, nr_bos = 0; + unsigned i; + + if (!gpu) + return -ENXIO; + + /* + * Maybe we could allow just UNMAP ops? OTOH userspace should just + * immediately close the device file and all will be torn down. + */ + if (to_msm_vm(ctx->vm)->unusable) + return UERR(EPIPE, dev, "context is unusable"); + + /* + * Technically, you cannot create a VM_BIND submitqueue in the first + * place, if you haven't opted in to VM_BIND context. But it is + * cleaner / less confusing to check this case directly.
+ */ + if (!msm_context_is_vmbind(ctx)) + return UERR(EINVAL, dev, "context does not support vmbind"); + + if (args->flags & ~MSM_VM_BIND_FLAGS) + return UERR(EINVAL, dev, "invalid flags"); - if (obj && obj->resv != drm_gpuvm_resv(gpuvm)) { - msm_gem_unlock(obj); - drm_gem_object_put(obj); + queue = msm_submitqueue_get(ctx, args->queue_id); + if (!queue) + return -ENOENT; + + if (!(queue->flags & MSM_SUBMITQUEUE_VM_BIND)) { + ret = UERR(EINVAL, dev, "Invalid queue type"); + goto out_post_unlock; + } + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + out_fence_fd = get_unused_fd_flags(O_CLOEXEC); + if (out_fence_fd < 0) { + ret = out_fence_fd; + goto out_post_unlock; } } - dma_resv_unlock(drm_gpuvm_resv(gpuvm)); + + job = vm_bind_job_create(dev, gpu, queue, args->nr_ops); + if (IS_ERR(job)) { + ret = PTR_ERR(job); + goto out_post_unlock; + } + + ret = mutex_lock_interruptible(&queue->lock); + if (ret) + goto out_post_unlock; + + if (args->flags & MSM_VM_BIND_FENCE_FD_IN) { + struct dma_fence *in_fence; + + in_fence = sync_file_get_fence(args->fence_fd); + + if (!in_fence) { + ret = UERR(EINVAL, dev, "invalid in-fence"); + goto out_unlock; + } + + ret = drm_sched_job_add_dependency(&job->base, in_fence); + if (ret) + goto out_unlock; + } + + if (args->in_syncobjs > 0) { + syncobjs_to_reset = msm_syncobj_parse_deps(dev, &job->base, + file, args->in_syncobjs, + args->nr_in_syncobjs, + args->syncobj_stride); + if (IS_ERR(syncobjs_to_reset)) { + ret = PTR_ERR(syncobjs_to_reset); + goto out_unlock; + } + } + + if (args->out_syncobjs > 0) { + post_deps = msm_syncobj_parse_post_deps(dev, file, + args->out_syncobjs, + args->nr_out_syncobjs, + args->syncobj_stride); + if (IS_ERR(post_deps)) { + ret = PTR_ERR(post_deps); + goto out_unlock; + } + } + + ret = vm_bind_job_lookup_ops(job, args, file, &nr_bos); + if (ret) + goto out_unlock; + + vm_bind_prealloc_count(job); + + struct drm_exec exec; + unsigned flags = DRM_EXEC_IGNORE_DUPLICATES | DRM_EXEC_INTERRUPTIBLE_WAIT; + drm_exec_init(&exec, flags, nr_bos + 1); + + ret = vm_bind_job_lock_objects(job, &exec); + if (ret) + goto out; + + ret = vm_bind_job_pin_objects(job); + if (ret) + goto out; + + ret = vm_bind_job_prepare(job); + if (ret) + goto out; + + drm_sched_job_arm(&job->base); + + job->fence = dma_fence_get(&job->base.s_fence->finished); + + if (args->flags & MSM_VM_BIND_FENCE_FD_OUT) { + sync_file = sync_file_create(job->fence); + if (!sync_file) { + ret = -ENOMEM; + } else { + fd_install(out_fence_fd, sync_file->file); + args->fence_fd = out_fence_fd; + } + } + + if (ret) + goto out; + + vm_bind_job_attach_fences(job); + + /* + * The job can be free'd (and fence unref'd) at any point after + * drm_sched_entity_push_job(), so we need to hold our own ref + */ + fence = dma_fence_get(job->fence); + + ret = drm_sched_entity_push_job(&job->base); + + msm_syncobj_reset(syncobjs_to_reset, args->nr_in_syncobjs); + msm_syncobj_process_post_deps(post_deps, args->nr_out_syncobjs, fence); + + dma_fence_put(fence); + +out: + if (ret) + vm_bind_job_unpin_objects(job); + + drm_exec_fini(&exec); +out_unlock: + mutex_unlock(&queue->lock); +out_post_unlock: + if (ret && (out_fence_fd >= 0)) { + put_unused_fd(out_fence_fd); + if (sync_file) + fput(sync_file->file); + } + + if (!IS_ERR_OR_NULL(job)) { + if (ret) + msm_vma_job_free(&job->base); + } else { + /* + * If the submit hasn't yet taken ownership of the queue + * then we need to drop the reference ourself: + */ + msm_submitqueue_put(queue); + } + + if (!IS_ERR_OR_NULL(post_deps)) { + for (i = 0; i < 
args->nr_out_syncobjs; ++i) { + kfree(post_deps[i].chain); + drm_syncobj_put(post_deps[i].syncobj); + } + kfree(post_deps); + } + + if (!IS_ERR_OR_NULL(syncobjs_to_reset)) { + for (i = 0; i < args->nr_in_syncobjs; ++i) { + if (syncobjs_to_reset[i]) + drm_syncobj_put(syncobjs_to_reset[i]); + } + kfree(syncobjs_to_reset); + } + + return ret; } diff --git a/include/uapi/drm/msm_drm.h b/include/uapi/drm/msm_drm.h index 6d6cd1219926..5c67294edc95 100644 --- a/include/uapi/drm/msm_drm.h +++ b/include/uapi/drm/msm_drm.h @@ -272,7 +272,10 @@ struct drm_msm_gem_submit_cmd { __u32 size; /* in, cmdstream size */ __u32 pad; __u32 nr_relocs; /* in, number of submit_reloc's */ - __u64 relocs; /* in, ptr to array of submit_reloc's */ + union { + __u64 relocs; /* in, ptr to array of submit_reloc's */ + __u64 iova; /* cmdstream address (for VM_BIND contexts) */ + }; }; /* Each buffer referenced elsewhere in the cmdstream submit (ie. the @@ -339,7 +342,74 @@ struct drm_msm_gem_submit { __u32 nr_out_syncobjs; /* in, number of entries in out_syncobj. */ __u32 syncobj_stride; /* in, stride of syncobj arrays. */ __u32 pad; /* in, reserved for future use, always 0. */ +}; + +#define MSM_VM_BIND_OP_UNMAP 0 +#define MSM_VM_BIND_OP_MAP 1 +#define MSM_VM_BIND_OP_MAP_NULL 2 + +#define MSM_VM_BIND_OP_DUMP 1 +#define MSM_VM_BIND_OP_FLAGS ( \ + MSM_VM_BIND_OP_DUMP | \ + 0) +/** + * struct drm_msm_vm_bind_op - bind/unbind op to run + */ +struct drm_msm_vm_bind_op { + /** @op: one of MSM_VM_BIND_OP_x */ + __u32 op; + /** @handle: GEM object handle, MBZ for UNMAP or MAP_NULL */ + __u32 handle; + /** @obj_offset: Offset into GEM object, MBZ for UNMAP or MAP_NULL */ + __u64 obj_offset; + /** @iova: Address to operate on */ + __u64 iova; + /** @range: Number of bytes to map/unmap */ + __u64 range; + /** @flags: Bitmask of MSM_VM_BIND_OP_FLAG_x */ + __u32 flags; + /** @pad: MBZ */ + __u32 pad; +}; + +#define MSM_VM_BIND_FENCE_FD_IN 0x00000001 +#define MSM_VM_BIND_FENCE_FD_OUT 0x00000002 +#define MSM_VM_BIND_FLAGS ( \ + MSM_VM_BIND_FENCE_FD_IN | \ + MSM_VM_BIND_FENCE_FD_OUT | \ + 0) + +/** + * struct drm_msm_vm_bind - Input of &DRM_IOCTL_MSM_VM_BIND + */ +struct drm_msm_vm_bind { + /** @flags: in, bitmask of MSM_VM_BIND_x */ + __u32 flags; + /** @nr_ops: the number of bind ops in this ioctl */ + __u32 nr_ops; + /** @fence_fd: in/out fence fd (see MSM_VM_BIND_FENCE_FD_IN/OUT) */ + __s32 fence_fd; + /** @queue_id: in, submitqueue id */ + __u32 queue_id; + /** @in_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 in_syncobjs; + /** @out_syncobjs: in, ptr to array of drm_msm_gem_syncobj */ + __u64 out_syncobjs; + /** @nr_in_syncobjs: in, number of entries in in_syncobj */ + __u32 nr_in_syncobjs; + /** @nr_out_syncobjs: in, number of entries in out_syncobj */ + __u32 nr_out_syncobjs; + /** @syncobj_stride: in, stride of syncobj arrays */ + __u32 syncobj_stride; + /** @op_stride: sizeof each struct drm_msm_vm_bind_op in @ops */ + __u32 op_stride; + union { + /** @op: used if nr_ops == 1 */ + struct drm_msm_vm_bind_op op; + /** @ops: userptr to array of drm_msm_vm_bind_op if nr_ops > 1 */ + __u64 ops; + }; }; #define MSM_WAIT_FENCE_BOOST 0x00000001 @@ -435,6 +505,7 @@ struct drm_msm_submitqueue_query { #define DRM_MSM_SUBMITQUEUE_NEW 0x0A #define DRM_MSM_SUBMITQUEUE_CLOSE 0x0B #define DRM_MSM_SUBMITQUEUE_QUERY 0x0C +#define DRM_MSM_VM_BIND 0x0D #define DRM_IOCTL_MSM_GET_PARAM DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GET_PARAM, struct drm_msm_param) #define DRM_IOCTL_MSM_SET_PARAM DRM_IOW (DRM_COMMAND_BASE +
DRM_MSM_SET_PARAM, struct drm_msm_param) @@ -448,6 +519,7 @@ struct drm_msm_submitqueue_query { #define DRM_IOCTL_MSM_SUBMITQUEUE_NEW DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_NEW, struct drm_msm_submitqueue) #define DRM_IOCTL_MSM_SUBMITQUEUE_CLOSE DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_CLOSE, __u32) #define DRM_IOCTL_MSM_SUBMITQUEUE_QUERY DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_SUBMITQUEUE_QUERY, struct drm_msm_submitqueue_query) +#define DRM_IOCTL_MSM_VM_BIND DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_VM_BIND, struct drm_msm_vm_bind) #if defined(__cplusplus) }
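For reference, a minimal userspace sketch of the single-op form of the new ioctl (nr_ops == 1, so the op is inlined in the ioctl struct rather than passed via the @ops userptr). The function name and values are hypothetical; it assumes a VM_BIND-enabled context, an existing MSM_SUBMITQUEUE_VM_BIND submitqueue, page-aligned iova/range per lookup_op(), and the msm_drm.h header path of your libdrm install; error handling is elided:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <drm/msm_drm.h>

/* Map one GEM object at a fixed GPU VA via DRM_IOCTL_MSM_VM_BIND */
static int vm_bind_map_one(int drm_fd, uint32_t queue_id, uint32_t handle,
			   uint64_t iova, uint64_t range)
{
	struct drm_msm_vm_bind req;

	memset(&req, 0, sizeof(req));		/* zero all MBZ fields */
	req.flags = 0;				/* no in/out fence fds */
	req.nr_ops = 1;				/* single op case: op is inlined */
	req.queue_id = queue_id;		/* MSM_SUBMITQUEUE_VM_BIND queue */
	req.op.op = MSM_VM_BIND_OP_MAP;
	req.op.handle = handle;			/* GEM handle backing the mapping */
	req.op.obj_offset = 0;
	req.op.iova = iova;
	req.op.range = range;

	return ioctl(drm_fd, DRM_IOCTL_MSM_VM_BIND, &req);
}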
From patchwork Mon May 19 17:57:33 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891123 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Sean Paul , Konrad Dybcio , Abhinav Kumar , Dmitry Baryshkov , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 36/40] drm/msm: Add VM logging for VM_BIND updates Date: Mon, 19 May 2025 10:57:33 -0700 Message-ID: <20250519175755.13037-24-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark When userspace opts in to VM_BIND, the submit no longer holds references keeping the VMA alive. This makes it difficult to distinguish between UMD/KMD/app bugs. So add a debug option for logging the most recent VM updates and capturing these in GPU devcoredumps. The submitqueue id is also captured; a value of zero means the operation did not go via a submitqueue (ie. it comes from msm_gem_vm_close() tearing down the remaining mappings when the device file is closed).
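As an illustration (the addresses and queue ids here are made up), the resulting devcoredump section follows the print format added to adreno_show() below:

vm-log:
 - map:3: 0x0000000100000000-0x0000000100004000
 - unmap:3: 0x0000000100000000-0x0000000100004000
 - unmap:0: 0x0000000100400000-0x0000000100401000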
Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/adreno/adreno_gpu.c | 11 +++ drivers/gpu/drm/msm/msm_gem.h | 24 +++++ drivers/gpu/drm/msm/msm_gem_vma.c | 124 ++++++++++++++++++++++-- drivers/gpu/drm/msm/msm_gpu.c | 52 +++++++++- drivers/gpu/drm/msm/msm_gpu.h | 4 + 5 files changed, 202 insertions(+), 13 deletions(-) diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c index efe03f3f42ba..12b42ae2688c 100644 --- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c +++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c @@ -837,6 +837,7 @@ void adreno_gpu_state_destroy(struct msm_gpu_state *state) for (i = 0; state->bos && i < state->nr_bos; i++) kvfree(state->bos[i].data); + kfree(state->vm_logs); kfree(state->bos); kfree(state->comm); kfree(state->cmd); @@ -977,6 +978,16 @@ void adreno_show(struct msm_gpu *gpu, struct msm_gpu_state *state, info->ptes[0], info->ptes[1], info->ptes[2], info->ptes[3]); } + if (state->vm_logs) { + drm_puts(p, "vm-log:\n"); + for (i = 0; i < state->nr_vm_logs; i++) { + struct msm_gem_vm_log_entry *e = &state->vm_logs[i]; + drm_printf(p, " - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + } + drm_printf(p, "rbbm-status: 0x%08x\n", state->rbbm_status); drm_puts(p, "ringbuffer:\n"); diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index bfeb0f584ae5..4dc9b72b9193 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -24,6 +24,20 @@ #define MSM_BO_STOLEN 0x10000000 /* try to use stolen/splash memory */ #define MSM_BO_MAP_PRIV 0x20000000 /* use IOMMU_PRIV when mapping */ +/** + * struct msm_gem_vm_log_entry - An entry in the VM log + * + * For userspace managed VMs, a log of recent VM updates is tracked and + * captured in GPU devcore dumps, to aid debugging issues caused by (for + * example) incorrectly synchronized VM updates + */ +struct msm_gem_vm_log_entry { + const char *op; + uint64_t iova; + uint64_t range; + int queue_id; +}; + /** * struct msm_gem_vm - VM object * @@ -85,6 +99,15 @@ struct msm_gem_vm { /** @last_fence: Fence for last pending work scheduled on the VM */ struct dma_fence *last_fence; + /** @log: A log of recent VM updates */ + struct msm_gem_vm_log_entry *log; + + /** @log_shift: length of @log is (1 << @log_shift) */ + uint32_t log_shift; + + /** @log_idx: index of next @log entry to write */ + uint32_t log_idx; + /** @faults: the number of GPU hangs associated with this address space */ int faults; @@ -115,6 +138,7 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, u64 va_start, u64 va_size, bool managed); void msm_gem_vm_close(struct drm_gpuvm *gpuvm); +void msm_gem_vm_unusable(struct drm_gpuvm *gpuvm); struct msm_fence_context; diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index fe41b7a042c3..d349025924b4 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -17,6 +17,10 @@ #define vm_dbg(fmt, ...) 
pr_debug("%s:%d: "fmt"\n", __func__, __LINE__, ##__VA_ARGS__) +static uint vm_log_shift = 0; +MODULE_PARM_DESC(vm_log_shift, "Length of VM op log"); +module_param_named(vm_log_shift, vm_log_shift, uint, 0600); + /** * struct msm_vm_map_op - create new pgtable mapping */ @@ -31,6 +35,13 @@ struct msm_vm_map_op { struct sg_table *sgt; /** @prot: the mapping protection flags */ int prot; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; }; /** @@ -41,6 +52,13 @@ struct msm_vm_unmap_op { uint64_t iova; /** @range: size of region to unmap */ uint64_t range; + + /** + * @queue_id: The id of the submitqueue the operation is performed + * on, or zero for (in particular) UNMAP ops triggered outside of + * a submitqueue (ie. process cleanup) + */ + int queue_id; }; /** @@ -144,16 +162,87 @@ msm_gem_vm_free(struct drm_gpuvm *gpuvm) vm->mmu->funcs->destroy(vm->mmu); dma_fence_put(vm->last_fence); put_pid(vm->pid); + kfree(vm->log); kfree(vm); } +/** + * msm_gem_vm_unusable() - Mark a VM as unusable + * @vm: the VM to mark unusable + */ +void +msm_gem_vm_unusable(struct drm_gpuvm *gpuvm) +{ + struct msm_gem_vm *vm = to_msm_vm(gpuvm); + uint32_t vm_log_len = (1 << vm->log_shift); + uint32_t vm_log_mask = vm_log_len - 1; + uint32_t nr_vm_logs; + int first; + + vm->unusable = true; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first = vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. 
we haven't wrapped around + * yet) + */ + nr_vm_logs = first; + first = 0; + } else { + nr_vm_logs = vm_log_len; + } + + pr_err("vm-log:\n"); + for (int i = 0; i < nr_vm_logs; i++) { + int idx = (i + first) & vm_log_mask; + struct msm_gem_vm_log_entry *e = &vm->log[idx]; + pr_err(" - %s:%d: 0x%016llx-0x%016llx\n", + e->op, e->queue_id, e->iova, + e->iova + e->range); + } + + mutex_unlock(&vm->mmu_lock); +} + static void -vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int queue_id) { + int idx; + if (!vm->managed) lockdep_assert_held(&vm->mmu_lock); - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_dbg("%s:%p:%d: %016llx %016llx", op, vm, queue_id, iova, iova + range); + + if (!vm->log) + return; + + idx = vm->log_idx; + vm->log[idx].op = op; + vm->log[idx].iova = iova; + vm->log[idx].range = range; + vm->log[idx].queue_id = queue_id; + vm->log_idx = (vm->log_idx + 1) & ((1 << vm->log_shift) - 1); +} + +static void +vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) +{ + vm_log(vm, "unmap", op->iova, op->range, op->queue_id); vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); } @@ -161,10 +250,7 @@ vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) static int vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) { - if (!vm->managed) - lockdep_assert_held(&vm->mmu_lock); - - vm_dbg("%p: %016llx %016llx", vm, op->iova, op->iova + op->range); + vm_log(vm, "map", op->iova, op->range, op->queue_id); return vm->mmu->funcs->map(vm->mmu, op->iova, op->sgt, op->offset, op->range, op->prot); @@ -382,6 +468,7 @@ vma_from_op(struct op_arg *arg, struct drm_gpuva_op_map *op) static int msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; struct drm_gem_object *obj = op->map.gem.obj; struct drm_gpuva *vma; struct sg_table *sgt; @@ -412,6 +499,7 @@ msm_gem_vm_sm_step_map(struct drm_gpuva_op *op, void *arg) .range = vma->va.range, .offset = vma->gem.offset, .prot = prot, + .queue_id = job->queue->id, }, .obj = vma->gem.obj, }); @@ -445,6 +533,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) .unmap = { .iova = unmap_start, .range = unmap_range, + .queue_id = job->queue->id, }, .obj = orig_vma->gem.obj, }); @@ -506,6 +595,7 @@ msm_gem_vm_sm_step_remap(struct drm_gpuva_op *op, void *arg) static int msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) { + struct msm_vm_bind_job *job = ((struct op_arg *)arg)->job; struct drm_gpuva *vma = op->unmap.va; struct msm_gem_vma *msm_vma = to_msm_vma(vma); @@ -520,6 +610,7 @@ msm_gem_vm_sm_step_unmap(struct drm_gpuva_op *op, void *arg) .unmap = { .iova = vma->va.addr, .range = vma->va.range, + .queue_id = job->queue->id, }, .obj = vma->gem.obj, }); @@ -584,7 +675,7 @@ msm_vma_job_run(struct drm_sched_job *_job) * now the VM is in an undefined state. Game over! */ if (ret) - vm->unusable = true; + msm_gem_vm_unusable(job->vm); job_foreach_bo (obj, job) { msm_gem_lock(obj); @@ -697,6 +788,23 @@ msm_gem_vm_create(struct drm_device *drm, struct msm_mmu *mmu, const char *name, drm_mm_init(&vm->mm, va_start, va_size); + /* + * We don't really need vm log for kernel managed VMs, as the kernel + * is responsible for ensuring that GEM objs are mapped if they are + * used by a submit. Furthermore we piggyback on mmu_lock to serialize + * access to the log.
+ * + * Limit the max log_shift to 8 to prevent userspace from asking us + * for an unreasonable log size. + */ + if (!managed) + vm->log_shift = MIN(vm_log_shift, 8); + + if (vm->log_shift) { + vm->log = kmalloc_array(1 << vm->log_shift, sizeof(vm->log[0]), + GFP_KERNEL | __GFP_ZERO); + } + return &vm->base; err_free_dummy: @@ -1143,7 +1251,7 @@ vm_bind_job_prepare(struct msm_vm_bind_job *job) * state the vm is in. So throw up our hands! */ if (i > 0) - vm->unusable = true; + msm_gem_vm_unusable(job->vm); return ret; } } diff --git a/drivers/gpu/drm/msm/msm_gpu.c b/drivers/gpu/drm/msm/msm_gpu.c index b70355fc8570..210e756cb563 100644 --- a/drivers/gpu/drm/msm/msm_gpu.c +++ b/drivers/gpu/drm/msm/msm_gpu.c @@ -259,9 +259,6 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submi { extern bool rd_full; - if (!submit) - return; - if (msm_context_is_vmbind(submit->queue->ctx)) { struct drm_exec exec; struct drm_gpuva *vma; @@ -318,6 +315,48 @@ static void crashstate_get_bos(struct msm_gpu_state *state, struct msm_gem_submi } } +static void crashstate_get_vm_logs(struct msm_gpu_state *state, struct msm_gem_vm *vm) +{ + uint32_t vm_log_len = (1 << vm->log_shift); + uint32_t vm_log_mask = vm_log_len - 1; + int first; + + /* Bail if no log, or empty log: */ + if (!vm->log || !vm->log[0].op) + return; + + mutex_lock(&vm->mmu_lock); + + /* + * log_idx is the next entry to overwrite, meaning it is the oldest, or + * first, entry (other than the special case handled below where the + * log hasn't wrapped around yet) + */ + first = vm->log_idx; + + if (!vm->log[first].op) { + /* + * If the next log entry has not been written yet, then only + * entries 0 to idx-1 are valid (ie. we haven't wrapped around + * yet) + */ + state->nr_vm_logs = first; + first = 0; + } else { + state->nr_vm_logs = vm_log_len; + } + + state->vm_logs = kmalloc_array( + state->nr_vm_logs, sizeof(vm->log[0]), GFP_KERNEL); + for (int i = 0; i < state->nr_vm_logs; i++) { + int idx = (i + first) & vm_log_mask; + + state->vm_logs[i] = vm->log[idx]; + } + + mutex_unlock(&vm->mmu_lock); +} + static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, struct msm_gem_submit *submit, char *comm, char *cmd) { @@ -349,7 +388,10 @@ static void msm_gpu_crashstate_capture(struct msm_gpu *gpu, msm_iommu_pagetable_walk(mmu, info->iova, info->ptes); } - crashstate_get_bos(state, submit); + if (submit) { + crashstate_get_vm_logs(state, to_msm_vm(submit->vm)); + crashstate_get_bos(state, submit); + } /* Set the active crash state to be dumped on failure */ gpu->crashstate = state; @@ -449,7 +491,7 @@ static void recover_worker(struct kthread_work *work) * VM_BIND) */ if (!vm->managed) - vm->unusable = true; + msm_gem_vm_unusable(submit->vm); get_comm_cmdline(submit, &comm, &cmd); diff --git a/drivers/gpu/drm/msm/msm_gpu.h b/drivers/gpu/drm/msm/msm_gpu.h index 9cbf155ff222..31b83e9e3673 100644 --- a/drivers/gpu/drm/msm/msm_gpu.h +++ b/drivers/gpu/drm/msm/msm_gpu.h @@ -20,6 +20,7 @@ #include "msm_gem.h" struct msm_gem_submit; +struct msm_gem_vm_log_entry; struct msm_gpu_perfcntr; struct msm_gpu_state; struct msm_context; @@ -609,6 +610,9 @@ struct msm_gpu_state { struct msm_gpu_fault_info fault_info; + int nr_vm_logs; + struct msm_gem_vm_log_entry *vm_logs; + int nr_bos; struct msm_gpu_state_bo *bos; };
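To make the wrap-around readout concrete: with vm_log_shift=2 (a four entry log) and six updates logged so far, log_idx has wrapped back to 2 and log[2].op is non-NULL, so all four entries are dumped starting at index 2, ie. in the order 2, 3, 0, 1, oldest first. Before the first wrap, say after three updates, log[3].op is still NULL and only entries 0..2 are dumped.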
From patchwork Mon May 19 17:57:34 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 892053 From: Rob Clark To: dri-devel@lists.freedesktop.org Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org, Connor Abbott , Rob Clark , Rob Clark , Abhinav Kumar , Dmitry Baryshkov , Sean Paul , Marijn Suijten , David Airlie , Simona Vetter , linux-kernel@vger.kernel.org (open list) Subject: [PATCH v5 37/40] drm/msm: Add VMA unmap reason Date: Mon, 19 May 2025 10:57:34 -0700 Message-ID: <20250519175755.13037-25-robdclark@gmail.com> In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com> References: <20250519175348.11924-1-robdclark@gmail.com> <20250519175755.13037-1-robdclark@gmail.com> From: Rob Clark Make the VM log a bit more useful by providing a reason for the unmap (ie. closing VM vs evict/purge, etc) Signed-off-by: Rob Clark --- drivers/gpu/drm/msm/msm_gem.c | 18 ++++++++++-------- drivers/gpu/drm/msm/msm_gem.h | 2 +- drivers/gpu/drm/msm/msm_gem_vma.c | 15 ++++++++++++--- 3 files changed, 23 insertions(+), 12 deletions(-) diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c index 040f0539baa5..bdc99aff3130 100644 --- a/drivers/gpu/drm/msm/msm_gem.c +++ b/drivers/gpu/drm/msm/msm_gem.c @@ -47,7 +47,8 @@ static int msm_gem_open(struct drm_gem_object *obj, struct drm_file *file) return 0; } -static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close); +static void put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, + bool close, const char *reason); static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) { @@ -80,7 +81,7 @@ static void msm_gem_close(struct drm_gem_object *obj, struct drm_file *file) MAX_SCHEDULE_TIMEOUT); msm_gem_lock_vm_and_obj(&exec, obj, ctx->vm); - put_iova_spaces(obj, ctx->vm, true); + put_iova_spaces(obj, ctx->vm, true, "close"); drm_exec_fini(&exec); /* drop locks */ } @@ -407,7 +408,8 @@ static struct drm_gpuva *lookup_vma(struct drm_gem_object *obj, * mapping.
*/ static void -put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) +put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, + bool close, const char *reason) { struct drm_gpuvm_bo *vm_bo, *tmp; @@ -422,7 +424,7 @@ put_iova_spaces(struct drm_gem_object *obj, struct drm_gpuvm *vm, bool close) drm_gpuvm_bo_get(vm_bo); drm_gpuvm_bo_for_each_va_safe (vma, vmatmp, vm_bo) { - msm_gem_vma_unmap(vma); + msm_gem_vma_unmap(vma, reason); if (close) msm_gem_vma_close(vma); } @@ -603,7 +605,7 @@ static int clear_iova(struct drm_gem_object *obj, if (!vma) return 0; - msm_gem_vma_unmap(vma); + msm_gem_vma_unmap(vma, NULL); msm_gem_vma_close(vma); return 0; @@ -813,7 +815,7 @@ void msm_gem_purge(struct drm_gem_object *obj) GEM_WARN_ON(!is_purgeable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, NULL, false); + put_iova_spaces(obj, NULL, false, "purge"); msm_gem_vunmap(obj); @@ -851,7 +853,7 @@ void msm_gem_evict(struct drm_gem_object *obj) GEM_WARN_ON(is_unevictable(msm_obj)); /* Get rid of any iommu mapping(s): */ - put_iova_spaces(obj, NULL, false); + put_iova_spaces(obj, NULL, false, "evict"); drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); @@ -1063,7 +1065,7 @@ static void msm_gem_free_object(struct drm_gem_object *obj) drm_exec_retry_on_contention(&exec); } } - put_iova_spaces(obj, NULL, true); + put_iova_spaces(obj, NULL, true, "free"); drm_exec_fini(&exec); /* drop locks */ } diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h index 4dc9b72b9193..1e9ef09741eb 100644 --- a/drivers/gpu/drm/msm/msm_gem.h +++ b/drivers/gpu/drm/msm/msm_gem.h @@ -168,7 +168,7 @@ struct msm_gem_vma { struct drm_gpuva * msm_gem_vma_new(struct drm_gpuvm *vm, struct drm_gem_object *obj, u64 offset, u64 range_start, u64 range_end); -void msm_gem_vma_unmap(struct drm_gpuva *vma); +void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason); int msm_gem_vma_map(struct drm_gpuva *vma, int prot, struct sg_table *sgt); void msm_gem_vma_close(struct drm_gpuva *vma); diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c index d349025924b4..313bde6447e4 100644 --- a/drivers/gpu/drm/msm/msm_gem_vma.c +++ b/drivers/gpu/drm/msm/msm_gem_vma.c @@ -53,6 +53,9 @@ struct msm_vm_unmap_op { /** @range: size of region to unmap */ uint64_t range; + /** @reason: The reason for the unmap */ + const char *reason; + /** * @queue_id: The id of the submitqueue the operation is performed * on, or zero for (in particular) UNMAP ops triggered outside of @@ -242,7 +245,12 @@ vm_log(struct msm_gem_vm *vm, const char *op, uint64_t iova, uint64_t range, int static void vm_unmap_op(struct msm_gem_vm *vm, const struct msm_vm_unmap_op *op) { - vm_log(vm, "unmap", op->iova, op->range, op->queue_id); + const char *reason = op->reason; + + if (!reason) + reason = "unmap"; + + vm_log(vm, reason, op->iova, op->range, op->queue_id); vm->mmu->funcs->unmap(vm->mmu, op->iova, op->range); } @@ -257,7 +265,7 @@ vm_map_op(struct msm_gem_vm *vm, const struct msm_vm_map_op *op) } /* Actually unmap memory for the vma */ -void msm_gem_vma_unmap(struct drm_gpuva *vma) +void msm_gem_vma_unmap(struct drm_gpuva *vma, const char *reason) { struct msm_gem_vm *vm = to_msm_vm(vma->vm); struct msm_gem_vma *msm_vma = to_msm_vma(vma); @@ -277,6 +285,7 @@ void msm_gem_vma_unmap(struct drm_gpuva *vma) vm_unmap_op(vm, &(struct msm_vm_unmap_op){ .iova = vma->va.addr, .range = vma->va.range, + .reason = reason, }); if (!vm->managed) @@ -865,7 +874,7 @@ 
msm_gem_vm_close(struct drm_gpuvm *gpuvm) drm_exec_retry_on_contention(&exec); } - msm_gem_vma_unmap(vma); + msm_gem_vma_unmap(vma, "close"); msm_gem_vma_close(vma); if (obj) { From patchwork Mon May 19 17:57:35 2025 X-Patchwork-Submitter: Rob Clark X-Patchwork-Id: 891122
From patchwork Mon May 19 17:57:35 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891122
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 38/40] drm/msm: Add mmu prealloc tracepoint
Date: Mon, 19 May 2025 10:57:35 -0700
Message-ID: <20250519175755.13037-26-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>
 <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

So we can monitor how many pages are getting preallocated vs how many
get used.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gpu_trace.h | 14 ++++++++++++++
 drivers/gpu/drm/msm/msm_iommu.c     |  4 ++++
 2 files changed, 18 insertions(+)

diff --git a/drivers/gpu/drm/msm/msm_gpu_trace.h b/drivers/gpu/drm/msm/msm_gpu_trace.h
index 7f863282db0d..781bbe5540bd 100644
--- a/drivers/gpu/drm/msm/msm_gpu_trace.h
+++ b/drivers/gpu/drm/msm/msm_gpu_trace.h
@@ -205,6 +205,20 @@ TRACE_EVENT(msm_gpu_preemption_irq,
 		TP_printk("preempted to %u", __entry->ring_id)
 );
 
+TRACE_EVENT(msm_mmu_prealloc_cleanup,
+		TP_PROTO(u32 count, u32 remaining),
+		TP_ARGS(count, remaining),
+		TP_STRUCT__entry(
+			__field(u32, count)
+			__field(u32, remaining)
+			),
+		TP_fast_assign(
+			__entry->count = count;
+			__entry->remaining = remaining;
+			),
+		TP_printk("count=%u, remaining=%u", __entry->count, __entry->remaining)
+);
+
 #endif
 
 #undef TRACE_INCLUDE_PATH
diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
index d04837461c3d..b5d019093380 100644
--- a/drivers/gpu/drm/msm/msm_iommu.c
+++ b/drivers/gpu/drm/msm/msm_iommu.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include "msm_drv.h"
+#include "msm_gpu_trace.h"
 #include "msm_mmu.h"
 
 struct msm_iommu {
@@ -346,6 +347,9 @@ msm_iommu_pagetable_prealloc_cleanup(struct msm_mmu *mmu, struct msm_mmu_preallo
 	struct kmem_cache *pt_cache = get_pt_cache(mmu);
 	uint32_t remaining_pt_count = p->count - p->ptr;
 
+	if (p->count > 0)
+		trace_msm_mmu_prealloc_cleanup(p->count, remaining_pt_count);
+
 	kmem_cache_free_bulk(pt_cache, remaining_pt_count, &p->pages[p->ptr]);
 	kvfree(p->pages);
 }
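Since remaining is computed as p->count - p->ptr, the pages actually consumed are count minus remaining, so the two fields in the event are enough to judge how well the preallocation heuristic fits real usage. A small sketch of that arithmetic; the helper is hypothetical, not part of the patch:

/*
 * Hypothetical consumer of the tracepoint fields: count is the number
 * of pagetable pages preallocated, remaining the number handed back
 * unused to the kmem_cache at cleanup time.
 */
static inline u32 prealloc_pages_used(u32 count, u32 remaining)
{
	return count - remaining;	/* equals p->ptr in the kernel code */
}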
From patchwork Mon May 19 17:57:36 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 892052
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 39/40] drm/msm: use trylock for debugfs
Date: Mon, 19 May 2025 10:57:36 -0700
Message-ID: <20250519175755.13037-27-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>
 <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

This resolves a potential deadlock vs msm_gem_vm_close().
Otherwise for _NO_SHARE buffers msm_gem_describe() could be trying to
acquire the shared vm resv while already holding priv->obj_lock.  But
_vm_close() might drop the last reference to a GEM obj while already
holding the vm resv, and msm_gem_free_object() needs to grab
priv->obj_lock, which is a lock-order inversion.

OTOH this is only for debugfs, and it isn't critical if we undercount
by skipping a locked obj.  So just use trylock() and move along if we
can't get the lock.

Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_gem.c | 3 ++-
 drivers/gpu/drm/msm/msm_gem.h | 6 ++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_gem.c b/drivers/gpu/drm/msm/msm_gem.c
index bdc99aff3130..f10de8915ecb 100644
--- a/drivers/gpu/drm/msm/msm_gem.c
+++ b/drivers/gpu/drm/msm/msm_gem.c
@@ -922,7 +922,8 @@ void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m,
 	uint64_t off = drm_vma_node_start(&obj->vma_node);
 	const char *madv;
 
-	msm_gem_lock(obj);
+	if (!msm_gem_trylock(obj))
+		return;
 
 	stats->all.count++;
 	stats->all.size += obj->size;
diff --git a/drivers/gpu/drm/msm/msm_gem.h b/drivers/gpu/drm/msm/msm_gem.h
index 1e9ef09741eb..733a458cea9e 100644
--- a/drivers/gpu/drm/msm/msm_gem.h
+++ b/drivers/gpu/drm/msm/msm_gem.h
@@ -280,6 +280,12 @@ msm_gem_lock(struct drm_gem_object *obj)
 	dma_resv_lock(obj->resv, NULL);
 }
 
+static inline bool __must_check
+msm_gem_trylock(struct drm_gem_object *obj)
+{
+	return dma_resv_trylock(obj->resv);
+}
+
 static inline int
 msm_gem_lock_interruptible(struct drm_gem_object *obj)
 {
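The inversion is easiest to see as two lock orders: debugfs takes priv->obj_lock and then each object's resv, while the VM-close path can hold the vm resv and then need priv->obj_lock when the final object reference is dropped. For _NO_SHARE buffers the object's resv is the shared vm resv, so the two paths take the same pair of locks in opposite order. A simplified sketch of the debugfs side after this patch; the function name is hypothetical, while msm_gem_trylock()/msm_gem_unlock() are the driver's real helpers:

/*
 * Sketch, not the driver's actual code: back off instead of blocking,
 * which breaks the A->B / B->A lock cycle at the cost of possibly
 * undercounting in the debugfs stats.
 */
static void example_describe_one(struct drm_gem_object *obj)
{
	if (!msm_gem_trylock(obj))
		return;		/* contended: skip this object */

	/* ... accumulate stats while the resv is held ... */

	msm_gem_unlock(obj);
}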
From patchwork Mon May 19 17:57:37 2025
X-Patchwork-Submitter: Rob Clark
X-Patchwork-Id: 891121
From: Rob Clark
To: dri-devel@lists.freedesktop.org
Cc: freedreno@lists.freedesktop.org, linux-arm-msm@vger.kernel.org,
 Connor Abbott, Rob Clark, Abhinav Kumar, Dmitry Baryshkov, Sean Paul,
 Marijn Suijten, David Airlie, Simona Vetter,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 40/40] drm/msm: Bump UAPI version
Date: Mon, 19 May 2025 10:57:37 -0700
Message-ID: <20250519175755.13037-28-robdclark@gmail.com>
In-Reply-To: <20250519175755.13037-1-robdclark@gmail.com>
References: <20250519175348.11924-1-robdclark@gmail.com>
 <20250519175755.13037-1-robdclark@gmail.com>

From: Rob Clark

Bump version to signal to userspace that VM_BIND is supported.
Signed-off-by: Rob Clark
---
 drivers/gpu/drm/msm/msm_drv.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/msm/msm_drv.c b/drivers/gpu/drm/msm/msm_drv.c
index bdf775897de8..710046906229 100644
--- a/drivers/gpu/drm/msm/msm_drv.c
+++ b/drivers/gpu/drm/msm/msm_drv.c
@@ -41,9 +41,10 @@
  * - 1.10.0 - Add MSM_SUBMIT_BO_NO_IMPLICIT
  * - 1.11.0 - Add wait boost (MSM_WAIT_FENCE_BOOST, MSM_PREP_BOOST)
  * - 1.12.0 - Add MSM_INFO_SET_METADATA and MSM_INFO_GET_METADATA
+ * - 1.13.0 - Add VM_BIND
  */
 #define MSM_VERSION_MAJOR	1
-#define MSM_VERSION_MINOR	12
+#define MSM_VERSION_MINOR	13
 #define MSM_VERSION_PATCHLEVEL	0
 
 bool dumpstate;
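Userspace can key its use of VM_BIND off this bump. A minimal sketch using libdrm's drmGetVersion(); the detection helper is an assumption for illustration, since the patch itself only defines the new version number:

/*
 * Hypothetical userspace check: VM_BIND is signalled by msm UAPI
 * version 1.13.0 and later.
 */
#include <stdbool.h>
#include <xf86drm.h>

static bool msm_supports_vm_bind(int fd)
{
	drmVersionPtr ver = drmGetVersion(fd);
	bool ok;

	if (!ver)
		return false;

	ok = (ver->version_major > 1) ||
	     (ver->version_major == 1 && ver->version_minor >= 13);

	drmFreeVersion(ver);
	return ok;
}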