From patchwork Fri Dec 4 15:11:38 2020
X-Patchwork-Submitter: Maxime Ripard
X-Patchwork-Id: 337878
From: Maxime Ripard
To: Daniel Vetter, David Airlie, Maarten Lankhorst, Thomas Zimmermann,
    Maxime Ripard, Mark Rutland, Rob Herring, Frank Rowand, Eric Anholt
Cc: devicetree@vger.kernel.org, linux-rpi-kernel@lists.infradead.org,
    bcm-kernel-feedback-list@broadcom.com, Tim Gover, Dave Stevenson,
    dri-devel@lists.freedesktop.org, linux-arm-kernel@lists.infradead.org,
    Phil Elwell
Subject: [PATCH v2 7/7] drm/vc4: kms: Convert to atomic helpers
Date: Fri, 4 Dec 2020 16:11:38 +0100
Message-Id: <20201204151138.1739736-8-maxime@cerno.tech>
X-Mailer: git-send-email 2.28.0
In-Reply-To: <20201204151138.1739736-1-maxime@cerno.tech>
References: <20201204151138.1739736-1-maxime@cerno.tech>

Now that the semaphore is gone, our atomic_commit implementation is
basically drm_atomic_helper_commit with a somewhat custom commit_tail,
the main difference being that we're using wait_for_flip_done instead
of wait_for_vblanks used in the drm_atomic_helper_commit_tail helper.

Let's switch to using drm_atomic_helper_commit.

Acked-by: Thomas Zimmermann
Signed-off-by: Maxime Ripard
---
 drivers/gpu/drm/vc4/vc4_kms.c | 110 +---------------------------------
 1 file changed, 3 insertions(+), 107 deletions(-)

diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
index ffbfdde55fff..05f451f3e642 100644
--- a/drivers/gpu/drm/vc4/vc4_kms.c
+++ b/drivers/gpu/drm/vc4/vc4_kms.c
@@ -332,8 +332,7 @@ static void vc5_hvs_pv_muxing_commit(struct vc4_dev *vc4,
 	}
 }
 
-static void
-vc4_atomic_complete_commit(struct drm_atomic_state *state)
+static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
 {
 	struct drm_device *dev = state->dev;
 	struct vc4_dev *vc4 = to_vc4_dev(dev);
@@ -357,10 +356,6 @@ vc4_atomic_complete_commit(struct drm_atomic_state *state)
 	if (vc4->hvs->hvs5)
 		clk_set_min_rate(hvs->core_clk, 500000000);
 
-	drm_atomic_helper_wait_for_fences(dev, state, false);
-
-	drm_atomic_helper_wait_for_dependencies(state);
-
 	old_hvs_state = vc4_hvs_get_old_global_state(state);
 	if (!old_hvs_state)
 		return;
@@ -412,20 +407,8 @@ vc4_atomic_complete_commit(struct drm_atomic_state *state)
 
 	drm_atomic_helper_cleanup_planes(dev, state);
 
-	drm_atomic_helper_commit_cleanup_done(state);
-
 	if (vc4->hvs->hvs5)
 		clk_set_min_rate(hvs->core_clk, 0);
-
-	drm_atomic_state_put(state);
-}
-
-static void commit_work(struct work_struct *work)
-{
-	struct drm_atomic_state *state = container_of(work,
-						      struct drm_atomic_state,
-						      commit_work);
-	vc4_atomic_complete_commit(state);
 }
 
 static int vc4_atomic_commit_setup(struct drm_atomic_state *state)
@@ -458,94 +441,6 @@ static int vc4_atomic_commit_setup(struct drm_atomic_state *state)
 	return 0;
 }
 
-/**
- * vc4_atomic_commit - commit validated state object
- * @dev: DRM device
- * @state: the driver state object
- * @nonblock: nonblocking commit
- *
- * This function commits a with drm_atomic_helper_check() pre-validated state
- * object. This can still fail when e.g. the framebuffer reservation fails. For
- * now this doesn't implement asynchronous commits.
- *
- * RETURNS
- * Zero for success or -errno.
- */
-static int vc4_atomic_commit(struct drm_device *dev,
-			     struct drm_atomic_state *state,
-			     bool nonblock)
-{
-	int ret;
-
-	if (state->async_update) {
-		ret = drm_atomic_helper_prepare_planes(dev, state);
-		if (ret)
-			return ret;
-
-		drm_atomic_helper_async_commit(dev, state);
-
-		drm_atomic_helper_cleanup_planes(dev, state);
-
-		return 0;
-	}
-
-	/* We know for sure we don't want an async update here. Set
-	 * state->legacy_cursor_update to false to prevent
-	 * drm_atomic_helper_setup_commit() from auto-completing
-	 * commit->flip_done.
-	 */
-	state->legacy_cursor_update = false;
-	ret = drm_atomic_helper_setup_commit(state, nonblock);
-	if (ret)
-		return ret;
-
-	INIT_WORK(&state->commit_work, commit_work);
-
-	ret = drm_atomic_helper_prepare_planes(dev, state);
-	if (ret)
-		return ret;
-
-	if (!nonblock) {
-		ret = drm_atomic_helper_wait_for_fences(dev, state, true);
-		if (ret) {
-			drm_atomic_helper_cleanup_planes(dev, state);
-			return ret;
-		}
-	}
-
-	/*
-	 * This is the point of no return - everything below never fails except
-	 * when the hw goes bonghits. Which means we can commit the new state on
-	 * the software side now.
-	 */
-
-	BUG_ON(drm_atomic_helper_swap_state(state, false) < 0);
-
-	/*
-	 * Everything below can be run asynchronously without the need to grab
-	 * any modeset locks at all under one condition: It must be guaranteed
-	 * that the asynchronous work has either been cancelled (if the driver
-	 * supports it, which at least requires that the framebuffers get
-	 * cleaned up with drm_atomic_helper_cleanup_planes()) or completed
-	 * before the new state gets committed on the software side with
-	 * drm_atomic_helper_swap_state().
-	 *
-	 * This scheme allows new atomic state updates to be prepared and
-	 * checked in parallel to the asynchronous completion of the previous
-	 * update. Which is important since compositors need to figure out the
-	 * composition of the next frame right after having submitted the
-	 * current layout.
-	 */
-
-	drm_atomic_state_get(state);
-	if (nonblock)
-		queue_work(system_unbound_wq, &state->commit_work);
-	else
-		vc4_atomic_complete_commit(state);
-
-	return 0;
-}
-
 static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev,
 					     struct drm_file *file_priv,
 					     const struct drm_mode_fb_cmd2 *mode_cmd)
@@ -966,11 +861,12 @@ vc4_atomic_check(struct drm_device *dev, struct drm_atomic_state *state)
 
 static struct drm_mode_config_helper_funcs vc4_mode_config_helpers = {
 	.atomic_commit_setup = vc4_atomic_commit_setup,
+	.atomic_commit_tail = vc4_atomic_commit_tail,
 };
 
 static const struct drm_mode_config_funcs vc4_mode_funcs = {
 	.atomic_check = vc4_atomic_check,
-	.atomic_commit = vc4_atomic_commit,
+	.atomic_commit = drm_atomic_helper_commit,
 	.fb_create = vc4_fb_create,
 };
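
For context on the helper split the commit message relies on: drm_atomic_helper_commit()
handles commit setup, fence waiting, nonblocking worker scheduling and cleanup, and calls
the driver's drm_mode_config_helper_funcs.atomic_commit_tail hook for the hardware
programming step. Below is a minimal, hypothetical sketch of such a hook, modeled on the
generic drm_atomic_helper_commit_tail(); its only deviation is waiting for flip_done
instead of vblanks, which is the one difference the commit message calls out. The names
example_commit_tail and example_mode_config_helpers are illustrative only; the real
vc4_atomic_commit_tail() in vc4_kms.c additionally manages HVS channel state, pixelvalve
muxing and the core clock, as the diff shows.

#include <drm/drm_atomic_helper.h>
#include <drm/drm_modeset_helper_vtables.h>

/* Illustrative commit_tail: same sequence as the generic
 * drm_atomic_helper_commit_tail(), except that it waits for flip_done
 * rather than for vblanks.
 */
static void example_commit_tail(struct drm_atomic_state *state)
{
	struct drm_device *dev = state->dev;

	drm_atomic_helper_commit_modeset_disables(dev, state);

	drm_atomic_helper_commit_planes(dev, state, 0);

	drm_atomic_helper_commit_modeset_enables(dev, state);

	drm_atomic_helper_commit_hw_done(state);

	/* The generic tail calls drm_atomic_helper_wait_for_vblanks() here. */
	drm_atomic_helper_wait_for_flip_done(dev, state);

	drm_atomic_helper_cleanup_planes(dev, state);
}

/* Wired up the same way the diff above wires vc4_atomic_commit_tail(). */
static const struct drm_mode_config_helper_funcs example_mode_config_helpers = {
	.atomic_commit_tail = example_commit_tail,
};

With a hook like this in place, the driver no longer needs its own atomic_commit:
drm_atomic_helper_commit() performs the nonblocking work queueing and state reference
handling that the removed vc4_atomic_commit() and commit_work() did by hand.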