From patchwork Thu Nov 14 22:09:15 2024
X-Patchwork-Submitter: Saravana Kannan
X-Patchwork-Id: 843698
X-Mailing-List: linux-pm@vger.kernel.org
Date: Thu, 14 Nov 2024 14:09:15 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-2-saravanak@google.com>
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 1/5] PM: sleep: Fix runtime PM issue in dpm_resume()
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
 Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Tim Bird,
 kernel-team@android.com, linux-pm@vger.kernel.org,
 linux-kernel@vger.kernel.org

Some devices might have their is_suspended flag set to false. In these
cases, dpm_resume() should skip doing anything for those devices.
However, runtime PM enable and a few other steps are done before
checking for this flag. Fix it so that we do things in the right order.

Signed-off-by: Saravana Kannan
---
 drivers/base/power/main.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 4a67e83300e1..86e51b9fefab 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -913,6 +913,9 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
 	if (dev->power.syscore)
 		goto Complete;
 
+	if (!dev->power.is_suspended)
+		goto Unlock;
+
 	if (dev->power.direct_complete) {
 		/* Match the pm_runtime_disable() in __device_suspend().
		 */
		pm_runtime_enable(dev);
@@ -931,9 +934,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
 	 */
 	dev->power.is_prepared = false;
 
-	if (!dev->power.is_suspended)
-		goto Unlock;
-
 	if (dev->pm_domain) {
 		info = "power domain ";
 		callback = pm_op(&dev->pm_domain->ops, state);

From patchwork Thu Nov 14 22:09:16 2024
X-Patchwork-Submitter: Saravana Kannan
X-Patchwork-Id: 843406
Date: Thu, 14 Nov 2024 14:09:16 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-3-saravanak@google.com>
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 2/5] PM: sleep: Remove unnecessary mutex lock when waiting on parent
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
 Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Tim Bird,
 kernel-team@android.com, linux-pm@vger.kernel.org,
 linux-kernel@vger.kernel.org

Locking is not needed to do get_device(dev->parent). We either get a
NULL (if the parent was cleared) or the actual parent. Also, when a
device is deleted (device_del()) and removed from the dpm_list, its
completion variable is also complete_all()-ed. So, we don't have to
worry about waiting indefinitely on a deleted parent device.
Signed-off-by: Saravana Kannan
---
 drivers/base/power/main.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 86e51b9fefab..9b9b6088e56a 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -284,18 +284,9 @@ static bool dpm_wait_for_superior(struct device *dev, bool async)
 	 * counting the parent once more unless the device has been deleted
 	 * already (in which case return right away).
 	 */
-	mutex_lock(&dpm_list_mtx);
-
-	if (!device_pm_initialized(dev)) {
-		mutex_unlock(&dpm_list_mtx);
-		return false;
-	}
-
 	parent = get_device(dev->parent);
-
-	mutex_unlock(&dpm_list_mtx);
-
-	dpm_wait(parent, async);
+	if (device_pm_initialized(dev))
+		dpm_wait(parent, async);
 	put_device(parent);
 
 	dpm_wait_for_suppliers(dev, async);

From patchwork Thu Nov 14 22:09:17 2024
X-Patchwork-Submitter: Saravana Kannan
X-Patchwork-Id: 843697
Date: Thu, 14 Nov 2024 14:09:17 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-4-saravanak@google.com>
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 3/5] PM: sleep: Add helper functions to loop through superior/subordinate devs
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
 Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Tim Bird,
 kernel-team@android.com, linux-pm@vger.kernel.org,
 linux-kernel@vger.kernel.org

We have a lot of code that does or will loop through
superior/subordinate devices and take action on them.
Refactor the code to pull out the common "loop through" functionality
into helper functions to avoid repeating the logic.

Signed-off-by: Saravana Kannan
---
 drivers/base/power/main.c | 70 ++++++++++++++++++++++-----------------
 1 file changed, 40 insertions(+), 30 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 9b9b6088e56a..aa1470ef6ac0 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -247,15 +247,22 @@ static int dpm_wait_fn(struct device *dev, void *async_ptr)
 	return 0;
 }
 
-static void dpm_wait_for_children(struct device *dev, bool async)
-{
-	device_for_each_child(dev, &async, dpm_wait_fn);
-}
-
-static void dpm_wait_for_suppliers(struct device *dev, bool async)
+static int dpm_for_each_superior(struct device *dev, void *data,
+				 int (*fn)(struct device *dev, void *data))
 {
+	struct device *parent;
+	int ret = 0, idx;
 	struct device_link *link;
-	int idx;
+
+	if (!dev)
+		return 0;
+
+	parent = get_device(dev->parent);
+	if (parent)
+		ret = fn(parent, data);
+	put_device(parent);
+	if (ret)
+		return ret;
 
 	idx = device_links_read_lock();
 
@@ -267,29 +274,20 @@ static void dpm_wait_for_suppliers(struct device *dev, bool async)
 	 * walking.
 	 */
 	list_for_each_entry_rcu_locked(link, &dev->links.suppliers, c_node)
-		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
-			dpm_wait(link->supplier, async);
+		if (READ_ONCE(link->status) != DL_STATE_DORMANT) {
+			ret = fn(link->supplier, data);
+			if (ret)
+				break;
+		}
 
 	device_links_read_unlock(idx);
+
+	return ret;
 }
 
 static bool dpm_wait_for_superior(struct device *dev, bool async)
 {
-	struct device *parent;
-
-	/*
-	 * If the device is resumed asynchronously and the parent's callback
-	 * deletes both the device and the parent itself, the parent object may
-	 * be freed while this function is running, so avoid that by reference
-	 * counting the parent once more unless the device has been deleted
-	 * already (in which case return right away).
-	 */
-	parent = get_device(dev->parent);
-	if (device_pm_initialized(dev))
-		dpm_wait(parent, async);
-	put_device(parent);
-
-	dpm_wait_for_suppliers(dev, async);
+	dpm_for_each_superior(dev, &async, dpm_wait_fn);
 
 	/*
 	 * If the parent's callback has deleted the device, attempting to resume
@@ -298,10 +296,18 @@ static bool dpm_wait_for_superior(struct device *dev, bool async)
 	return device_pm_initialized(dev);
 }
 
-static void dpm_wait_for_consumers(struct device *dev, bool async)
+static int dpm_for_each_subordinate(struct device *dev, void *data,
+				    int (*fn)(struct device *dev, void *data))
 {
+	int ret, idx;
 	struct device_link *link;
-	int idx;
+
+	if (!dev)
+		return 0;
+
+	ret = device_for_each_child(dev, data, fn);
+	if (ret)
+		return ret;
 
 	idx = device_links_read_lock();
 
@@ -315,16 +321,20 @@ static void dpm_wait_for_consumers(struct device *dev, bool async)
 	 * unregistration).
 	 */
 	list_for_each_entry_rcu_locked(link, &dev->links.consumers, s_node)
-		if (READ_ONCE(link->status) != DL_STATE_DORMANT)
-			dpm_wait(link->consumer, async);
+		if (READ_ONCE(link->status) != DL_STATE_DORMANT) {
+			ret = fn(link->consumer, data);
+			if (ret)
+				break;
+		}
 
 	device_links_read_unlock(idx);
+
+	return ret;
 }
 
 static void dpm_wait_for_subordinate(struct device *dev, bool async)
 {
-	dpm_wait_for_children(dev, async);
-	dpm_wait_for_consumers(dev, async);
+	dpm_for_each_subordinate(dev, &async, dpm_wait_fn);
 }
 
 /**

From patchwork Thu Nov 14 22:09:18 2024
X-Patchwork-Submitter: Saravana Kannan
X-Patchwork-Id: 843405
Date: Thu, 14 Nov 2024 14:09:18 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-5-saravanak@google.com>
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 4/5] PM: sleep: Do breadth first suspend/resume for async suspend/resume
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
 Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Tim Bird,
 kernel-team@android.com, linux-pm@vger.kernel.org,
 linux-kernel@vger.kernel.org

The dpm_list used for suspend/resume ensures that all superior devices
(parents and suppliers) precede subordinate devices (children and
consumers).

Current async resume logic:
---------------------------
* For each resume phase (except the "complete" phase, which is always
  serialized), the resume logic first queues all async devices in the
  dpm_list. It then loops through the dpm_list again to resume the sync
  devices one by one.
* Async devices wait for all their superior devices to resume before
  starting their own resume operation.
* This process results in multiple sleep and wake-up cycles before an
  async device actually resumes. This sleeping also causes kworker
  threads to stall with work for a period. Consequently, the workqueue
  framework spins up more kworker threads to handle the other async
  devices.
* The end result is excessive thread creation, wake-ups, sleeps, and
  context switches for every async device. This overhead makes a full
  async resume (with all devices marked as async-capable) much slower
  than a synchronous resume.

Current async suspend logic:
----------------------------
* The async suspend logic differs from the async resume logic. The
  suspend logic loops through the dpm_list. When it finds an async
  device, it queues the work and moves on. However, if it encounters a
  sync device, it waits until the sync device (and all its subordinate
  devices) have suspended before proceeding to the next device.
  Therefore, an async suspend device can be left waiting on an
  unrelated device before even being queued.
* Once queued, an async device experiences the same inefficiencies as
  in the resume logic (thread creation, wake-ups, sleeps, and context
  switches).

On a Pixel 6, averaging over 100 suspend/resume cycles, the data is as
follows:

+---------------------------+-----------+------------+----------+
| Phase                     | Full sync | Full async | % change |
+---------------------------+-----------+------------+----------+
| Total dpm_suspend*() time | 107 ms    | 72 ms      | -33%     |
+---------------------------+-----------+------------+----------+
| Total dpm_resume*() time  | 75 ms     | 90 ms      | +20%     |
+---------------------------+-----------+------------+----------+
| Sum                       | 182 ms    | 162 ms     | -11%     |
+---------------------------+-----------+------------+----------+

This shows that full async suspend/resume is not a viable option. It
makes the user-visible resume phase slower and only improves the
overall time by 11%.

To fix all this, this patch introduces a new async suspend/resume
logic.

New suspend/resume logic:
-------------------------
* For each suspend/resume phase (except "complete" and "prepare," which
  are always serialized), the logic first queues only the async devices
  that don't have to wait for any subordinates (for suspend) or
  superiors (for resume). It then loops through the dpm_list again to
  suspend/resume the sync devices one by one.
* When a device (sync or async) successfully suspends/resumes, it
  examines its superiors/subordinates and queues only the async devices
  that don't need to wait for any subordinates/superiors.

With this new logic:
* Queued async devices don't have to wait for completion and are always
  ready to perform their suspend/resume operation.
* The queue of async devices remains short.
* kworkers never sleep for extended periods, and the workqueue
  framework doesn't spin up many new threads to handle a backlog of
  async devices.
* The result is approximately NCPU kworker threads running in parallel
  without sleeping until all async devices finish.
On a Pixel 6, averaging over 100 suspend/resume cycles, the new logic
yields improved results:

+---------------------------+---------------+----------------+----------+
| Phase                     | Old full sync | New full async | % change |
+---------------------------+---------------+----------------+----------+
| Total dpm_suspend*() time | 107 ms        | 60 ms          | -44%     |
+---------------------------+---------------+----------------+----------+
| Total dpm_resume*() time  | 75 ms         | 74 ms          | -1%      |
+---------------------------+---------------+----------------+----------+
| Sum                       | 182 ms        | 134 ms         | -26%     |
+---------------------------+---------------+----------------+----------+

Signed-off-by: Saravana Kannan
---
 drivers/base/power/main.c | 242 ++++++++++++++++++++++++++++++++------
 1 file changed, 205 insertions(+), 37 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index aa1470ef6ac0..65c195be48b8 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -121,6 +121,12 @@ void device_pm_unlock(void)
 	mutex_unlock(&dpm_list_mtx);
 }
 
+static bool is_async(struct device *dev)
+{
+	return dev->power.async_suspend && pm_async_enabled
+		&& !pm_trace_is_enabled();
+}
+
 /**
  * device_pm_add - Add a device to the PM core's list of active devices.
  * @dev: Device to add to the list.
@@ -287,6 +293,9 @@ static int dpm_for_each_superior(struct device *dev, void *data,
 
 static bool dpm_wait_for_superior(struct device *dev, bool async)
 {
+	if (is_async(dev))
+		return true;
+
 	dpm_for_each_superior(dev, &async, dpm_wait_fn);
 
 	/*
@@ -334,9 +343,22 @@ static int dpm_for_each_subordinate(struct device *dev, void *data,
 
 static void dpm_wait_for_subordinate(struct device *dev, bool async)
 {
+	if (is_async(dev))
+		return;
+
 	dpm_for_each_subordinate(dev, &async, dpm_wait_fn);
 }
 
+static int dpm_check_fn(struct device *dev, void *data)
+{
+	return completion_done(&dev->power.completion) ?
+		0 : -EBUSY;
+}
+
+static int dpm_check_subordinate(struct device *dev)
+{
+	return dpm_for_each_subordinate(dev, NULL, dpm_check_fn);
+}
+
 /**
  * pm_op - Return the PM operation appropriate for given PM event.
  * @ops: PM operations to choose from.
@@ -578,33 +600,39 @@ bool dev_pm_skip_resume(struct device *dev)
 	return !dev->power.must_resume;
 }
 
-static bool is_async(struct device *dev)
+static void dpm_async_fn(struct device *dev, async_func_t func)
 {
-	return dev->power.async_suspend && pm_async_enabled
-		&& !pm_trace_is_enabled();
-}
-
-static bool dpm_async_fn(struct device *dev, async_func_t func)
-{
-	reinit_completion(&dev->power.completion);
-
-	if (is_async(dev)) {
-		dev->power.async_in_progress = true;
-
-		get_device(dev);
+	/*
+	 * Multiple async devices could trigger the async queuing of this
+	 * device. Make sure not to double queue it.
+	 */
+	spin_lock(&dev->power.lock);
+	if (dev->power.async_in_progress) {
+		spin_unlock(&dev->power.lock);
+		return;
+	}
+	dev->power.async_in_progress = true;
+	spin_unlock(&dev->power.lock);
 
-		if (async_schedule_dev_nocall(func, dev))
-			return true;
+	get_device(dev);
 
-		put_device(dev);
-	}
 	/*
-	 * Because async_schedule_dev_nocall() above has returned false or it
-	 * has not been called at all, func() is not running and it is safe to
-	 * update the async_in_progress flag without extra synchronization.
+	 * We aren't going to call any callbacks if the device has none. Also,
+	 * direct_complete means all the resume and suspend callbacks should be
+	 * skipped and the device should go straight to dpm_complete(). In both
+	 * of these case, calling the func() synchronously will be a lot quicker
+	 * than even queuing the async work. So, do that.
 	 */
-	dev->power.async_in_progress = false;
-	return false;
+	if (dev->power.no_pm_callbacks || dev->power.direct_complete) {
+		func(dev, 0);
+		return;
+	}
+
+	if (async_schedule_dev_nocall(func, dev))
+		return;
+
+	WARN(1, "Unable to schedule async suspend/resume calls!\n");
+	put_device(dev);
 }
 
 /**
@@ -692,12 +720,47 @@ static void device_resume_noirq(struct device *dev, pm_message_t state, bool asy
 		return error;
 	}
 }
 
+static void dpm_reinit_dev_state(struct list_head *list)
+{
+	struct device *dev;
+
+	list_for_each_entry(dev, list, power.entry) {
+		reinit_completion(&dev->power.completion);
+		dev->power.async_in_progress = false;
+	}
+}
+
+static int dpm_check_superior(struct device *dev)
+{
+	return dpm_for_each_superior(dev, NULL, dpm_check_fn);
+}
+
+static int dpm_async_queue_resume_ready_fn(struct device *dev, void *data)
+{
+	if (!is_async(dev) || dpm_check_superior(dev))
+		return 0;
+
+	dpm_async_fn(dev, data);
+	return 0;
+}
+
+static void dpm_async_resume_loop(struct list_head *from, async_func_t func)
+{
+	struct device *dev;
+
+	list_for_each_entry(dev, from, power.entry)
+		dpm_async_queue_resume_ready_fn(dev, func);
+}
+
 static void async_resume_noirq(void *data, async_cookie_t cookie)
 {
 	struct device *dev = data;
 
 	device_resume_noirq(dev, pm_transition, true);
 	put_device(dev);
+
+	dpm_for_each_subordinate(dev, async_resume_noirq,
+				 dpm_async_queue_resume_ready_fn);
 }
 
 static void dpm_noirq_resume_devices(pm_message_t state)
@@ -712,23 +775,26 @@
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_noirq_list);
+
 	/*
 	 * Trigger the resume of "async" devices upfront so they don't have to
 	 * wait for the "non-async" ones they don't depend on.
 	 */
-	list_for_each_entry(dev, &dpm_noirq_list, power.entry)
-		dpm_async_fn(dev, async_resume_noirq);
+	dpm_async_resume_loop(&dpm_noirq_list, async_resume_noirq);
 
 	while (!list_empty(&dpm_noirq_list)) {
 		dev = to_device(dpm_noirq_list.next);
 
 		list_move_tail(&dev->power.entry, &dpm_late_early_list);
 
-		if (!dev->power.async_in_progress) {
+		if (!is_async(dev)) {
 			get_device(dev);
 
 			mutex_unlock(&dpm_list_mtx);
 
 			device_resume_noirq(dev, state, false);
+			dpm_for_each_subordinate(dev, async_resume_noirq,
+						 dpm_async_queue_resume_ready_fn);
 
 			put_device(dev);
 
@@ -834,6 +900,9 @@ static void async_resume_early(void *data, async_cookie_t cookie)
 
 	device_resume_early(dev, pm_transition, true);
 	put_device(dev);
+
+	dpm_for_each_subordinate(dev, async_resume_early,
+				 dpm_async_queue_resume_ready_fn);
 }
 
 /**
@@ -852,23 +921,26 @@ void dpm_resume_early(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_late_early_list);
+
 	/*
 	 * Trigger the resume of "async" devices upfront so they don't have to
 	 * wait for the "non-async" ones they don't depend on.
 	 */
-	list_for_each_entry(dev, &dpm_late_early_list, power.entry)
-		dpm_async_fn(dev, async_resume_early);
+	dpm_async_resume_loop(&dpm_late_early_list, async_resume_early);
 
 	while (!list_empty(&dpm_late_early_list)) {
 		dev = to_device(dpm_late_early_list.next);
 
 		list_move_tail(&dev->power.entry, &dpm_suspended_list);
 
-		if (!dev->power.async_in_progress) {
+		if (!is_async(dev)) {
 			get_device(dev);
 
 			mutex_unlock(&dpm_list_mtx);
 
 			device_resume_early(dev, state, false);
+			dpm_for_each_subordinate(dev, async_resume_early,
+						 dpm_async_queue_resume_ready_fn);
 
 			put_device(dev);
 
@@ -996,6 +1068,9 @@ static void async_resume(void *data, async_cookie_t cookie)
 
 	device_resume(dev, pm_transition, true);
 	put_device(dev);
+
+	dpm_for_each_subordinate(dev, async_resume,
+				 dpm_async_queue_resume_ready_fn);
 }
 
 /**
@@ -1018,23 +1093,26 @@ void dpm_resume(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_suspended_list);
+
 	/*
 	 * Trigger the resume of "async" devices upfront so they don't have to
 	 * wait for the "non-async" ones they don't depend on.
 	 */
-	list_for_each_entry(dev, &dpm_suspended_list, power.entry)
-		dpm_async_fn(dev, async_resume);
+	dpm_async_resume_loop(&dpm_suspended_list, async_resume);
 
 	while (!list_empty(&dpm_suspended_list)) {
 		dev = to_device(dpm_suspended_list.next);
 
 		list_move_tail(&dev->power.entry, &dpm_prepared_list);
 
-		if (!dev->power.async_in_progress) {
+		if (!is_async(dev)) {
 			get_device(dev);
 
 			mutex_unlock(&dpm_list_mtx);
 
 			device_resume(dev, state, false);
+			dpm_for_each_subordinate(dev, async_resume,
+						 dpm_async_queue_resume_ready_fn);
 
 			put_device(dev);
 
@@ -1274,12 +1352,35 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state, bool asy
 	return error;
 }
 
+static int dpm_async_queue_suspend_ready_fn(struct device *dev, void *data)
+{
+	if (!is_async(dev) || dpm_check_subordinate(dev))
+		return 0;
+
+	dpm_async_fn(dev, data);
+	return 0;
+}
+
+static void dpm_async_suspend_loop(struct list_head *from, async_func_t func)
+{
+	struct device *dev;
+
+	list_for_each_entry_reverse(dev, from, power.entry)
+		dpm_async_queue_suspend_ready_fn(dev, func);
+}
+
 static void async_suspend_noirq(void *data, async_cookie_t cookie)
 {
 	struct device *dev = data;
 
 	device_suspend_noirq(dev, pm_transition, true);
 	put_device(dev);
+
+	if (async_error)
+		return;
+
+	dpm_for_each_superior(dev, async_suspend_noirq,
+			      dpm_async_queue_suspend_ready_fn);
 }
 
 static int dpm_noirq_suspend_devices(pm_message_t state)
@@ -1294,12 +1395,20 @@
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_late_early_list);
+
+	/*
+	 * Trigger the "async" devices upfront so they don't have to wait for
+	 * the "non-async" ones they don't depend on.
+	 */
+	dpm_async_suspend_loop(&dpm_late_early_list, async_suspend_noirq);
+
 	while (!list_empty(&dpm_late_early_list)) {
 		struct device *dev = to_device(dpm_late_early_list.prev);
 
 		list_move(&dev->power.entry, &dpm_noirq_list);
 
-		if (dpm_async_fn(dev, async_suspend_noirq))
+		if (is_async(dev))
 			continue;
 
 		get_device(dev);
 
@@ -1307,13 +1416,20 @@ static int dpm_noirq_suspend_devices(pm_message_t state)
 		mutex_unlock(&dpm_list_mtx);
 
 		error = device_suspend_noirq(dev, state, false);
+		if (!error && !async_error)
+			dpm_for_each_superior(dev, async_suspend_noirq,
+					      dpm_async_queue_suspend_ready_fn);
 
 		put_device(dev);
 
 		mutex_lock(&dpm_list_mtx);
 
-		if (error || async_error)
+		if (error || async_error) {
+			/* See explanation in dpm_suspend() */
+			list_splice_init(&dpm_late_early_list, &dpm_noirq_list);
 			break;
+		}
+
 	}
 
 	mutex_unlock(&dpm_list_mtx);
 
@@ -1447,6 +1563,12 @@ static void async_suspend_late(void *data, async_cookie_t cookie)
 
 	device_suspend_late(dev, pm_transition, true);
 	put_device(dev);
+
+	if (async_error)
+		return;
+
+	dpm_for_each_superior(dev, async_suspend_late,
+			      dpm_async_queue_suspend_ready_fn);
 }
 
 /**
@@ -1467,12 +1589,20 @@ int dpm_suspend_late(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_suspended_list);
+
+	/*
+	 * Trigger the "async" devices upfront so they don't have to wait for
+	 * the "non-async" ones they don't depend on.
+	 */
+	dpm_async_suspend_loop(&dpm_suspended_list, async_suspend_late);
+
 	while (!list_empty(&dpm_suspended_list)) {
 		struct device *dev = to_device(dpm_suspended_list.prev);
 
 		list_move(&dev->power.entry, &dpm_late_early_list);
 
-		if (dpm_async_fn(dev, async_suspend_late))
+		if (is_async(dev))
 			continue;
 
 		get_device(dev);
 
@@ -1480,13 +1610,19 @@
 		mutex_unlock(&dpm_list_mtx);
 
 		error = device_suspend_late(dev, state, false);
+		if (!error && !async_error)
+			dpm_for_each_superior(dev, async_suspend_late,
+					      dpm_async_queue_suspend_ready_fn);
 
 		put_device(dev);
 
 		mutex_lock(&dpm_list_mtx);
 
-		if (error || async_error)
+		if (error || async_error) {
+			/* See explanation in dpm_suspend() */
+			list_splice_init(&dpm_suspended_list, &dpm_late_early_list);
 			break;
+		}
 	}
 
 	mutex_unlock(&dpm_list_mtx);
 
@@ -1712,6 +1848,12 @@ static void async_suspend(void *data, async_cookie_t cookie)
 
 	device_suspend(dev, pm_transition, true);
 	put_device(dev);
+
+	if (async_error)
+		return;
+
+	dpm_for_each_superior(dev, async_suspend,
+			      dpm_async_queue_suspend_ready_fn);
 }
 
 /**
@@ -1734,12 +1876,20 @@ int dpm_suspend(pm_message_t state)
 
 	mutex_lock(&dpm_list_mtx);
 
+	dpm_reinit_dev_state(&dpm_prepared_list);
+
+	/*
+	 * Trigger the "async" devices upfront so they don't have to wait for
+	 * the "non-async" ones they don't depend on.
+	 */
+	dpm_async_suspend_loop(&dpm_prepared_list, async_suspend);
+
 	while (!list_empty(&dpm_prepared_list)) {
 		struct device *dev = to_device(dpm_prepared_list.prev);
 
 		list_move(&dev->power.entry, &dpm_suspended_list);
 
-		if (dpm_async_fn(dev, async_suspend))
+		if (is_async(dev))
 			continue;
 
 		get_device(dev);
 
@@ -1747,13 +1897,31 @@
 		mutex_unlock(&dpm_list_mtx);
 
 		error = device_suspend(dev, state, false);
+		if (!error && !async_error)
+			dpm_for_each_superior(dev, async_suspend,
+					      dpm_async_queue_suspend_ready_fn);
 
 		put_device(dev);
 
 		mutex_lock(&dpm_list_mtx);
 
-		if (error || async_error)
+		if (error || async_error) {
+			/*
+			 * async devices that come after the current sync device
+			 * in the list might have already suspended. We need to
+			 * make sure those async devices are resumed again. By
+			 * moving all devices to the next list, we make sure the
+			 * error handling path resumes all suspended devices.
+			 *
+			 * However, we also need to avoid resuming devices that
+			 * haven't been suspended yet. Fortunately, the existing
+			 * is_suspended/is_noirq_suspended/is_late_suspended
+			 * flags take care of skipping the resume of a device if
+			 * it hasn't been suspended yet.
+			 */
+			list_splice_init(&dpm_prepared_list, &dpm_suspended_list);
 			break;
+		}
 	}
 
 	mutex_unlock(&dpm_list_mtx);

From patchwork Thu Nov 14 22:09:19 2024
X-Patchwork-Submitter: Saravana Kannan
X-Patchwork-Id: 843696
Date: Thu, 14 Nov 2024 14:09:19 -0800
In-Reply-To: <20241114220921.2529905-1-saravanak@google.com>
Message-Id: <20241114220921.2529905-6-saravanak@google.com>
References: <20241114220921.2529905-1-saravanak@google.com>
Subject: [PATCH v1 5/5] PM: sleep: Spread out async kworker threads during
 dpm_resume*() phases
From: Saravana Kannan
To: "Rafael J. Wysocki", Pavel Machek, Len Brown, Greg Kroah-Hartman,
 Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
 Valentin Schneider
Cc: Saravana Kannan, Geert Uytterhoeven, Marek Vasut, Tim Bird,
 kernel-team@android.com, linux-pm@vger.kernel.org,
 linux-kernel@vger.kernel.org

As of today, the scheduler doesn't spread out all the kworker threads
across all the available CPUs during suspend/resume. This causes
significant resume latency during the dpm_resume*() phases. System resume
latency is a very user-visible event. Reducing the latency is more
important than trying to be energy aware during that period.

Since there are no userspace processes running during this time and it is
a very short time window, we can simply disable EAS during resume so that
the parallel resume of the devices is spread across all the CPUs.
On a Pixel 6, averaging over 100 suspend/resume cycles, the new logic
plus disabling EAS for resume yields significant improvements:

+---------------------------+---------------+----------------+----------+
| Phase                     | Old full sync | New full async | % change |
|                           |               | + EAS disabled |          |
+---------------------------+---------------+----------------+----------+
| Total dpm_suspend*() time |        107 ms |          62 ms |     -42% |
+---------------------------+---------------+----------------+----------+
| Total dpm_resume*() time  |         75 ms |          61 ms |     -19% |
+---------------------------+---------------+----------------+----------+
| Sum                       |        182 ms |         123 ms |     -32% |
+---------------------------+---------------+----------------+----------+

Signed-off-by: Saravana Kannan
---
 kernel/power/suspend.c  | 16 ++++++++++++++++
 kernel/sched/topology.c | 13 +++++++++++++
 2 files changed, 29 insertions(+)

diff --git a/kernel/power/suspend.c b/kernel/power/suspend.c
index 09f8397bae15..7304dc39958f 100644
--- a/kernel/power/suspend.c
+++ b/kernel/power/suspend.c
@@ -393,6 +393,12 @@ void __weak arch_suspend_enable_irqs(void)
 	local_irq_enable();
 }
 
+/*
+ * Intentionally not part of a header file to avoid risk of abuse by other
+ * drivers.
+ */
+void sched_set_energy_aware(unsigned int enable);
+
 /**
  * suspend_enter - Make the system enter the given sleep state.
  * @state: System sleep state to enter.
@@ -468,6 +474,15 @@ static int suspend_enter(suspend_state_t state, bool *wakeup)
 
 Platform_wake:
 	platform_resume_noirq(state);
 
+	/*
+	 * We do this only for resume instead of both suspend and resume for
+	 * these reasons:
+	 * - Performance is more important than power for resume.
+	 * - Power spent entering suspend is more important for suspend. Also,
+	 *   strangely, disabling EAS was making suspend a few milliseconds
+	 *   slower in my testing.
+	 */
+	sched_set_energy_aware(0);
 	dpm_resume_noirq(PMSG_RESUME);
 
 Platform_early_resume:
@@ -520,6 +535,7 @@ int suspend_devices_and_enter(suspend_state_t state)
  Resume_devices:
 	suspend_test_start();
 	dpm_resume_end(PMSG_RESUME);
+	sched_set_energy_aware(1);
 	suspend_test_finish("resume devices");
 	trace_suspend_resume(TPS("resume_console"), state, true);
 	resume_console();
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 9748a4c8d668..c069c0b17cbf 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -284,6 +284,19 @@ void rebuild_sched_domains_energy(void)
 	mutex_unlock(&sched_energy_mutex);
 }
 
+void sched_set_energy_aware(unsigned int enable)
+{
+	int state;
+
+	if (!sched_is_eas_possible(cpu_active_mask))
+		return;
+
+	sysctl_sched_energy_aware = enable;
+	state = static_branch_unlikely(&sched_energy_present);
+	if (state != sysctl_sched_energy_aware)
+		rebuild_sched_domains_energy();
+}
+
 #ifdef CONFIG_PROC_SYSCTL
 static int sched_energy_aware_handler(const struct ctl_table *table, int write,
 		void *buffer, size_t *lenp, loff_t *ppos)