From patchwork Fri Jun 13 09:44:21 2025
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Changwoo Min <changwoo@igalia.com>
X-Patchwork-Id: 896589
From: Changwoo Min <changwoo@igalia.com>
To: lukasz.luba@arm.com, rafael@kernel.org, len.brown@intel.com, pavel@kernel.org
Cc: christian.loehle@arm.com, tj@kernel.org, kernel-dev@igalia.com, linux-pm@vger.kernel.org, sched-ext@lists.linux.dev, linux-kernel@vger.kernel.org, Changwoo Min <changwoo@igalia.com>
Subject: [PATCH v2 03/10] PM: EM: Assign a unique ID when creating a performance domain.
Date: Fri, 13 Jun 2025 18:44:21 +0900
Message-ID: <20250613094428.267791-4-changwoo@igalia.com>
X-Mailer: git-send-email 2.49.0
In-Reply-To: <20250613094428.267791-1-changwoo@igalia.com>
References: <20250613094428.267791-1-changwoo@igalia.com>
Precedence: bulk
X-Mailing-List: linux-pm@vger.kernel.org

It is necessary to refer to a specific performance domain from userspace,
for example when the energy model of a particular performance domain is
updated. To this end, assign a unique ID to each performance domain to
address it, and manage the domains in a global linked list so that a
specific one can be looked up by matching its ID.

An IDA is used for the ID assignment, and a mutex is used to protect the
global list from concurrent access. Note that em_pd_list_mutex is not
supposed to be held while holding em_pd_mutex, to avoid an ABBA deadlock.

Signed-off-by: Changwoo Min <changwoo@igalia.com>
---
 include/linux/energy_model.h |  4 ++++
 kernel/power/energy_model.c  | 30 +++++++++++++++++++++++++++++-
 2 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/include/linux/energy_model.h b/include/linux/energy_model.h
index 7fa1eb3cc823..2f5c73fcdfe5 100644
--- a/include/linux/energy_model.h
+++ b/include/linux/energy_model.h
@@ -54,6 +54,8 @@ struct em_perf_table {
 /**
  * struct em_perf_domain - Performance domain
  * @em_table: Pointer to the runtime modifiable em_perf_table
+ * @node: node in em_pd_list (in energy_model.c)
+ * @id: A unique ID number for each performance domain
  * @nr_perf_states: Number of performance states
  * @min_perf_state: Minimum allowed Performance State index
  * @max_perf_state: Maximum allowed Performance State index
@@ -71,6 +73,8 @@ struct em_perf_table {
  */
 struct em_perf_domain {
 	struct em_perf_table __rcu *em_table;
+	struct list_head node;
+	int id;
 	int nr_perf_states;
 	int min_perf_state;
 	int max_perf_state;
diff --git a/kernel/power/energy_model.c b/kernel/power/energy_model.c
index ea7995a25780..58671ac142db 100644
--- a/kernel/power/energy_model.c
+++ b/kernel/power/energy_model.c
@@ -23,6 +23,16 @@
  */
 static DEFINE_MUTEX(em_pd_mutex);
 
+/*
+ * Manage performance domains with IDs. One can iterate the performance domains
+ * through the list and pick one with their associated ID. The mutex serializes
+ * the list access. When holding em_pd_list_mutex, em_pd_mutex should not be
+ * taken to avoid potential deadlock.
+ */
+static DEFINE_IDA(em_pd_ida);
+static LIST_HEAD(em_pd_list);
+static DEFINE_MUTEX(em_pd_list_mutex);
+
 static void em_cpufreq_update_efficiencies(struct device *dev,
 					   struct em_perf_state *table);
 static void em_check_capacity_update(void);
@@ -396,7 +406,7 @@ static int em_create_pd(struct device *dev, int nr_states,
 	struct em_perf_table *em_table;
 	struct em_perf_domain *pd;
 	struct device *cpu_dev;
-	int cpu, ret, num_cpus;
+	int cpu, ret, num_cpus, id;
 
 	if (_is_cpu_device(dev)) {
 		num_cpus = cpumask_weight(cpus);
@@ -420,6 +430,13 @@ static int em_create_pd(struct device *dev, int nr_states,
 
 	pd->nr_perf_states = nr_states;
 
+	INIT_LIST_HEAD(&pd->node);
+
+	id = ida_alloc(&em_pd_ida, GFP_KERNEL);
+	if (id < 0)
+		return -ENOMEM;
+	pd->id = id;
+
 	em_table = em_table_alloc(pd);
 	if (!em_table)
 		goto free_pd;
@@ -444,6 +461,7 @@ static int em_create_pd(struct device *dev, int nr_states,
 	kfree(em_table);
 free_pd:
 	kfree(pd);
+	ida_free(&em_pd_ida, id);
 	return -EINVAL;
 }
 
@@ -639,6 +657,10 @@ int em_dev_register_perf_domain(struct device *dev, unsigned int nr_states,
 	if (_is_cpu_device(dev))
 		em_check_capacity_update();
 
+	mutex_lock(&em_pd_list_mutex);
+	list_add_tail(&dev->em_pd->node, &em_pd_list);
+	mutex_unlock(&em_pd_list_mutex);
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(em_dev_register_perf_domain);
@@ -657,6 +679,10 @@ void em_dev_unregister_perf_domain(struct device *dev)
 	if (_is_cpu_device(dev))
 		return;
 
+	mutex_lock(&em_pd_list_mutex);
+	list_del_init(&dev->em_pd->node);
+	mutex_unlock(&em_pd_list_mutex);
+
 	/*
	 * The mutex separates all register/unregister requests and protects
 	 * from potential clean-up/setup issues in the debugfs directories.
@@ -668,6 +694,8 @@ void em_dev_unregister_perf_domain(struct device *dev)
 	em_table_free(rcu_dereference_protected(dev->em_pd->em_table,
 						lockdep_is_held(&em_pd_mutex)));
 
+	ida_free(&em_pd_ida, dev->em_pd->id);
+
 	kfree(dev->em_pd);
 	dev->em_pd = NULL;
 	mutex_unlock(&em_pd_mutex);
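
For reference only, and not part of this patch: a consumer of the new list
could resolve an ID back to its performance domain with a small helper along
the lines of the sketch below. The name em_pd_get_by_id() is hypothetical;
the point is that the lookup takes only em_pd_list_mutex and never nests it
with em_pd_mutex, matching the locking rule noted in the commit message and
in the comment above em_pd_ida.

/*
 * Hypothetical sketch (not in this patch): look up a performance domain by
 * its ID.  Only em_pd_list_mutex is taken; em_pd_mutex is never acquired
 * while the list lock is held.
 */
static struct em_perf_domain *em_pd_get_by_id(int id)
{
	struct em_perf_domain *pd, *found = NULL;

	mutex_lock(&em_pd_list_mutex);
	list_for_each_entry(pd, &em_pd_list, node) {
		if (pd->id == id) {
			found = pd;
			break;
		}
	}
	mutex_unlock(&em_pd_list_mutex);

	return found;
}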