From patchwork Mon Sep 3 14:27:57 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 145793
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v5 1/5] sched/topology: Adding function partition_sched_domains_locked()
Date: Mon, 3 Sep 2018 16:27:57 +0200
Message-Id: <20180903142801.20046-2-juri.lelli@redhat.com>
In-Reply-To: <20180903142801.20046-1-juri.lelli@redhat.com>
References: <20180903142801.20046-1-juri.lelli@redhat.com>

From: Mathieu Poirier

Introduce function partition_sched_domains_locked() by taking the mutex
locking code out of the original function. That way the work done by
partition_sched_domains_locked() can be reused without dropping the
mutex lock.

No change of functionality is introduced by this patch.

Signed-off-by: Mathieu Poirier
---
 include/linux/sched/topology.h | 10 ++++++++++
 kernel/sched/topology.c        | 17 +++++++++++++----
 2 files changed, 23 insertions(+), 4 deletions(-)

--
2.17.1

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index 26347741ba50..57997caf61b6 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -162,6 +162,10 @@ static inline struct cpumask *sched_domain_span(struct sched_domain *sd)
         return to_cpumask(sd->span);
 }
 
+extern void partition_sched_domains_locked(int ndoms_new,
+                                           cpumask_var_t doms_new[],
+                                           struct sched_domain_attr *dattr_new);
+
 extern void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
                                     struct sched_domain_attr *dattr_new);
 
@@ -206,6 +210,12 @@ extern void set_sched_topology(struct sched_domain_topology_level *tl);
 
 struct sched_domain_attr;
 
+static inline void
+partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+                               struct sched_domain_attr *dattr_new)
+{
+}
+
 static inline void
 partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
                         struct sched_domain_attr *dattr_new)
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index 56a0fed30c0a..fb7ae691cb82 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1850,15 +1850,15 @@ static int dattrs_equal(struct sched_domain_attr *cur, int idx_cur,
  * ndoms_new == 0 is a special case for destroying existing domains,
  * and it will not create the default domain.
  *
- * Call with hotplug lock held
+ * Call with hotplug lock and sched_domains_mutex held
  */
-void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-                             struct sched_domain_attr *dattr_new)
+void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
+                                    struct sched_domain_attr *dattr_new)
 {
         int i, j, n;
         int new_topology;
 
-        mutex_lock(&sched_domains_mutex);
+        lockdep_assert_held(&sched_domains_mutex);
 
         /* Always unregister in case we don't destroy any domains: */
         unregister_sched_domain_sysctl();
@@ -1923,6 +1923,15 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
         ndoms_cur = ndoms_new;
 
         register_sched_domain_sysctl();
+}
+/*
+ * Call with hotplug lock held
+ */
+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+                             struct sched_domain_attr *dattr_new)
+{
+        mutex_lock(&sched_domains_mutex);
+        partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
         mutex_unlock(&sched_domains_mutex);
 }
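The split above follows the kernel's usual *_locked naming convention: the
public entry point does nothing but take and release the mutex, while the
real work lives in a worker that asserts the lock is already held. Callers
that need to keep sched_domains_mutex across additional work -- as patch
5/5 in this series does -- call the worker directly. Below is a minimal
userspace sketch of the same pattern using POSIX threads; the names and
the pthread API are illustrative stand-ins, not kernel code.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t domains_mutex = PTHREAD_MUTEX_INITIALIZER;
static int ndoms;                 /* stand-in for sched-domain state */

/* Worker: the caller must already hold domains_mutex.  The kernel
 * version enforces this with lockdep_assert_held(). */
static void repartition_locked(int ndoms_new)
{
        ndoms = ndoms_new;
}

/* Public wrapper: takes and drops the lock itself. */
static void repartition(int ndoms_new)
{
        pthread_mutex_lock(&domains_mutex);
        repartition_locked(ndoms_new);
        pthread_mutex_unlock(&domains_mutex);
}

int main(void)
{
        /* A caller that needs more work inside the same critical
         * section calls the _locked variant directly. */
        pthread_mutex_lock(&domains_mutex);
        repartition_locked(2);
        /* ... further work under the same lock ... */
        pthread_mutex_unlock(&domains_mutex);

        repartition(1);
        printf("ndoms=%d\n", ndoms);
        return 0;
}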
From patchwork Mon Sep 3 14:27:58 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 145794
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v5 2/5] sched/core: Streamlining calls to task_rq_unlock()
Date: Mon, 3 Sep 2018 16:27:58 +0200
Message-Id: <20180903142801.20046-3-juri.lelli@redhat.com>
In-Reply-To: <20180903142801.20046-1-juri.lelli@redhat.com>
References: <20180903142801.20046-1-juri.lelli@redhat.com>
From: Mathieu Poirier

Calls to task_rq_unlock() are done several times in function
__sched_setscheduler(). This is fine when only the rq lock needs to be
handled, but not so much when other locks come into play. This patch
streamlines the release of the rq lock so that only one location needs
to be modified when dealing with more than one lock.

No change of functionality is introduced by this patch.

Signed-off-by: Mathieu Poirier
Reviewed-by: Steven Rostedt (VMware)
---
 kernel/sched/core.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

--
2.17.1

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index deafa9fe602b..22f5622cba69 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4232,8 +4232,8 @@ static int __sched_setscheduler(struct task_struct *p,
          * Changing the policy of the stop threads its a very bad idea:
          */
         if (p == rq->stop) {
-                task_rq_unlock(rq, p, &rf);
-                return -EINVAL;
+                retval = -EINVAL;
+                goto unlock;
         }
 
         /*
@@ -4249,8 +4249,8 @@ static int __sched_setscheduler(struct task_struct *p,
                         goto change;
 
                 p->sched_reset_on_fork = reset_on_fork;
-                task_rq_unlock(rq, p, &rf);
-                return 0;
+                retval = 0;
+                goto unlock;
         }
change:
@@ -4263,8 +4263,8 @@
         if (rt_bandwidth_enabled() && rt_policy(policy) &&
                         task_group(p)->rt_bandwidth.rt_runtime == 0 &&
                         !task_group_is_autogroup(task_group(p))) {
-                task_rq_unlock(rq, p, &rf);
-                return -EPERM;
+                retval = -EPERM;
+                goto unlock;
         }
 #endif
 #ifdef CONFIG_SMP
@@ -4279,8 +4279,8 @@ static int __sched_setscheduler(struct task_struct *p,
                  */
                 if (!cpumask_subset(span, &p->cpus_allowed) ||
                     rq->rd->dl_bw.bw == 0) {
-                        task_rq_unlock(rq, p, &rf);
-                        return -EPERM;
+                        retval = -EPERM;
+                        goto unlock;
                 }
         }
 #endif
@@ -4299,8 +4299,8 @@ static int __sched_setscheduler(struct task_struct *p,
          * is available.
          */
         if ((dl_policy(policy) || dl_task(p)) &&
             sched_dl_overflow(p, policy, attr)) {
-                task_rq_unlock(rq, p, &rf);
-                return -EBUSY;
+                retval = -EBUSY;
+                goto unlock;
         }
 
         p->sched_reset_on_fork = reset_on_fork;
@@ -4356,6 +4356,10 @@ static int __sched_setscheduler(struct task_struct *p,
         preempt_enable();
 
         return 0;
+
+unlock:
+        task_rq_unlock(rq, p, &rf);
+        return retval;
 }
 
 static int _sched_setscheduler(struct task_struct *p, int policy,
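The transformation above is the classic single-exit idiom: every failure
path jumps to one label where the lock is dropped, so a later change that
adds a second lock (patch 4/5 does exactly that) only has to touch the
label. A self-contained sketch of the idiom, with made-up names and errno
values standing in for the scheduler's:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t rq_lock = PTHREAD_MUTEX_INITIALIZER;

static int set_policy(int policy)
{
        int retval;

        pthread_mutex_lock(&rq_lock);

        if (policy < 0) {               /* first failure mode */
                retval = -EINVAL;
                goto unlock;
        }
        if (policy > 99) {              /* second failure mode */
                retval = -EPERM;
                goto unlock;
        }

        retval = 0;                     /* success path */
unlock:
        /* Single exit: the only place the lock is released. */
        pthread_mutex_unlock(&rq_lock);
        return retval;
}

int main(void)
{
        printf("%d %d %d\n", set_policy(-1), set_policy(200), set_policy(5));
        return 0;
}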
From patchwork Mon Sep 3 14:28:00 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 145796
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org, Juri Lelli
Subject: [PATCH v5 4/5] sched/core: Prevent race condition between cpuset and __sched_setscheduler()
Date: Mon, 3 Sep 2018 16:28:00 +0200
Message-Id: <20180903142801.20046-5-juri.lelli@redhat.com>
In-Reply-To: <20180903142801.20046-1-juri.lelli@redhat.com>
References: <20180903142801.20046-1-juri.lelli@redhat.com>

From: Mathieu Poirier

No synchronisation mechanism exists between the cpuset subsystem and
calls to function __sched_setscheduler(). As such, it is possible that
new root domains are created on the cpuset side while a deadline
acceptance test is carried out in __sched_setscheduler(), leading to a
potential oversell of CPU bandwidth.

Grab callback_lock from the core scheduler, so as to prevent situations
such as the one described above from happening.

Signed-off-by: Mathieu Poirier
Signed-off-by: Juri Lelli
---
v4->v5: grab callback_lock instead of cpuset_mutex, as callback_lock is
enough to get read-only access to cpusets [1] and it can be easily
converted to be a raw_spinlock (done in previous - new - patch).

[1] https://elixir.bootlin.com/linux/latest/source/kernel/cgroup/cpuset.c#L275
---
 include/linux/cpuset.h |  6 ++++++
 kernel/cgroup/cpuset.c | 18 ++++++++++++++++++
 kernel/sched/core.c    | 10 ++++++++++
 3 files changed, 34 insertions(+)

--
2.17.1

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 934633a05d20..8e5a8dd0622b 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -55,6 +55,8 @@ extern void cpuset_init_smp(void);
 extern void cpuset_force_rebuild(void);
 extern void cpuset_update_active_cpus(void);
 extern void cpuset_wait_for_hotplug(void);
+extern void cpuset_read_only_lock(void);
+extern void cpuset_read_only_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern void cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -176,6 +178,10 @@ static inline void cpuset_update_active_cpus(void)
 
 static inline void cpuset_wait_for_hotplug(void) { }
 
+static inline void cpuset_read_only_lock(void) { }
+
+static inline void cpuset_read_only_unlock(void) { }
+
 static inline void cpuset_cpus_allowed(struct task_struct *p,
                                        struct cpumask *mask)
 {
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 5b43f482fa0f..8dc26005bb1e 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -2410,6 +2410,24 @@ void __init cpuset_init_smp(void)
         BUG_ON(!cpuset_migrate_mm_wq);
 }
 
+/**
+ * cpuset_read_only_lock - Grab the callback_lock from another subsystem
+ *
+ * Description: Gives the holder read-only access to cpusets.
+ */
+void cpuset_read_only_lock(void)
+{
+        raw_spin_lock(&callback_lock);
+}
+
+/**
+ * cpuset_read_only_unlock - Release the callback_lock from another subsystem
+ */
+void cpuset_read_only_unlock(void)
+{
+        raw_spin_unlock(&callback_lock);
+}
+
 /**
  * cpuset_cpus_allowed - return cpus_allowed mask from a tasks cpuset.
  * @tsk: pointer to task_struct from which to obtain cpuset->cpus_allowed.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 22f5622cba69..ac11ee599968 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4228,6 +4228,13 @@ static int __sched_setscheduler(struct task_struct *p,
         rq = task_rq_lock(p, &rf);
         update_rq_clock(rq);
 
+        /*
+         * Make sure we don't race with the cpuset subsystem where root
+         * domains can be rebuilt or modified while operations like DL
+         * admission checks are carried out.
+         */
+        cpuset_read_only_lock();
+
         /*
          * Changing the policy of the stop threads its a very bad idea:
          */
@@ -4289,6 +4296,7 @@ static int __sched_setscheduler(struct task_struct *p,
         /* Re-check policy now with rq lock held: */
         if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
                 policy = oldpolicy = -1;
+                cpuset_read_only_unlock();
                 task_rq_unlock(rq, p, &rf);
                 goto recheck;
         }
@@ -4346,6 +4354,7 @@ static int __sched_setscheduler(struct task_struct *p,
 
         /* Avoid rq from going away on us: */
         preempt_disable();
+        cpuset_read_only_unlock();
         task_rq_unlock(rq, p, &rf);
 
         if (pi)
@@ -4358,6 +4367,7 @@ static int __sched_setscheduler(struct task_struct *p,
         return 0;
 
 unlock:
+        cpuset_read_only_unlock();
         task_rq_unlock(rq, p, &rf);
         return retval;
 }
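The problem being fixed is a textbook check-then-act race: the deadline
admission test reads root-domain bandwidth state that the cpuset side can
rebuild concurrently, so the check can pass against a root domain that no
longer exists in that form. The sketch below models why the admission
check and the rebuild must run under a common lock; the mutex and counters
are userspace stand-ins for callback_lock and the dl_bw accounting, not
the kernel's types.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t cpuset_lock = PTHREAD_MUTEX_INITIALIZER;
static long capacity = 100;   /* bandwidth the root domain can grant */
static long total_bw;         /* bandwidth already admitted */

/* Admission test: without the lock, 'capacity' could change between
 * the check and the accounting, overselling the domain. */
static bool admit(long bw)
{
        bool ok = false;

        pthread_mutex_lock(&cpuset_lock);
        if (total_bw + bw <= capacity) {
                total_bw += bw;
                ok = true;
        }
        pthread_mutex_unlock(&cpuset_lock);
        return ok;
}

/* Root-domain rebuild: must be serialized against admit(). */
static void rebuild(long new_capacity)
{
        pthread_mutex_lock(&cpuset_lock);
        capacity = new_capacity;
        total_bw = 0;          /* accounting restarts from scratch */
        pthread_mutex_unlock(&cpuset_lock);
}

int main(void)
{
        printf("%d\n", admit(60));   /* granted */
        rebuild(50);
        printf("%d\n", admit(60));   /* now refused */
        return 0;
}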
From patchwork Mon Sep 3 14:28:01 2018
X-Patchwork-Submitter: Juri Lelli
X-Patchwork-Id: 145795
From: Juri Lelli
To: peterz@infradead.org, mingo@redhat.com, rostedt@goodmis.org
Cc: linux-kernel@vger.kernel.org, luca.abeni@santannapisa.it,
    claudio@evidence.eu.com, tommaso.cucinotta@santannapisa.it,
    bristot@redhat.com, mathieu.poirier@linaro.org, lizefan@huawei.com,
    cgroups@vger.kernel.org
Subject: [PATCH v5 5/5] cpuset: Rebuild root domain deadline accounting information
Date: Mon, 3 Sep 2018 16:28:01 +0200
Message-Id: <20180903142801.20046-6-juri.lelli@redhat.com>
In-Reply-To: <20180903142801.20046-1-juri.lelli@redhat.com>
References: <20180903142801.20046-1-juri.lelli@redhat.com>

From: Mathieu Poirier

When the topology of root domains is modified by CPUset or CPUhotplug
operations, information about the current deadline bandwidth held in the
root domain is lost.

This patch addresses the issue by recalculating the lost deadline
bandwidth information, cycling through the deadline tasks held in
CPUsets and adding their current load to the root domain they are
associated with.

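In miniature, the recomputation amounts to zeroing each root domain's
accumulated bandwidth and then walking the deadline tasks, adding each
task's bandwidth back to the domain it currently belongs to -- the same
clear-then-re-add shape as dl_clear_root_domain() plus
dl_add_task_root_domain() in the patch below. A toy model of that shape;
the structs are illustrative, not the kernel's:

#include <stdio.h>

/* Illustrative model: each deadline task carries a bandwidth share,
 * and a root domain accumulates the shares of the tasks it hosts. */
struct task { long dl_bw; int rd; };
struct rdom { long total_bw; };

/* Clear every domain's accounting, then walk the tasks and add each
 * one's bandwidth back to the domain it now belongs to. */
static void rebuild_accounting(struct rdom *rds, int nrds,
                               const struct task *tasks, int ntasks)
{
        for (int i = 0; i < nrds; i++)
                rds[i].total_bw = 0;
        for (int i = 0; i < ntasks; i++)
                rds[tasks[i].rd].total_bw += tasks[i].dl_bw;
}

int main(void)
{
        struct rdom rds[2] = { {123}, {456} };   /* stale totals */
        struct task tasks[] = { {30, 0}, {20, 0}, {40, 1} };

        rebuild_accounting(rds, 2, tasks, 3);
        printf("rd0=%ld rd1=%ld\n", rds[0].total_bw, rds[1].total_bw);
        return 0;
}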
Signed-off-by: Mathieu Poirier
---
 include/linux/sched.h          |  5 +++
 include/linux/sched/deadline.h |  8 +++++
 kernel/cgroup/cpuset.c         | 63 +++++++++++++++++++++++++++++++++-
 kernel/sched/deadline.c        | 31 +++++++++++++++++
 kernel/sched/sched.h           |  3 --
 kernel/sched/topology.c        | 13 ++++++-
 6 files changed, 118 insertions(+), 5 deletions(-)

--
2.17.1

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e0f4f56c9310..2bf3edc8658d 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -279,6 +279,11 @@ struct vtime {
         u64 gtime;
 };
 
+#ifdef CONFIG_SMP
+extern struct root_domain def_root_domain;
+extern struct mutex sched_domains_mutex;
+#endif
+
 struct sched_info {
 #ifdef CONFIG_SCHED_INFO
         /* Cumulative counters: */
diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
index 0cb034331cbb..1aff00b65f3c 100644
--- a/include/linux/sched/deadline.h
+++ b/include/linux/sched/deadline.h
@@ -24,3 +24,11 @@ static inline bool dl_time_before(u64 a, u64 b)
 {
         return (s64)(a - b) < 0;
 }
+
+#ifdef CONFIG_SMP
+
+struct root_domain;
+extern void dl_add_task_root_domain(struct task_struct *p);
+extern void dl_clear_root_domain(struct root_domain *rd);
+
+#endif /* CONFIG_SMP */
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 8dc26005bb1e..e5d782c5b191 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -44,6 +44,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -813,6 +814,66 @@ static int generate_sched_domains(cpumask_var_t **domains,
         return ndoms;
 }
 
+static void update_tasks_root_domain(struct cpuset *cs)
+{
+        struct css_task_iter it;
+        struct task_struct *task;
+
+        css_task_iter_start(&cs->css, 0, &it);
+
+        while ((task = css_task_iter_next(&it)))
+                dl_add_task_root_domain(task);
+
+        css_task_iter_end(&it);
+}
+
+/*
+ * Called with cpuset_mutex held (rebuild_sched_domains())
+ * Called with hotplug lock held (rebuild_sched_domains_locked())
+ * Called with sched_domains_mutex held (partition_and_rebuild_domains())
+ */
+static void rebuild_root_domains(void)
+{
+        struct cpuset *cs = NULL;
+        struct cgroup_subsys_state *pos_css;
+
+        rcu_read_lock();
+
+        /*
+         * Clear default root domain DL accounting, it will be computed again
+         * if a task belongs to it.
+         */
+        dl_clear_root_domain(&def_root_domain);
+
+        cpuset_for_each_descendant_pre(cs, pos_css, &top_cpuset) {
+
+                if (cpumask_empty(cs->effective_cpus)) {
+                        pos_css = css_rightmost_descendant(pos_css);
+                        continue;
+                }
+
+                css_get(&cs->css);
+
+                rcu_read_unlock();
+
+                update_tasks_root_domain(cs);
+
+                rcu_read_lock();
+                css_put(&cs->css);
+        }
+        rcu_read_unlock();
+}
+
+static void
+partition_and_rebuild_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+                                    struct sched_domain_attr *dattr_new)
+{
+        mutex_lock(&sched_domains_mutex);
+        partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+        rebuild_root_domains();
+        mutex_unlock(&sched_domains_mutex);
+}
+
 /*
  * Rebuild scheduler domains.
  *
@@ -845,7 +906,7 @@ static void rebuild_sched_domains_locked(void)
         ndoms = generate_sched_domains(&doms, &attr);
 
         /* Have scheduler rebuild the domains */
-        partition_sched_domains(ndoms, doms, attr);
+        partition_and_rebuild_sched_domains(ndoms, doms, attr);
 out:
         put_online_cpus();
 }
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 997ea7b839fa..5c5938acf89a 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2285,6 +2285,37 @@ void __init init_sched_dl_class(void)
                                         GFP_KERNEL, cpu_to_node(i));
 }
 
+void dl_add_task_root_domain(struct task_struct *p)
+{
+        unsigned long flags;
+        struct rq_flags rf;
+        struct rq *rq;
+        struct dl_bw *dl_b;
+
+        rq = task_rq_lock(p, &rf);
+        if (!dl_task(p))
+                goto unlock;
+
+        dl_b = &rq->rd->dl_bw;
+        raw_spin_lock_irqsave(&dl_b->lock, flags);
+
+        dl_b->total_bw += p->dl.dl_bw;
+
+        raw_spin_unlock_irqrestore(&dl_b->lock, flags);
+
+unlock:
+        task_rq_unlock(rq, p, &rf);
+}
+
+void dl_clear_root_domain(struct root_domain *rd)
+{
+        unsigned long flags;
+
+        raw_spin_lock_irqsave(&rd->dl_bw.lock, flags);
+        rd->dl_bw.total_bw = 0;
+        raw_spin_unlock_irqrestore(&rd->dl_bw.lock, flags);
+}
+
 #endif /* CONFIG_SMP */
 
 static void switched_from_dl(struct rq *rq, struct task_struct *p)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4a2e8cae63c4..84215d464dd1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -750,9 +750,6 @@ struct root_domain {
         unsigned long max_cpu_capacity;
 };
 
-extern struct root_domain def_root_domain;
-extern struct mutex sched_domains_mutex;
-
 extern void init_defrootdomain(void);
 extern int sched_init_domains(const struct cpumask *cpu_map);
 extern void rq_attach_root(struct rq *rq, struct root_domain *rd);
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index fb7ae691cb82..08128bdf3944 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -1883,8 +1883,19 @@ void partition_sched_domains_locked(int ndoms_new, cpumask_var_t doms_new[],
         for (i = 0; i < ndoms_cur; i++) {
                 for (j = 0; j < n && !new_topology; j++) {
                         if (cpumask_equal(doms_cur[i], doms_new[j])
-                            && dattrs_equal(dattr_cur, i, dattr_new, j))
+                            && dattrs_equal(dattr_cur, i, dattr_new, j)) {
+                                struct root_domain *rd;
+
+                                /*
+                                 * This domain won't be destroyed and as such
+                                 * its dl_bw->total_bw needs to be cleared. It
+                                 * will be recomputed in function
+                                 * update_tasks_root_domain().
+                                 */
+                                rd = cpu_rq(cpumask_any(doms_cur[i]))->rd;
+                                dl_clear_root_domain(rd);
                                 goto match1;
+                        }
                 }
                 /* No match - a current sched domain not in new doms_new[] */
                 detach_destroy_domains(doms_cur[i]);