Message ID | 20190719140000.31694-3-juri.lelli@redhat.com
State      | New
Series     | [v9,1/8] sched/topology: Adding function partition_sched_domains_locked()
On 7/19/19 3:59 PM, Juri Lelli wrote:
> From: Mathieu Poirier <mathieu.poirier@linaro.org>

[...]

> @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
>  		 */
>  		if (!cpumask_subset(span, &p->cpus_allowed) ||

This doesn't apply cleanly on v5.3-rc1 anymore due to commit
3bd3706251ee ("sched/core: Provide a pointer to the valid CPU mask").

>  		    rq->rd->dl_bw.bw == 0) {
> -			task_rq_unlock(rq, p, &rf);
> -			return -EPERM;
> +			retval = -EPERM;
> +			goto unlock;
>  		}
>  	}
>  #endif

[...]
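For context: commit 3bd3706251ee converted readers of p->cpus_allowed over
to the p->cpus_ptr pointer, so the context line this hunk expects no longer
exists verbatim on v5.3-rc1. Rebased on top of that commit, the check would
read roughly as below (a sketch, not necessarily the exact upstream
resolution of the conflict):

	if (!cpumask_subset(span, p->cpus_ptr) ||	/* affinity must cover the whole root domain */
	    rq->rd->dl_bw.bw == 0) {			/* and DL bandwidth must be available */
		retval = -EPERM;
		goto unlock;
	}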
On 22/07/19 10:21, Dietmar Eggemann wrote:
> On 7/19/19 3:59 PM, Juri Lelli wrote:
> > From: Mathieu Poirier <mathieu.poirier@linaro.org>
> 
> [...]
> 
> > @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
> >  		 */
> >  		if (!cpumask_subset(span, &p->cpus_allowed) ||
> 
> This doesn't apply cleanly on v5.3-rc1 anymore due to commit
> 3bd3706251ee ("sched/core: Provide a pointer to the valid CPU mask").
> 
> >  		    rq->rd->dl_bw.bw == 0) {
> > -			task_rq_unlock(rq, p, &rf);
> > -			return -EPERM;
> > +			retval = -EPERM;
> > +			goto unlock;
> >  		}
> >  	}
> >  #endif

Thanks for reporting. The set is based on cgroup/for-next (as of last
week), though. I can of course rebase on tip/sched/core or mainline if
needed.

Best,

Juri
On 7/22/19 10:32 AM, Juri Lelli wrote:
> On 22/07/19 10:21, Dietmar Eggemann wrote:
>> On 7/19/19 3:59 PM, Juri Lelli wrote:
>>> From: Mathieu Poirier <mathieu.poirier@linaro.org>
>>
>> [...]
>>
>>> @@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
>>>  		 */
>>>  		if (!cpumask_subset(span, &p->cpus_allowed) ||
>>
>> This doesn't apply cleanly on v5.3-rc1 anymore due to commit
>> 3bd3706251ee ("sched/core: Provide a pointer to the valid CPU mask").
>>
>>>  		    rq->rd->dl_bw.bw == 0) {
>>> -			task_rq_unlock(rq, p, &rf);
>>> -			return -EPERM;
>>> +			retval = -EPERM;
>>> +			goto unlock;
>>>  		}
>>>  	}
>>>  #endif
> 
> Thanks for reporting. The set is based on cgroup/for-next (as of last
> week), though. I can of course rebase on tip/sched/core or mainline if
> needed.

Not sure, there is another little issue on 3/8 since uclamp is in
v5.3-rc1 as well, commit 69842cba9ace8 ("sched/uclamp: Add CPU's clamp
buckets refcounting").
On Mon, Jul 22, 2019 at 10:32:14AM +0200, Juri Lelli wrote:
> Thanks for reporting. The set is based on cgroup/for-next (as of last
> week), though. I can of course rebase on tip/sched/core or mainline if
> needed.

TJ; I would like to take these patches through the scheduler tree if you
don't mind. Afaict there's no real conflict vs cgroup/for-next (I
applied the patches and then did a pull of cgroup/for-next which
finished without complaints).
On Mon, Jul 22, 2019 at 11:08:17AM +0200, Dietmar Eggemann wrote:
> Not sure, there is another little issue on 3/8 since uclamp is in
> v5.3-rc1 as well, commit 69842cba9ace8 ("sched/uclamp: Add CPU's clamp
> buckets refcounting").

Also, 8/8, but all conflicts are trivial and I've fixed them up.
On Tue, Jul 23, 2019 at 12:31:31PM +0200, Peter Zijlstra wrote:
> On Mon, Jul 22, 2019 at 10:32:14AM +0200, Juri Lelli wrote:
> 
> > Thanks for reporting. The set is based on cgroup/for-next (as of last
> > week), though. I can of course rebase on tip/sched/core or mainline if
> > needed.
> 
> TJ; I would like to take these patches through the scheduler tree if you
> don't mind. Afaict there's no real conflict vs cgroup/for-next (I
> applied the patches and then did a pull of cgroup/for-next which
> finished without complaints).

Yeah, for sure, please go ahead.

Thanks.

-- 
tejun
On 23/07/19 06:11, Tejun Heo wrote:
> On Tue, Jul 23, 2019 at 12:31:31PM +0200, Peter Zijlstra wrote:
> > On Mon, Jul 22, 2019 at 10:32:14AM +0200, Juri Lelli wrote:
> > 
> > > Thanks for reporting. The set is based on cgroup/for-next (as of last
> > > week), though. I can of course rebase on tip/sched/core or mainline if
> > > needed.
> > 
> > TJ; I would like to take these patches through the scheduler tree if you
> > don't mind. Afaict there's no real conflict vs cgroup/for-next (I
> > applied the patches and then did a pull of cgroup/for-next which
> > finished without complaints).
> 
> Yeah, for sure, please go ahead.
> 
> Thanks.

Thanks!
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 874c427742a9..acd6a9fe85bc 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4222,8 +4222,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * Changing the policy of the stop threads its a very bad idea:
 	 */
 	if (p == rq->stop) {
-		task_rq_unlock(rq, p, &rf);
-		return -EINVAL;
+		retval = -EINVAL;
+		goto unlock;
 	}
 
 	/*
@@ -4239,8 +4239,8 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
-		task_rq_unlock(rq, p, &rf);
-		return 0;
+		retval = 0;
+		goto unlock;
 	}
 change:
 
@@ -4253,8 +4253,8 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (rt_bandwidth_enabled() && rt_policy(policy) &&
 				task_group(p)->rt_bandwidth.rt_runtime == 0 &&
 				!task_group_is_autogroup(task_group(p))) {
-			task_rq_unlock(rq, p, &rf);
-			return -EPERM;
+			retval = -EPERM;
+			goto unlock;
 		}
 #endif
 #ifdef CONFIG_SMP
@@ -4269,8 +4269,8 @@ static int __sched_setscheduler(struct task_struct *p,
 		 */
 		if (!cpumask_subset(span, &p->cpus_allowed) ||
 		    rq->rd->dl_bw.bw == 0) {
-			task_rq_unlock(rq, p, &rf);
-			return -EPERM;
+			retval = -EPERM;
+			goto unlock;
 		}
 	}
 #endif
@@ -4289,8 +4289,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	 * is available.
 	 */
 	if ((dl_policy(policy) || dl_task(p)) && sched_dl_overflow(p, policy, attr)) {
-		task_rq_unlock(rq, p, &rf);
-		return -EBUSY;
+		retval = -EBUSY;
+		goto unlock;
 	}
 
 	p->sched_reset_on_fork = reset_on_fork;
@@ -4346,6 +4346,10 @@ static int __sched_setscheduler(struct task_struct *p,
 	preempt_enable();
 
 	return 0;
+
+unlock:
+	task_rq_unlock(rq, p, &rf);
+	return retval;
 }
 
 static int _sched_setscheduler(struct task_struct *p, int policy,
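The patch is a mechanical conversion of __sched_setscheduler()'s early-return
error paths to the usual centralized-unlock idiom: each failure branch sets
retval and jumps to the single unlock label, so task_rq_unlock() is called in
exactly one place. Below is a minimal standalone sketch of the same idiom,
using pthreads and hypothetical names rather than the kernel code above:

#include <pthread.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static int state;

/* Validate the request, then update shared state under the lock. */
int checked_update(int value)
{
	int retval;

	pthread_mutex_lock(&state_lock);

	if (value < 0) {
		retval = -1;	/* invalid input: bail out */
		goto unlock;
	}

	if (value == state) {
		retval = 0;	/* nothing to change: bail out early */
		goto unlock;
	}

	state = value;		/* the actual update */
	retval = 0;

unlock:
	pthread_mutex_unlock(&state_lock);	/* single unlock site */
	return retval;
}

With one unlock site, a future bail-out condition only needs to set the
return value and jump, and cannot forget to drop the lock; with five early
returns already in __sched_setscheduler(), that duplication is exactly what
the patch removes.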