| Message ID | 1304256126-26015-31-git-send-email-paulmck@linux.vnet.ibm.com |
|---|---|
| State | New |
On Sun, 2011-05-01 at 06:21 -0700, Paul E. McKenney wrote:
> From: Paul E. McKenney <paul.mckenney@linaro.org>
>
> Although rcu_yield() dropped from real-time to normal priority, there
> is always the possibility that the competing tasks have been niced.
> So nice to 19 in rcu_yield() to help ensure that other tasks have a
> better chance of running.

But.. that just prolongs the pain of overhead you _have_ to eat, no?  In
a brief surge, fine, you can spread the cost out.. but how do you know
when it's ok to yield?

(When maintenance threads worrying about their CPU usage is worrisome.)

> Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
>  kernel/rcutree.c | 1 +
>  1 files changed, 1 insertions(+), 0 deletions(-)
>
> diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> index 3295c7b..963b4b1 100644
> --- a/kernel/rcutree.c
> +++ b/kernel/rcutree.c
> @@ -1561,6 +1561,7 @@ static void rcu_yield(void (*f)(unsigned long), unsigned long arg)
>  	mod_timer(&yield_timer, jiffies + 2);
>  	sp.sched_priority = 0;
>  	sched_setscheduler_nocheck(current, SCHED_NORMAL, &sp);
> +	set_user_nice(current, 19);
>  	schedule();
>  	sp.sched_priority = RCU_KTHREAD_PRIO;
>  	sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
On Sun, May 01, 2011 at 07:51:04PM +0200, Mike Galbraith wrote:
> On Sun, 2011-05-01 at 06:21 -0700, Paul E. McKenney wrote:
> > From: Paul E. McKenney <paul.mckenney@linaro.org>
> >
> > Although rcu_yield() dropped from real-time to normal priority, there
> > is always the possibility that the competing tasks have been niced.
> > So nice to 19 in rcu_yield() to help ensure that other tasks have a
> > better chance of running.
>
> But.. that just prolongs the pain of overhead you _have_ to eat, no?  In
> a brief surge, fine, you can spread the cost out.. but how do you know
> when it's ok to yield?

I modeled this code on the existing code in ksoftirqd.  But yes, this is
a heuristic.  I do believe that it is quite robust, but time will tell.

> (When maintenance threads worrying about their CPU usage is worrisome.)

Indeed.  But I am not introducing this, just moving the existing checking
from ksoftirqd.  So I believe that I am OK here.

							Thanx, Paul

> > Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
> > Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> > ---
> >  kernel/rcutree.c | 1 +
> >  1 files changed, 1 insertions(+), 0 deletions(-)
> >
> > diff --git a/kernel/rcutree.c b/kernel/rcutree.c
> > index 3295c7b..963b4b1 100644
> > --- a/kernel/rcutree.c
> > +++ b/kernel/rcutree.c
> > @@ -1561,6 +1561,7 @@ static void rcu_yield(void (*f)(unsigned long), unsigned long arg)
> >  	mod_timer(&yield_timer, jiffies + 2);
> >  	sp.sched_priority = 0;
> >  	sched_setscheduler_nocheck(current, SCHED_NORMAL, &sp);
> > +	set_user_nice(current, 19);
> >  	schedule();
> >  	sp.sched_priority = RCU_KTHREAD_PRIO;
> >  	sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
On Mon, 2011-05-02 at 01:11 -0700, Paul E. McKenney wrote:
> On Sun, May 01, 2011 at 07:51:04PM +0200, Mike Galbraith wrote:
> > On Sun, 2011-05-01 at 06:21 -0700, Paul E. McKenney wrote:
> > > From: Paul E. McKenney <paul.mckenney@linaro.org>
> > >
> > > Although rcu_yield() dropped from real-time to normal priority, there
> > > is always the possibility that the competing tasks have been niced.
> > > So nice to 19 in rcu_yield() to help ensure that other tasks have a
> > > better chance of running.
> >
> > But.. that just prolongs the pain of overhead you _have_ to eat, no?  In
> > a brief surge, fine, you can spread the cost out.. but how do you know
> > when it's ok to yield?
>
> I modeled this code on the existing code in ksoftirqd.  But yes, this is
> a heuristic.  I do believe that it is quite robust, but time will tell.

(It probably is fine, but when I see 'yield', alarms and sirens go off)
diff --git a/kernel/rcutree.c b/kernel/rcutree.c
index 3295c7b..963b4b1 100644
--- a/kernel/rcutree.c
+++ b/kernel/rcutree.c
@@ -1561,6 +1561,7 @@ static void rcu_yield(void (*f)(unsigned long), unsigned long arg)
 	mod_timer(&yield_timer, jiffies + 2);
 	sp.sched_priority = 0;
 	sched_setscheduler_nocheck(current, SCHED_NORMAL, &sp);
+	set_user_nice(current, 19);
 	schedule();
 	sp.sched_priority = RCU_KTHREAD_PRIO;
 	sched_setscheduler_nocheck(current, SCHED_FIFO, &sp);
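
For reference, a minimal userspace sketch of the priority dance this hunk
implements: drop out of SCHED_FIFO into the normal class, nice down to 19
so that even niced competing tasks get the CPU, yield, then restore the
real-time priority.  This is an illustration only, not the kernel code: it
substitutes sched_setscheduler()/setpriority()/sched_yield() for the
in-kernel sched_setscheduler_nocheck()/set_user_nice()/schedule(), and
KTHREAD_PRIO is an assumed stand-in for RCU_KTHREAD_PRIO.

/*
 * Userspace analogue of the rcu_yield() hunk above: leave the real-time
 * class, nice down to 19 so even niced competitors can run, yield the
 * CPU, then switch back to SCHED_FIFO.
 */
#include <sched.h>
#include <stdio.h>
#include <sys/resource.h>

#define KTHREAD_PRIO 1	/* assumed; the real RCU_KTHREAD_PRIO is kernel-internal */

static void yield_to_others(void)
{
	struct sched_param sp;

	/* Drop to the normal scheduling class (priority must be 0 there). */
	sp.sched_priority = 0;
	if (sched_setscheduler(0, SCHED_OTHER, &sp))
		perror("sched_setscheduler(SCHED_OTHER)");

	/* Nice down so niced competitors still win the CPU. */
	if (setpriority(PRIO_PROCESS, 0, 19))
		perror("setpriority");

	/* The kernel code calls schedule() here. */
	sched_yield();

	/*
	 * Back to real-time priority.  As in the hunk, the nice value is
	 * left at 19; it is irrelevant under SCHED_FIFO.
	 */
	sp.sched_priority = KTHREAD_PRIO;
	if (sched_setscheduler(0, SCHED_FIFO, &sp))
		perror("sched_setscheduler(SCHED_FIFO)");
}

int main(void)
{
	yield_to_others();
	return 0;
}

Run it as root or with CAP_SYS_NICE; otherwise the switch back to
SCHED_FIFO fails with EPERM.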