[RFC,RT,v2,0/2] Add PINNED_HARD mode to hrtimers

Message ID 20210616071705.166658-1-juri.lelli@redhat.com

Message

Juri Lelli June 16, 2021, 7:17 a.m. UTC
Hi,

I rebased an RFC series I already proposed a while ago [1] and I'd like
people to consider it again for inclusion.

When running cyclictest on isolated CPUs with timer_migration enabled,
the following is (of course) happening, where CPU0 is one of the
housekeeping CPUs and CPU2 is isolated:

     <idle>-0     [000] ... hrtimer_cancel:       hrtimer=0xffffb4a74be7fe70
     <idle>-0     [000] ... hrtimer_expire_entry: hrtimer=0xffffb4a74be7fe70 now=144805770984 function=hrtimer_wakeup/0x0
     <idle>-0     [000] ... sched_wakeup:         cyclictest:1171 [4] success=1 CPU:002
     <idle>-0     [000] ... hrtimer_expire_exit:  hrtimer=0xffffb4a74be7fe70
     <idle>-0     [002] ... sched_switch:         swapper/2:0 [120] R ==> cyclictest:1171 [4]
 cyclictest-1171  [002] ... hrtimer_init:         hrtimer=0xffffb4a74be7fe70 clockid=CLOCK_MONOTONIC mode=0x8
 cyclictest-1171  [002] ... hrtimer_start:        hrtimer=0xffffb4a74be7fe70 function=hrtimer_wakeup/0x0 ...
 cyclictest-1171  [002] ... sched_switch:         cyclictest:1171 [4] S ==> swapper/2:0 [120]

While cyclictest arms the hrtimer on isolated CPU2 (by means of
clock_nanosleep), the hrtimer then fires on CPU0. This is due to the
fact that switch_hrtimer_base(), called at hrtimer enqueue time, prefers
to enqueue the timer on a housekeeping !idle CPU if the timer is not
pinned, as per the timer_migration feature.
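
For reference, the relevant logic looks roughly like this (paraphrased
from kernel/time/hrtimer.c; exact details vary across kernel versions):

    static inline struct hrtimer_clock_base *
    get_target_base(struct hrtimer_clock_base *base, int pinned)
    {
        /*
         * With timer_migration enabled and the timer not pinned,
         * ask the nohz heuristic for a "better" target CPU ...
         */
        if (static_branch_likely(&timers_migration_enabled) && !pinned)
            return &per_cpu(hrtimer_bases,
                    get_nohz_timer_target()).clock_base[base->index];

        /* ... otherwise the timer stays on the local CPU's base. */
        return base;
    }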

The problem with this is that we end up measuring wakeup latencies
across isolated and !isolated domains, which defeats the purpose of
isolating CPUs in the first place, while keeping timer_migration enabled
is still required for certain workloads that don't use timers and don't
ever want to be interrupted.

Since PREEMPT_RT already forces HARD mode for hrtimers armed by tasks
running with RT policies, it seems to make sense to also force PINNED
mode under the same conditions.
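
The core of the change is a small extension of the existing PREEMPT_RT
special case in __hrtimer_init_sleeper(), roughly:

    if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
        if (task_is_realtime(current) && !(mode & HRTIMER_MODE_SOFT))
            mode |= HRTIMER_MODE_PINNED_HARD;
    }

with HRTIMER_MODE_PINNED_HARD being the new
HRTIMER_MODE_PINNED | HRTIMER_MODE_HARD combination added by patch 1.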

This set implements the behavior, achieving something like the
following:

     <idle>-0     [002] ... hrtimer_cancel:       hrtimer=0xffffafbacc19fe78
     <idle>-0     [002] ... hrtimer_expire_entry: hrtimer=0xffffafbacc19fe78 now=104335855898 function=hrtimer_wakeup/0x0
     <idle>-0     [002] ... sched_wakeup:         cyclictest:1165 [4] success=1 CPU:002
     <idle>-0     [002] ... hrtimer_expire_exit:  hrtimer=0xffffafbacc19fe78
     <idle>-0     [002] ... sched_switch:         swapper/2:0 [120] R ==> cyclictest:1165 [4]
 cyclictest-1165  [002] ... hrtimer_init:         hrtimer=0xffffafbacc19fe78 clockid=CLOCK_MONOTONIC mode=0xa
 cyclictest-1165  [002] ... hrtimer_start:        hrtimer=0xffffafbacc19fe78 function=hrtimer_wakeup/0x0 ...
 cyclictest-1165  [002] ... sched_switch:         cyclictest:1165 [4] S ==> swapper/2:0 [120]

Sebastian didn't seem to be against the proposed changes, but I didn't
follow up back then because it looked like we could meet workload
requirements at the time w/o this set. Things have now changed: the mix
of the two types of workloads - interrupt driven and always running - has
become very relevant, and we need to accommodate both types on the same
system setup.

Does this still make sense or do you suggest alternative approaches?

Thanks!

- Juri

1 - https://lore.kernel.org/lkml/20190214133716.10187-1-juri.lelli@redhat.com/

Juri Lelli (2):
  time/hrtimer: Add PINNED_HARD mode for realtime hrtimers
  time/hrtimer: Embed hrtimer mode into hrtimer_sleeper

 include/linux/hrtimer.h |  3 +++
 kernel/time/hrtimer.c   | 13 +++++++------
 2 files changed, 10 insertions(+), 6 deletions(-)

Comments

Thomas Gleixner June 18, 2021, 11:35 p.m. UTC | #1
Juri,

On Wed, Jun 16 2021 at 09:17, Juri Lelli wrote:
> While running cyclictest on isolated CPUs with timer_migration enabled,
> I noticed the following behavior, where CPU0 is one of the housekeeping
> CPUs and CPU2 is isolated:
>
>      <idle>-0     [000] ... hrtimer_cancel:       hrtimer=0xffffb4a74be7fe70
>      <idle>-0     [000] ... hrtimer_expire_entry: hrtimer=0xffffb4a74be7fe70 now=144805770984 function=hrtimer_wakeup/0x0
>      <idle>-0     [000] ... sched_wakeup:         cyclictest:1171 [4] success=1 CPU:002
>      <idle>-0     [000] ... hrtimer_expire_exit:  hrtimer=0xffffb4a74be7fe70
>      <idle>-0     [002] ... sched_switch:         swapper/2:0 [120] R ==> cyclictest:1171 [4]
>  cyclictest-1171  [002] ... hrtimer_init:         hrtimer=0xffffb4a74be7fe70 clockid=CLOCK_MONOTONIC mode=0x8
>  cyclictest-1171  [002] ... hrtimer_start:        hrtimer=0xffffb4a74be7fe70 function=hrtimer_wakeup/0x0 ...
>  cyclictest-1171  [002] ... sched_switch:         cyclictest:1171 [4] S ==> swapper/2:0 [120]
>
> While cyclictest was arming the hrtimer while running on isolated CPU2
> (by means of clock_nanosleep), the hrtimer was then firing on CPU0. This
> is due to the fact that switch_hrtimer_base(), called at hrtimer enqueue
> time, will prefer to enqueue the timer on a housekeeping !idle CPU, if
> the timer is not pinned and timer_migration is enabled.
>
> The problem with this is that we are measuring wake up latencies across
> isolated and !isolated domains, which is against the purpose of
> configuring the former.
>
> Since PREEMPT_RT already forces HARD mode for hrtimers armed by tasks
> running with RT policies, it makes sense to also force PINNED mode under
> the same conditions.
>
> This patch implements this behavior, achieving something like the

 git grep 'This patch' Documentation/process

Also look at the recommended usage of 'We, I' while at it.

> @@ -55,6 +55,8 @@ enum hrtimer_mode {
>  	HRTIMER_MODE_ABS_HARD	= HRTIMER_MODE_ABS | HRTIMER_MODE_HARD,
>  	HRTIMER_MODE_REL_HARD	= HRTIMER_MODE_REL | HRTIMER_MODE_HARD,
>  
> +	HRTIMER_MODE_PINNED_HARD = HRTIMER_MODE_PINNED | HRTIMER_MODE_HARD,
> +
>  	HRTIMER_MODE_ABS_PINNED_HARD = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_HARD,
>  	HRTIMER_MODE_REL_PINNED_HARD = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_HARD,
>  };
> diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
> index 3fa18a01f5b2..f64954d5c8f8 100644
> --- a/kernel/time/hrtimer.c
> +++ b/kernel/time/hrtimer.c
> @@ -1842,7 +1842,7 @@ static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
>  	 */
>  	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
>  		if (task_is_realtime(current) && !(mode & HRTIMER_MODE_SOFT))
> -			mode |= HRTIMER_MODE_HARD;
> +			mode |= HRTIMER_MODE_PINNED_HARD;
>  	}
>  
>  	__hrtimer_init(&sl->timer, clock_id, mode);

It makes sense to some extent, but in fact you are curing the symptom.

The root cause is that all of this is semantically ill defined.

The underlying problem is get_nohz_timer_target() which is a completely
broken heuristics trying to predict which CPU is the proper target for
handling the timer some unspecified time in the future.
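
For the record, the heuristic in question looks roughly like this
(condensed from kernel/sched/core.c; details vary between kernel
versions):

    int get_nohz_timer_target(void)
    {
        int i, cpu = smp_processor_id(), default_cpu = -1;
        struct sched_domain *sd;

        /* A non-idle housekeeping CPU keeps its own timers. */
        if (housekeeping_cpu(cpu, HK_FLAG_TIMER)) {
            if (!idle_cpu(cpu))
                return cpu;
            default_cpu = cpu;
        }

        /* Otherwise guess: any nearby non-idle housekeeping CPU. */
        rcu_read_lock();
        for_each_domain(cpu, sd) {
            for_each_cpu_and(i, sched_domain_span(sd),
                     housekeeping_cpumask(HK_FLAG_TIMER)) {
                if (i != cpu && !idle_cpu(i)) {
                    cpu = i;
                    goto unlock;
                }
            }
        }

        if (default_cpu == -1)
            default_cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
        cpu = default_cpu;
    unlock:
        rcu_read_unlock();
        return cpu;
    }

Whatever it picks is a guess about the present, not about where the
timer should fire when it actually expires.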

In hindsight I regret that I even helped to merge that, but hindsight
is always 20/20. Is get_nohz_timer_target() anywhere near correct in
what it predicts? You surely know that it's not.

In fact your patch makes it even more semantically undefined simply
because it is solving the single-RT-thread-per-CPU use case which is
exposed by cyclictest. Is that universally true for all RT tasks and use
cases?

The wild west of anything which scratches 'my itch' based on 'my use
case numbers' in Linux ended many years ago and while RT was always a
valuable playground for unthinkable ideas we definitely tried hard not
to accept use case specific hacks without a proper justification that it
makes sense in general.

So why are you even trying to sell this to me?

get_nohz_timer_target() is broken by definition and while it made some
sense years ago despite its heuristic nature, this is something which
really needs to be cleaned up because it causes more trouble than it
solves. Tagging every other timer as pinned just to work around that
underlying nonsense is just wrong.

We have been working on getting rid of this, at least for the timer list
timers (which are admittedly the easier part of the problem), on and off
for years. I can't find the public links right now, but I'll ask
Anna-Maria to fill the void. Might take a bit, as she's AFK for a while.

Thanks,

        tglx
Thomas Gleixner June 19, 2021, 7:56 a.m. UTC | #2
On Sat, Jun 19 2021 at 01:35, Thomas Gleixner wrote:
> The wild west of anything which scratches 'my itch' based on 'my use
> case numbers' in Linux ended many years ago and while RT was always a
> valuable playground for unthinkable ideas we definitely tried hard not
> to accept use case specific hacks without a proper justification that it
> makes sense in general.
>
> So why are you even trying to sell this to me?

I wouldn't have been that grumpy if you'd at least checked whether the
task is pinned. Still I would have told you that you "fix" it in the
wrong place.

Why on earth is that nohz heuristic trainwreck not even checking that?
It's not an RT problem and it's not a problem restricted to RT tasks
either. If a task is pinned then arming the timer on a random other CPU
is blatant nonsense independent of the scheduling class.
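
IOW, the target selection should at the very least contain something
like the below (illustrative sketch only, not a tested patch, assuming
the timer is armed from the affected task's own context):

    /* A task which cannot migrate gets its timers on its own CPU. */
    if (current->nr_cpus_allowed == 1)
        return smp_processor_id();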

Thanks,

        tglx
Juri Lelli June 21, 2021, 5:35 a.m. UTC | #3
Hi,

On 19/06/21 09:56, Thomas Gleixner wrote:
> On Sat, Jun 19 2021 at 01:35, Thomas Gleixner wrote:
> > The wild west of anything which scratches 'my itch' based on 'my use
> > case numbers' in Linux ended many years ago and while RT was always a
> > valuable playground for unthinkable ideas we definitely tried hard not
> > to accept use case specific hacks without a proper justification that it
> > makes sense in general.
> >
> > So why are you even trying to sell this to me?
>
> I wouldn't have been that grumpy if you'd at least checked whether the
> task is pinned. Still I would have told you that you "fix" it in the
> wrong place.

Ah, indeed. Pulled the trigger too early, it seems. I'll ponder this
some more.

> Why on earth is that nohz heuristic trainwreck not even checking that?
> It's not an RT problem and it's not a problem restricted to RT tasks
> either. If a task is pinned then arming the timer on a random other CPU
> is blatant nonsense independent of the scheduling class.

Agree. Lemme look more into it.

Thanks for the comments!

Best,
Juri