Message ID: 20210616071705.166658-1-juri.lelli@redhat.com
Series: Add PINNED_HARD mode to hrtimers
Juri,

On Wed, Jun 16 2021 at 09:17, Juri Lelli wrote:
> While running cyclictest on isolated CPUs with timer_migration enabled,
> I noticed the following behavior, where CPU0 is one of the housekeeping
> CPUs and CPU2 is isolated:
>
> <idle>-0        [000] ... hrtimer_cancel: hrtimer=0xffffb4a74be7fe70
> <idle>-0        [000] ... hrtimer_expire_entry: hrtimer=0xffffb4a74be7fe70 now=144805770984 function=hrtimer_wakeup/0x0
> <idle>-0        [000] ... sched_wakeup: cyclictest:1171 [4] success=1 CPU:002
> <idle>-0        [000] ... hrtimer_expire_exit: hrtimer=0xffffb4a74be7fe70
> <idle>-0        [002] ... sched_switch: swapper/2:0 [120] R ==> cyclictest:1171 [4]
> cyclictest-1171 [002] ... hrtimer_init: hrtimer=0xffffb4a74be7fe70 clockid=CLOCK_MONOTONIC mode=0x8
> cyclictest-1171 [002] ... hrtimer_start: hrtimer=0xffffb4a74be7fe70 function=hrtimer_wakeup/0x0 ...
> cyclictest-1171 [002] ... sched_switch: cyclictest:1171 [4] S ==> swapper/2:0 [120]
>
> While cyclictest was arming the hrtimer while running on isolated CPU2
> (by means of clock_nanosleep), the hrtimer then fired on CPU0. This is
> because switch_hrtimer_base(), called at hrtimer enqueue time, prefers
> to enqueue the timer on a non-idle housekeeping CPU if the timer is not
> pinned and timer_migration is enabled.
>
> The problem with this is that we are measuring wake-up latencies across
> isolated and !isolated domains, which defeats the purpose of
> configuring the latter.
>
> Since PREEMPT_RT already forces HARD mode for hrtimers armed by tasks
> running with RT policies, it makes sense to also force PINNED mode
> under the same conditions.
>
> This patch implements this behavior, achieving something like the

  git grep 'This patch' Documentation/process

Also look at the recommended usage of 'We, I' while at it.

> @@ -55,6 +55,8 @@ enum hrtimer_mode {
>  	HRTIMER_MODE_ABS_HARD = HRTIMER_MODE_ABS | HRTIMER_MODE_HARD,
>  	HRTIMER_MODE_REL_HARD = HRTIMER_MODE_REL | HRTIMER_MODE_HARD,
>
> +	HRTIMER_MODE_PINNED_HARD = HRTIMER_MODE_PINNED | HRTIMER_MODE_HARD,
> +
>  	HRTIMER_MODE_ABS_PINNED_HARD = HRTIMER_MODE_ABS_PINNED | HRTIMER_MODE_HARD,
>  	HRTIMER_MODE_REL_PINNED_HARD = HRTIMER_MODE_REL_PINNED | HRTIMER_MODE_HARD,
>  };
> diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
> index 3fa18a01f5b2..f64954d5c8f8 100644
> --- a/kernel/time/hrtimer.c
> +++ b/kernel/time/hrtimer.c
> @@ -1842,7 +1842,7 @@ static void __hrtimer_init_sleeper(struct hrtimer_sleeper *sl,
>  	 */
>  	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
>  		if (task_is_realtime(current) && !(mode & HRTIMER_MODE_SOFT))
> -			mode |= HRTIMER_MODE_HARD;
> +			mode |= HRTIMER_MODE_PINNED_HARD;
>  	}
>
>  	__hrtimer_init(&sl->timer, clock_id, mode);

It makes sense to some extent, but in fact you are curing the symptom.
The root cause is that all of this is semantically ill defined.

The underlying problem is get_nohz_timer_target(), which is a completely
broken heuristic trying to predict which CPU is the proper target for
handling the timer at some unspecified time in the future. In hindsight
I regret that I even helped to merge that, but hindsight.

Is get_nohz_timer_target() anything near correct for some unspecified
reason? You surely know that it's not. In fact your patch makes it even
more semantically undefined, simply because it solves the single
RT-thread-per-CPU use case which is exposed by cyclictest. Is that
universally true for all RT tasks and use cases?
The wild west of anything which scratches 'my itch' based on 'my use
case numbers' in Linux ended many years ago, and while RT was always a
valuable playground for unthinkable ideas, we definitely tried hard not
to accept use-case-specific hacks without a proper justification that
they make sense in general.

So why are you even trying to sell this to me?

get_nohz_timer_target() is broken by definition, and while it made some
sense years ago despite its heuristic nature, this is something which
really needs to be cleaned up because it causes more trouble than it
solves. Tagging every other timer as pinned just to work around that
underlying nonsense is just wrong.

We have been working on getting rid of this, at least for the timer list
timers (which are admittedly the easier part of the problem), on and off
for years. I can't find the public links right now, but I'll ask
Anna-Maria to fill the void. Might take a while as she's AFK for a
while.

Thanks,

        tglx
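For context: the heuristic tglx refers to lives in kernel/sched/core.c.
A condensed sketch of its shape around the time of this thread (~v5.13;
trimmed for readability, not verbatim kernel source):

/*
 * Pick a CPU to run a non-pinned timer on: condensed sketch of
 * get_nohz_timer_target() from kernel/sched/core.c (~v5.13).
 */
int get_nohz_timer_target(void)
{
	int i, cpu = smp_processor_id(), default_cpu = -1;
	struct sched_domain *sd;

	/* Prefer this CPU if it is a non-idle housekeeping CPU. */
	if (housekeeping_cpu(cpu, HK_FLAG_TIMER)) {
		if (!idle_cpu(cpu))
			return cpu;
		default_cpu = cpu;
	}

	/* Otherwise guess: the nearest non-idle housekeeping CPU. */
	rcu_read_lock();
	for_each_domain(cpu, sd) {
		for_each_cpu_and(i, sched_domain_span(sd),
				 housekeeping_cpumask(HK_FLAG_TIMER)) {
			if (cpu != i && !idle_cpu(i)) {
				cpu = i;
				goto unlock;
			}
		}
	}

	if (default_cpu == -1)
		default_cpu = housekeeping_any_cpu(HK_FLAG_TIMER);
	cpu = default_cpu;
unlock:
	rcu_read_unlock();
	return cpu;
}

Note the prediction built into it: "not idle right now" is taken as a
proxy for "cheap to service the timer when it fires later", which is
exactly the guesswork tglx objects to.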
On Sat, Jun 19 2021 at 01:35, Thomas Gleixner wrote:
> The wild west of anything which scratches 'my itch' based on 'my use
> case numbers' in Linux ended many years ago, and while RT was always a
> valuable playground for unthinkable ideas, we definitely tried hard not
> to accept use-case-specific hacks without a proper justification that
> they make sense in general.
>
> So why are you even trying to sell this to me?

I wouldn't have been that grumpy if you'd at least checked whether the
task is pinned. Still, I would have told you that you "fix" it in the
wrong place.

Why on earth is that nohz heuristic trainwreck not even checking that?
It's not an RT problem, and it's not a problem restricted to RT tasks
either. If a task is pinned, then arming the timer on a random other
CPU is blatant nonsense, independent of the scheduling class.

Thanks,

        tglx
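Concretely, the missing affinity check tglx describes could look
roughly like the following. Hypothetical sketch only, not a patch
posted in this thread; it relies on the fact that a task affined to
exactly one CPU has nr_cpus_allowed == 1:

/*
 * Hypothetical illustration, not from this thread: bail out of the
 * migration heuristic early when the arming task is pinned,
 * regardless of its scheduling class.
 */
int get_nohz_timer_target(void)
{
	int cpu = smp_processor_id();

	/*
	 * A task affined to a single CPU can never run elsewhere, so
	 * migrating its wakeup timer to another CPU is never useful.
	 */
	if (current->nr_cpus_allowed == 1)
		return cpu;

	/* ... fall through to the existing housekeeping heuristic ... */
	if (housekeeping_cpu(cpu, HK_FLAG_TIMER) && !idle_cpu(cpu))
		return cpu;

	return housekeeping_any_cpu(HK_FLAG_TIMER);
}

The point of tglx's remark is that such a check belongs in the common
heuristic itself, not behind an RT-specific PINNED mode flag.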
Hi,

On 19/06/21 09:56, Thomas Gleixner wrote:
> On Sat, Jun 19 2021 at 01:35, Thomas Gleixner wrote:
> > The wild west of anything which scratches 'my itch' based on 'my use
> > case numbers' in Linux ended many years ago, and while RT was always
> > a valuable playground for unthinkable ideas, we definitely tried hard
> > not to accept use-case-specific hacks without a proper justification
> > that they make sense in general.
> >
> > So why are you even trying to sell this to me?
>
> I wouldn't have been that grumpy if you'd at least checked whether the
> task is pinned. Still, I would have told you that you "fix" it in the
> wrong place.

Ah, indeed. Pulled the trigger too early, it seems. I'll ponder more.

> Why on earth is that nohz heuristic trainwreck not even checking that?
> It's not an RT problem, and it's not a problem restricted to RT tasks
> either. If a task is pinned, then arming the timer on a random other
> CPU is blatant nonsense, independent of the scheduling class.

Agreed. Lemme look more into it.

Thanks for the comments!

Best,
Juri