Message ID | 1495856035-6622-4-git-send-email-john.stultz@linaro.org |
---|---|
State | Superseded |
Series | Fixes for two recently found timekeeping bugs |
On Fri, May 26, 2017 at 08:33:54PM -0700, John Stultz wrote:
> From: Will Deacon <will.deacon@arm.com>
>
> Commit 45a7905fc48f ("arm64: vdso: defer shifting of nanosecond
> component of timespec") fixed sub-ns inaccuracies in our vDSO
> clock_gettime implementation by deferring the right-shift of the
> nanoseconds components until after the timespec addition, which
> operates on left-shifted values. That worked nicely until
> support for CLOCK_MONOTONIC_RAW was added in 49eea433b326
> ("arm64: Add support for CLOCK_MONOTONIC_RAW in clock_gettime()
> vDSO"). Noticing that the core timekeeping code never set
> tkr_raw.xtime_nsec, the vDSO implementation didn't bother
> exposing it via the data page and instead took the unshifted
> tk->raw_time.tv_nsec value which was then immediately shifted
> left in the vDSO code.
>
> Now that the core code is actually setting tkr_raw.xtime_nsec,
> we need to take that into account in the vDSO by adding it to
> the shifted raw_time value. Rather than do that at each use (and
> expand the data page in the process), instead perform the
> shift/addition operation when populating the data page and
> remove the shift from the vDSO code entirely.
>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Miroslav Lichvar <mlichvar@redhat.com>
> Cc: Richard Cochran <richardcochran@gmail.com>
> Cc: Prarit Bhargava <prarit@redhat.com>
> Cc: Stephen Boyd <stephen.boyd@linaro.org>
> Cc: Kevin Brodsky <kevin.brodsky@arm.com>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Daniel Mentz <danielmentz@google.com>
> Reported-by: John Stultz <john.stultz@linaro.org>
> Acked-by: Acked-by: Kevin Brodsky <kevin.brodsky@arm.com>

I don't think Kevin liked it *that* much ^^

Will
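The shifted-nanosecond bookkeeping the quoted commit message describes can be sketched in plain C: update_vsyscall() now stores `(raw_time.tv_nsec << tkr_raw.shift) + tkr_raw.xtime_nsec` in the data page, so the vDSO only has to add the (equally left-shifted) cycle contribution and right-shift once at the end. The sketch below is purely illustrative; the simplified struct and the example values are invented for the sketch and are not kernel code.

```c
/*
 * Illustrative model (not kernel code) of the shifted-nanosecond
 * arithmetic described in the commit message.  Field names mirror
 * the kernel's timekeeper, but the struct is a stand-in.
 */
#include <stdint.h>
#include <stdio.h>

struct tkr_sketch {
	uint64_t xtime_nsec;	/* left-shifted nanosecond accumulator */
	uint32_t shift;		/* clocksource shift */
};

int main(void)
{
	/* Invented example values. */
	uint64_t raw_time_tv_nsec = 500;
	struct tkr_sketch tkr_raw = { .xtime_nsec = 100, .shift = 8 };

	/*
	 * What update_vsyscall() now publishes in the data page: the raw
	 * nanoseconds pre-shifted and combined with tkr_raw.xtime_nsec,
	 * so the vDSO no longer performs the shift itself.
	 */
	uint64_t raw_time_nsec = (raw_time_tv_nsec << tkr_raw.shift)
				 + tkr_raw.xtime_nsec;

	/*
	 * The vDSO adds the still-left-shifted cycle contribution and
	 * right-shifts only once at the end, preserving the sub-ns
	 * precision that an early per-term shift would discard.
	 */
	uint64_t shifted_delta = 3 << tkr_raw.shift;	/* stand-in value */
	uint64_t nsec = (raw_time_nsec + shifted_delta) >> tkr_raw.shift;

	printf("final nsec = %llu\n", (unsigned long long)nsec);
	return 0;
}
```

Doing the shift/add once at update time keeps the data page layout unchanged and removes a shift from every CLOCK_MONOTONIC_RAW call in the vDSO, which is what the patch below implements.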
```diff
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 41b6e31..d0cb007 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -221,10 +221,11 @@ void update_vsyscall(struct timekeeper *tk)
 	/* tkr_mono.cycle_last == tkr_raw.cycle_last */
 	vdso_data->cs_cycle_last	= tk->tkr_mono.cycle_last;
 	vdso_data->raw_time_sec		= tk->raw_time.tv_sec;
-	vdso_data->raw_time_nsec	= tk->raw_time.tv_nsec;
+	vdso_data->raw_time_nsec	= (tk->raw_time.tv_nsec <<
+					   tk->tkr_raw.shift) +
+					  tk->tkr_raw.xtime_nsec;
 	vdso_data->xtime_clock_sec	= tk->xtime_sec;
 	vdso_data->xtime_clock_nsec	= tk->tkr_mono.xtime_nsec;
-	/* tkr_raw.xtime_nsec == 0 */
 	vdso_data->cs_mono_mult		= tk->tkr_mono.mult;
 	vdso_data->cs_raw_mult		= tk->tkr_raw.mult;
 	/* tkr_mono.shift == tkr_raw.shift */
diff --git a/arch/arm64/kernel/vdso/gettimeofday.S b/arch/arm64/kernel/vdso/gettimeofday.S
index e00b467..76320e9 100644
--- a/arch/arm64/kernel/vdso/gettimeofday.S
+++ b/arch/arm64/kernel/vdso/gettimeofday.S
@@ -256,7 +256,6 @@ monotonic_raw:
 	seqcnt_check fail=monotonic_raw
 	/* All computations are done with left-shifted nsecs. */
-	lsl	x14, x14, x12
 	get_nsec_per_sec res=x9
 	lsl	x9, x9, x12
```
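For reference, the path this diff touches is the one userspace hits via clock_gettime(CLOCK_MONOTONIC_RAW), which arm64 serves from the vDSO. A minimal sanity check, included here only as a generic illustration (it is not part of the patch or the thread), would poll the clock and verify that tv_nsec stays in range and the reading never goes backwards:

```c
/*
 * Generic userspace sanity check for CLOCK_MONOTONIC_RAW; not part of
 * the patch, just an illustration of the code path being fixed.
 */
#define _GNU_SOURCE		/* for CLOCK_MONOTONIC_RAW on older setups */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
	struct timespec prev, cur;

	if (clock_gettime(CLOCK_MONOTONIC_RAW, &prev)) {
		perror("clock_gettime");
		return EXIT_FAILURE;
	}

	for (int i = 0; i < 1000000; i++) {
		clock_gettime(CLOCK_MONOTONIC_RAW, &cur);

		/* Nanoseconds must stay within [0, 1e9). */
		if (cur.tv_nsec < 0 || cur.tv_nsec >= 1000000000L) {
			fprintf(stderr, "tv_nsec out of range: %ld\n",
				cur.tv_nsec);
			return EXIT_FAILURE;
		}
		/* The raw clock must never move backwards. */
		if (cur.tv_sec < prev.tv_sec ||
		    (cur.tv_sec == prev.tv_sec && cur.tv_nsec < prev.tv_nsec)) {
			fprintf(stderr, "time went backwards\n");
			return EXIT_FAILURE;
		}
		prev = cur;
	}

	puts("CLOCK_MONOTONIC_RAW looks sane");
	return EXIT_SUCCESS;
}
```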