Message ID: 1481301992-2344-1-git-send-email-ard.biesheuvel@linaro.org
State: New
On Fri, Dec 09, 2016 at 04:46:32PM +0000, Ard Biesheuvel wrote:
> Currently, we allow kernel mode NEON in softirq or hardirq context by
> stacking and unstacking a slice of the NEON register file for each call
> to kernel_neon_begin() and kernel_neon_end(), respectively.
>
> Given that
> a) a CPU typically spends most of its time in userland, during which time
>    no kernel mode NEON in process context is in progress,
> b) a CPU spends most of its time in the kernel doing other things than
>    kernel mode NEON when it gets interrupted to perform kernel mode NEON
>    in softirq context
>
> the stacking and subsequent unstacking is only necessary if we are
> interrupting a thread while it is performing kernel mode NEON in process
> context, which means that in all other cases, we can simply preserve the
> userland FPSIMD state once, and only restore it upon return to userland,
> even if we are being invoked from softirq or hardirq context.
>
> So instead of checking whether we are running in interrupt context, keep
> track of the level of nested kernel mode NEON calls in progress, and only
> perform the eager stack/unstack if the level exceeds 1.
>
> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

This looks good to me.

This should also make the SVE case trivial now: there can only be live
SVE state when in process context with !TIF_FOREIGN_FPSTATE, and the
SVE save/restore is then handled by fpsimd_{save,load}_state()
directly. For deeper nesting levels, there is already no live SVE
state, so kernel_neon_{save,load}_partial_state() are enough in that
case.

As and when KERNEL_MODE_SVE comes along this will need another look,
but this patch looks like a step forward for now.
Reviewed-by: Dave Martin <Dave.Martin@arm.com>

> ---
> v4:
> - use this_cpu_inc/dec, which give sufficient guarantees regarding
>   concurrency, but do not imply SMP barriers, which are not needed here
>
> v3:
> - avoid corruption by concurrent invocations of kernel_neon_begin()/_end()
>
> v2:
> - BUG() on unexpected values of the nesting level
> - relax the BUG() on num_regs>32 to a WARN, given that nothing actually
>   breaks in that case
>
>  arch/arm64/kernel/fpsimd.c | 47 ++++++++++++++------
>  1 file changed, 33 insertions(+), 14 deletions(-)
>
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 394c61db5566..37d6dfc9059b 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -220,20 +220,35 @@ void fpsimd_flush_task_state(struct task_struct *t)
>
>  #ifdef CONFIG_KERNEL_MODE_NEON
>
> -static DEFINE_PER_CPU(struct fpsimd_partial_state, hardirq_fpsimdstate);
> -static DEFINE_PER_CPU(struct fpsimd_partial_state, softirq_fpsimdstate);
> +/*
> + * Although unlikely, it is possible for three kernel mode NEON contexts to
> + * be live at the same time: process context, softirq context and hardirq
> + * context. So while the userland context is stashed in the thread's fpsimd
> + * state structure, we need two additional levels of storage.
> + */
> +static DEFINE_PER_CPU(struct fpsimd_partial_state, nested_fpsimdstate[2]);
> +static DEFINE_PER_CPU(int, kernel_neon_nesting_level);
>
>  /*
>   * Kernel-side NEON support functions
>   */
>  void kernel_neon_begin_partial(u32 num_regs)
>  {
> -	if (in_interrupt()) {
> -		struct fpsimd_partial_state *s = this_cpu_ptr(
> -			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
> +	struct fpsimd_partial_state *s;
> +	int level;
> +
> +	preempt_disable();
> +
> +	level = this_cpu_inc_return(kernel_neon_nesting_level);
> +	BUG_ON(level > 3);
> +
> +	if (level > 1) {
> +		s = this_cpu_ptr(nested_fpsimdstate);
>
> -		BUG_ON(num_regs > 32);
> -		fpsimd_save_partial_state(s, roundup(num_regs, 2));
> +		WARN_ON_ONCE(num_regs > 32);
> +		num_regs = min(roundup(num_regs, 2), 32U);
> +
> +		fpsimd_save_partial_state(&s[level - 2], num_regs);
>  	} else {
>  		/*
>  		 * Save the userland FPSIMD state if we have one and if we
> @@ -241,7 +256,6 @@ void kernel_neon_begin_partial(u32 num_regs)
>  		 * that there is no longer userland FPSIMD state in the
>  		 * registers.
>  		 */
> -		preempt_disable();
>  		if (current->mm &&
>  		    !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
>  			fpsimd_save_state(&current->thread.fpsimd_state);
> @@ -252,13 +266,18 @@ EXPORT_SYMBOL(kernel_neon_begin_partial);
>
>  void kernel_neon_end(void)
>  {
> -	if (in_interrupt()) {
> -		struct fpsimd_partial_state *s = this_cpu_ptr(
> -			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
> -		fpsimd_load_partial_state(s);
> -	} else {
> -		preempt_enable();
> +	struct fpsimd_partial_state *s;
> +	int level;
> +
> +	level = this_cpu_read(kernel_neon_nesting_level);
> +	BUG_ON(level < 1);
> +
> +	if (level > 1) {
> +		s = this_cpu_ptr(nested_fpsimdstate);
> +		fpsimd_load_partial_state(&s[level - 2]);
>  	}
> +	this_cpu_dec(kernel_neon_nesting_level);
> +	preempt_enable();
>  }
>  EXPORT_SYMBOL(kernel_neon_end);
>
> --
> 2.7.4

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
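For readers following the thread, the nesting bookkeeping in the patch above can be modelled as a small userspace program. This is a sketch only: plain globals stand in for the per-CPU variables and the NEON register file, an int stands in for the saved state, and the function names (neon_begin/neon_end) are illustrative, not the kernel API.

```c
#include <assert.h>

static int regs;            /* stands in for the NEON register file */
static int task_state;      /* current->thread.fpsimd_state */
static int nested_state[2]; /* nested_fpsimdstate[2] */
static int nesting_level;   /* kernel_neon_nesting_level */
static int tif_foreign;     /* TIF_FOREIGN_FPSTATE */

static void neon_begin(void)
{
    int level = ++nesting_level;        /* this_cpu_inc_return() */
    assert(level <= 3);                 /* BUG_ON(level > 3) */

    if (level > 1) {
        /* nested call: eagerly stack into one of the two spare slots */
        nested_state[level - 2] = regs;
    } else if (!tif_foreign) {
        /* outermost call: preserve the userland state exactly once */
        tif_foreign = 1;
        task_state = regs;
    }
}

static void neon_end(void)
{
    int level = nesting_level;          /* this_cpu_read() */
    assert(level >= 1);                 /* BUG_ON(level < 1) */

    if (level > 1)
        regs = nested_state[level - 2]; /* unstack the caller's contents */
    /* at level 1, the userland state is reloaded on return to userland */
    nesting_level--;
}
```

Nesting process context, softirq and hardirq gives levels 1, 2 and 3: only levels 2 and 3 touch the spare slots, and the userland contents are saved a single time at level 1, which is the whole point of the patch.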
On Fri, Dec 09, 2016 at 05:24:08PM +0000, Dave P Martin wrote:
> On Fri, Dec 09, 2016 at 04:46:32PM +0000, Ard Biesheuvel wrote:
> > Currently, we allow kernel mode NEON in softirq or hardirq context by
> > stacking and unstacking a slice of the NEON register file for each call
> > to kernel_neon_begin() and kernel_neon_end(), respectively.
> >
> > Given that
> > a) a CPU typically spends most of its time in userland, during which time
> >    no kernel mode NEON in process context is in progress,
> > b) a CPU spends most of its time in the kernel doing other things than
> >    kernel mode NEON when it gets interrupted to perform kernel mode NEON
> >    in softirq context
> >
> > the stacking and subsequent unstacking is only necessary if we are
> > interrupting a thread while it is performing kernel mode NEON in process
> > context, which means that in all other cases, we can simply preserve the
> > userland FPSIMD state once, and only restore it upon return to userland,
> > even if we are being invoked from softirq or hardirq context.
> >
> > So instead of checking whether we are running in interrupt context, keep
> > track of the level of nested kernel mode NEON calls in progress, and only
> > perform the eager stack/unstack if the level exceeds 1.
> >
> > Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>
> This looks good to me.
>
> This should also make the SVE case trivial now: there can only be live
> SVE state when in process context with !TIF_FOREIGN_FPSTATE, and the
> SVE save/restore is then handled by fpsimd_{save,load}_state()
> directly. For deeper nesting levels, there is already no live SVE
> state, so kernel_neon_{save,load}_partial_state() are enough in that
> case.

That's still tricky for SVE. If you get an interrupt in
kernel_neon_begin_partial() after the level has been incremented to 1
but before fpsimd_save_state() has been called, SVE-unaware
kernel_neon_{save,load}_partial_state() would corrupt the SVE state.
--
Catalin
On Fri, Dec 09, 2016 at 04:46:32PM +0000, Ard Biesheuvel wrote:
>  void kernel_neon_begin_partial(u32 num_regs)
>  {
> -	if (in_interrupt()) {
> -		struct fpsimd_partial_state *s = this_cpu_ptr(
> -			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
> +	struct fpsimd_partial_state *s;
> +	int level;
> +
> +	preempt_disable();
> +
> +	level = this_cpu_inc_return(kernel_neon_nesting_level);
> +	BUG_ON(level > 3);
> +
> +	if (level > 1) {
> +		s = this_cpu_ptr(nested_fpsimdstate);
>
> -		BUG_ON(num_regs > 32);
> -		fpsimd_save_partial_state(s, roundup(num_regs, 2));
> +		WARN_ON_ONCE(num_regs > 32);
> +		num_regs = min(roundup(num_regs, 2), 32U);
> +
> +		fpsimd_save_partial_state(&s[level - 2], num_regs);
>  	} else {
>  		/*
>  		 * Save the userland FPSIMD state if we have one and if we
> @@ -241,7 +256,6 @@ void kernel_neon_begin_partial(u32 num_regs)
>  		 * that there is no longer userland FPSIMD state in the
>  		 * registers.
>  		 */
> -		preempt_disable();
>  		if (current->mm &&
>  		    !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
>  			fpsimd_save_state(&current->thread.fpsimd_state);

I wonder whether we could actually do this saving and flag/level setting
in reverse to simplify the races. Something like your previous patch but
only set TIF_FOREIGN_FPSTATE after saving:

	level = this_cpu_read(kernel_neon_nesting_level);
	if (level > 0) {
		...
		fpsimd_save_partial_state();
	} else {
		if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
			fpsimd_save_state();
		set_thread_flag(TIF_FOREIGN_FPSTATE);
	}
	this_cpu_inc(kernel_neon_nesting_level);

There is a risk of extra saving if we get an interrupt after
test_thread_flag() and before set_thread_flag() but I don't think this
would corrupt any state, just writing things twice.

(disclaimer: I haven't thought of all the possible races and I'm not
entirely sure about the kernel_neon_end() part)

--
Catalin
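The benign double save described above can be exercised in a toy userspace model. This is illustrative only: ordinary globals replace the per-CPU state, the function names are invented, and the "interrupt" is simulated by a direct call placed in the window between the flag test and the flag set. It models only that specific window (the save has completed before the interrupt lands); it does not capture an interrupt arriving mid-save, which the rest of the thread identifies as the harder problem.

```c
#include <assert.h>

static int regs, task_state, level, tif_foreign;
static int nested_state[2];
static int full_saves;      /* how many times the task state was written */
static int irq_pending;

static void begin(void);
static void end_(void);

/* an interrupt handler that itself uses kernel mode NEON */
static void irq_handler(void)
{
    begin();
    regs = 999;             /* the handler clobbers the register file */
    end_();
}

static void begin(void)
{
    if (level > 0) {
        nested_state[level - 1] = regs;   /* partial save */
    } else {
        if (!tif_foreign) {
            task_state = regs;            /* fpsimd_save_state() */
            full_saves++;
            if (irq_pending) {            /* the race window: interrupt   */
                irq_pending = 0;          /* fires after the save, before */
                irq_handler();            /* set_thread_flag()            */
            }
        }
        tif_foreign = 1;                  /* set_thread_flag() */
    }
    level++;                              /* this_cpu_inc() */
}

static void end_(void)
{
    level--;                              /* this_cpu_dec() */
    if (level > 0)
        regs = nested_state[level - 1];   /* partial restore */
    /* at level 0 the userland state is reloaded on return to userland */
}
```

Driving this with an interrupt pending in the window shows the task state written twice with the same value: the interrupt sees level == 0 and the flag still clear, so it performs a second full save over the same buffer, but the buffer contents are unchanged.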
On Fri, Dec 09, 2016 at 06:21:55PM +0000, Catalin Marinas wrote:
> On Fri, Dec 09, 2016 at 04:46:32PM +0000, Ard Biesheuvel wrote:
> >  void kernel_neon_begin_partial(u32 num_regs)
> >  {
> > -	if (in_interrupt()) {
> > -		struct fpsimd_partial_state *s = this_cpu_ptr(
> > -			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
> > +	struct fpsimd_partial_state *s;
> > +	int level;
> > +
> > +	preempt_disable();
> > +
> > +	level = this_cpu_inc_return(kernel_neon_nesting_level);
> > +	BUG_ON(level > 3);
> > +
> > +	if (level > 1) {
> > +		s = this_cpu_ptr(nested_fpsimdstate);
> >
> > -		BUG_ON(num_regs > 32);
> > -		fpsimd_save_partial_state(s, roundup(num_regs, 2));
> > +		WARN_ON_ONCE(num_regs > 32);
> > +		num_regs = min(roundup(num_regs, 2), 32U);
> > +
> > +		fpsimd_save_partial_state(&s[level - 2], num_regs);
> >  	} else {
> >  		/*
> >  		 * Save the userland FPSIMD state if we have one and if we
> > @@ -241,7 +256,6 @@ void kernel_neon_begin_partial(u32 num_regs)
> >  		 * that there is no longer userland FPSIMD state in the
> >  		 * registers.
> >  		 */
> > -		preempt_disable();
> >  		if (current->mm &&
> >  		    !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
> >  			fpsimd_save_state(&current->thread.fpsimd_state);
>
> I wonder whether we could actually do this saving and flag/level setting
> in reverse to simplify the races. Something like your previous patch but
> only set TIF_FOREIGN_FPSTATE after saving:
>
> 	level = this_cpu_read(kernel_neon_nesting_level);
> 	if (level > 0) {
> 		...
> 		fpsimd_save_partial_state();
> 	} else {
> 		if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
> 			fpsimd_save_state();
> 		set_thread_flag(TIF_FOREIGN_FPSTATE);
> 	}
> 	this_cpu_inc(kernel_neon_nesting_level);
>
> There is a risk of extra saving if we get an interrupt after
> test_thread_flag() and before set_thread_flag() but I don't think this
> would corrupt any state, just writing things twice.

I would worry that we can save two states over the same buffer and then
restore an uninitialised buffer in this case unless we are careful.
Because the level-dependent code is now misbracketed by the inc/dec,
a preempting call races with the outer call and uses the same value.

I guess we could do

	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
		fpsimd_save_state();
	clear_thread_flag(TIF_FOREIGN_FPSTATE);

at the start unconditionally, before the _inc_return().

The task state may then get saved in the middle of being saved, but
as you say it shouldn't have changed in the meantime. The nested
save code may then do a partial save of the same state on top of that,
which could get restored at the inner kernel_neon_end() call.

> (disclaimer: I haven't thought of all the possible races and I'm not
> entirely sure about the kernel_neon_end() part)

Cheers
---Dave
On 9 December 2016 at 19:29, Dave Martin <Dave.Martin@arm.com> wrote:
> On Fri, Dec 09, 2016 at 06:21:55PM +0000, Catalin Marinas wrote:
>> On Fri, Dec 09, 2016 at 04:46:32PM +0000, Ard Biesheuvel wrote:
>> >  void kernel_neon_begin_partial(u32 num_regs)
>> >  {
>> > -	if (in_interrupt()) {
>> > -		struct fpsimd_partial_state *s = this_cpu_ptr(
>> > -			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
>> > +	struct fpsimd_partial_state *s;
>> > +	int level;
>> > +
>> > +	preempt_disable();
>> > +
>> > +	level = this_cpu_inc_return(kernel_neon_nesting_level);
>> > +	BUG_ON(level > 3);
>> > +
>> > +	if (level > 1) {
>> > +		s = this_cpu_ptr(nested_fpsimdstate);
>> >
>> > -		BUG_ON(num_regs > 32);
>> > -		fpsimd_save_partial_state(s, roundup(num_regs, 2));
>> > +		WARN_ON_ONCE(num_regs > 32);
>> > +		num_regs = min(roundup(num_regs, 2), 32U);
>> > +
>> > +		fpsimd_save_partial_state(&s[level - 2], num_regs);
>> >  	} else {
>> >  		/*
>> >  		 * Save the userland FPSIMD state if we have one and if we
>> > @@ -241,7 +256,6 @@ void kernel_neon_begin_partial(u32 num_regs)
>> >  		 * that there is no longer userland FPSIMD state in the
>> >  		 * registers.
>> >  		 */
>> > -		preempt_disable();
>> > 		if (current->mm &&
>> > 		    !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
>> > 			fpsimd_save_state(&current->thread.fpsimd_state);
>>
>> I wonder whether we could actually do this saving and flag/level setting
>> in reverse to simplify the races. Something like your previous patch but
>> only set TIF_FOREIGN_FPSTATE after saving:
>>
>> 	level = this_cpu_read(kernel_neon_nesting_level);
>> 	if (level > 0) {
>> 		...
>> 		fpsimd_save_partial_state();
>> 	} else {
>> 		if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
>> 			fpsimd_save_state();
>> 		set_thread_flag(TIF_FOREIGN_FPSTATE);
>> 	}
>> 	this_cpu_inc(kernel_neon_nesting_level);
>>
>> There is a risk of extra saving if we get an interrupt after
>> test_thread_flag() and before set_thread_flag() but I don't think this
>> would corrupt any state, just writing things twice.
>
> I would worry that we can save two states over the same buffer and then
> restore an uninitialised buffer in this case unless we are careful.
> Because the level-dependent code is now misbracketed by the inc/dec,
> a preempting call races with the outer call and uses the same value.
>
> I guess we could do
>
> 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
> 		fpsimd_save_state();
> 	clear_thread_flag(TIF_FOREIGN_FPSTATE);
>
> at the start unconditionally, before the _inc_return().
>
> The task state may then get saved in the middle of being saved, but
> as you say it shouldn't have changed in the meantime.

It /will/ have changed in the meantime: when the interrupted context
is resumed, it will happily proceed with saving the state where it
left off, but now the register file contains whatever was left after
the interrupt handler is done with the NEON.

> The nested
> save code may then do a partial save of the same state on top of that
> which could get restored at the inner kernel_neon_end() call.

I'm afraid the only way to deal with this correctly is to treat the
whole sequence as a critical section, which means execute it with
interrupts disabled.
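What such a critical section buys can also be sketched with a toy model. This is a sketch, not kernel code: local_irq_save()/local_irq_restore() are replaced by a flag that gates delivery of a simulated interrupt, so a pending interrupt can only land once the level update and the state save have completed as one unit. All names here are illustrative.

```c
#include <assert.h>

static int irqs_enabled = 1;
static int irq_pending;

static int regs, task_state, level, tif_foreign;
static int nested_state[2];

static void begin(void);
static void end_(void);

static void irq_handler(void)
{
    begin();
    regs = 999;                 /* the handler clobbers the register file */
    end_();
}

/* a pending interrupt is only delivered where interrupts are enabled */
static void maybe_deliver_irq(void)
{
    if (irqs_enabled && irq_pending) {
        irq_pending = 0;
        irq_handler();
    }
}

static void begin(void)
{
    irqs_enabled = 0;           /* local_irq_save() */
    level++;
    if (level > 1) {
        nested_state[level - 2] = regs;
    } else if (!tif_foreign) {
        tif_foreign = 1;
        task_state = regs;      /* cannot be interrupted mid-sequence */
    }
    irqs_enabled = 1;           /* local_irq_restore() */
    maybe_deliver_irq();        /* a pending interrupt lands only here */
}

static void end_(void)
{
    irqs_enabled = 0;
    if (level > 1)
        regs = nested_state[level - 2];
    level--;
    irqs_enabled = 1;
    maybe_deliver_irq();
}
```

An interrupt that arrives "during" begin() is held off until the bookkeeping and the save are consistent, so it observes level == 1 and takes the nested partial save/restore path, leaving the saved userland state intact.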
On Fri, Dec 09, 2016 at 08:57:20PM +0000, Ard Biesheuvel wrote:
> On 9 December 2016 at 19:29, Dave Martin <Dave.Martin@arm.com> wrote:
> > On Fri, Dec 09, 2016 at 06:21:55PM +0000, Catalin Marinas wrote:
> >> On Fri, Dec 09, 2016 at 04:46:32PM +0000, Ard Biesheuvel wrote:
> >> >  void kernel_neon_begin_partial(u32 num_regs)
> >> >  {
> >> > -	if (in_interrupt()) {
> >> > -		struct fpsimd_partial_state *s = this_cpu_ptr(
> >> > -			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
> >> > +	struct fpsimd_partial_state *s;
> >> > +	int level;
> >> > +
> >> > +	preempt_disable();
> >> > +
> >> > +	level = this_cpu_inc_return(kernel_neon_nesting_level);
> >> > +	BUG_ON(level > 3);
> >> > +
> >> > +	if (level > 1) {
> >> > +		s = this_cpu_ptr(nested_fpsimdstate);
> >> >
> >> > -		BUG_ON(num_regs > 32);
> >> > -		fpsimd_save_partial_state(s, roundup(num_regs, 2));
> >> > +		WARN_ON_ONCE(num_regs > 32);
> >> > +		num_regs = min(roundup(num_regs, 2), 32U);
> >> > +
> >> > +		fpsimd_save_partial_state(&s[level - 2], num_regs);
> >> >  	} else {
> >> >  		/*
> >> >  		 * Save the userland FPSIMD state if we have one and if we
> >> > @@ -241,7 +256,6 @@ void kernel_neon_begin_partial(u32 num_regs)
> >> >  		 * that there is no longer userland FPSIMD state in the
> >> >  		 * registers.
> >> >  		 */
> >> > -		preempt_disable();
> >> > 		if (current->mm &&
> >> > 		    !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
> >> > 			fpsimd_save_state(&current->thread.fpsimd_state);
> >>
> >> I wonder whether we could actually do this saving and flag/level setting
> >> in reverse to simplify the races. Something like your previous patch but
> >> only set TIF_FOREIGN_FPSTATE after saving:
> >>
> >> 	level = this_cpu_read(kernel_neon_nesting_level);
> >> 	if (level > 0) {
> >> 		...
> >> 		fpsimd_save_partial_state();
> >> 	} else {
> >> 		if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
> >> 			fpsimd_save_state();
> >> 		set_thread_flag(TIF_FOREIGN_FPSTATE);
> >> 	}
> >> 	this_cpu_inc(kernel_neon_nesting_level);
> >>
> >> There is a risk of extra saving if we get an interrupt after
> >> test_thread_flag() and before set_thread_flag() but I don't think this
> >> would corrupt any state, just writing things twice.
> >
> > I would worry that we can save two states over the same buffer and then
> > restore an uninitialised buffer in this case unless we are careful.
> > Because the level-dependent code is now misbracketed by the inc/dec,
> > a preempting call races with the outer call and uses the same value.
> >
> > I guess we could do
> >
> > 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
> > 		fpsimd_save_state();
> > 	clear_thread_flag(TIF_FOREIGN_FPSTATE);
> >
> > at the start unconditionally, before the _inc_return().
> >
> > The task state may then get saved in the middle of being saved, but
> > as you say it shouldn't have changed in the meantime.
>
> It /will/ have changed in the meantime: when the interrupted context
> is resumed, it will happily proceed with saving the state where it
> left off, but now the register file contains whatever was left after
> the interrupt handler is done with the NEON.

Hmmm, true. The NEON regs will have been restored by kernel_neon_end()
in the inner context, but the extra SVE bits won't have been.

> > The nested
> > save code may then do a partial save of the same state on top of that
> > which could get restored at the inner kernel_neon_end() call.
>
> I'm afraid the only way to deal with this correctly is to treat the
> whole sequence as a critical section, which means execute it with
> interrupts disabled.

Or we make the KERNEL_MODE_NEON code SVE-aware, which is where I started
off. In that case, we do SVE (partial) save/restore whenever
kernel_mode_neon() is called with live SVE state. The change here is
that we would consider that there is always live SVE state until the
fpsimd_save_state() actually finishes at the outer level. We may want
to delay the setting of TIF_FOREIGN_FPSTATE for that purpose.

This means you do take an additional latency hit if you want to use NEON
in an interrupting context and there happens to be live SVE state. It's
a consequence of the architecture though -- I don't think there's any
way to get around it. We can still scale the cost by implementing
sve_save_partial_state() or something equivalent.

Your original inc()+save() ... restore()+dec() seems sound enough if
viewed this way. Unless I'm missing something?

Cheers
---Dave
On 12 December 2016 at 10:35, Dave Martin <Dave.Martin@arm.com> wrote:
> On Fri, Dec 09, 2016 at 08:57:20PM +0000, Ard Biesheuvel wrote:
>> On 9 December 2016 at 19:29, Dave Martin <Dave.Martin@arm.com> wrote:
>> > On Fri, Dec 09, 2016 at 06:21:55PM +0000, Catalin Marinas wrote:
>> >> On Fri, Dec 09, 2016 at 04:46:32PM +0000, Ard Biesheuvel wrote:
>> >> >  void kernel_neon_begin_partial(u32 num_regs)
>> >> >  {
>> >> > -	if (in_interrupt()) {
>> >> > -		struct fpsimd_partial_state *s = this_cpu_ptr(
>> >> > -			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
>> >> > +	struct fpsimd_partial_state *s;
>> >> > +	int level;
>> >> > +
>> >> > +	preempt_disable();
>> >> > +
>> >> > +	level = this_cpu_inc_return(kernel_neon_nesting_level);
>> >> > +	BUG_ON(level > 3);
>> >> > +
>> >> > +	if (level > 1) {
>> >> > +		s = this_cpu_ptr(nested_fpsimdstate);
>> >> >
>> >> > -		BUG_ON(num_regs > 32);
>> >> > -		fpsimd_save_partial_state(s, roundup(num_regs, 2));
>> >> > +		WARN_ON_ONCE(num_regs > 32);
>> >> > +		num_regs = min(roundup(num_regs, 2), 32U);
>> >> > +
>> >> > +		fpsimd_save_partial_state(&s[level - 2], num_regs);
>> >> >  	} else {
>> >> >  		/*
>> >> >  		 * Save the userland FPSIMD state if we have one and if we
>> >> > @@ -241,7 +256,6 @@ void kernel_neon_begin_partial(u32 num_regs)
>> >> >  		 * that there is no longer userland FPSIMD state in the
>> >> >  		 * registers.
>> >> >  		 */
>> >> > -		preempt_disable();
>> >> > 		if (current->mm &&
>> >> > 		    !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
>> >> > 			fpsimd_save_state(&current->thread.fpsimd_state);
>> >>
>> >> I wonder whether we could actually do this saving and flag/level setting
>> >> in reverse to simplify the races. Something like your previous patch but
>> >> only set TIF_FOREIGN_FPSTATE after saving:
>> >>
>> >> 	level = this_cpu_read(kernel_neon_nesting_level);
>> >> 	if (level > 0) {
>> >> 		...
>> >> 		fpsimd_save_partial_state();
>> >> 	} else {
>> >> 		if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
>> >> 			fpsimd_save_state();
>> >> 		set_thread_flag(TIF_FOREIGN_FPSTATE);
>> >> 	}
>> >> 	this_cpu_inc(kernel_neon_nesting_level);
>> >>
>> >> There is a risk of extra saving if we get an interrupt after
>> >> test_thread_flag() and before set_thread_flag() but I don't think this
>> >> would corrupt any state, just writing things twice.
>> >
>> > I would worry that we can save two states over the same buffer and then
>> > restore an uninitialised buffer in this case unless we are careful.
>> > Because the level-dependent code is now misbracketed by the inc/dec,
>> > a preempting call races with the outer call and uses the same value.
>> >
>> > I guess we could do
>> >
>> > 	if (!test_thread_flag(TIF_FOREIGN_FPSTATE))
>> > 		fpsimd_save_state();
>> > 	clear_thread_flag(TIF_FOREIGN_FPSTATE);
>> >
>> > at the start unconditionally, before the _inc_return().
>> >
>> > The task state may then get saved in the middle of being saved, but
>> > as you say it shouldn't have changed in the meantime.
>>
>> It /will/ have changed in the meantime: when the interrupted context
>> is resumed, it will happily proceed with saving the state where it
>> left off, but now the register file contains whatever was left after
>> the interrupt handler is done with the NEON.
>
> Hmmm, true. The NEON regs will have been restored by kernel_neon_end()
> in the inner context, but the extra SVE bits won't have been.

Even worse: both the interrupter and the interruptee think they are
preserving the userland context, so once the interrupter is done, it
will not restore the context as it found it. The interruptee will then
proceed and write whatever is left in those registers into the saved
state.

>> > The nested
>> > save code may then do a partial save of the same state on top of that
>> > which could get restored at the inner kernel_neon_end() call.
>>
>> I'm afraid the only way to deal with this correctly is to treat the
>> whole sequence as a critical section, which means execute it with
>> interrupts disabled.
>
> Or we make the KERNEL_MODE_NEON code SVE-aware, which is where I started
> off. In that case, we do SVE (partial) save/restore whenever
> kernel_mode_neon() is called with live SVE state. The change here is
> that we would consider that there is always live SVE state until the
> fpsimd_save_state() actually finishes at the outer level. We may want
> to delay the setting of TIF_FOREIGN_FPSTATE for that purpose.
>
> This means you do take an additional latency hit if you want to use NEON
> in an interrupting context and there happens to be live SVE state. It's
> a consequence of the architecture though -- I don't think there's any
> way to get around it. We can still scale the cost by implementing
> sve_save_partial_state() or something equivalent.
>
> Your original inc()+save() ... restore()+dec() seems sound enough if
> viewed this way. Unless I'm missing something?

I think having a small critical section is not so bad. Let me send out
a v5 so we can discuss ...
diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
index 394c61db5566..37d6dfc9059b 100644
--- a/arch/arm64/kernel/fpsimd.c
+++ b/arch/arm64/kernel/fpsimd.c
@@ -220,20 +220,35 @@ void fpsimd_flush_task_state(struct task_struct *t)

 #ifdef CONFIG_KERNEL_MODE_NEON

-static DEFINE_PER_CPU(struct fpsimd_partial_state, hardirq_fpsimdstate);
-static DEFINE_PER_CPU(struct fpsimd_partial_state, softirq_fpsimdstate);
+/*
+ * Although unlikely, it is possible for three kernel mode NEON contexts to
+ * be live at the same time: process context, softirq context and hardirq
+ * context. So while the userland context is stashed in the thread's fpsimd
+ * state structure, we need two additional levels of storage.
+ */
+static DEFINE_PER_CPU(struct fpsimd_partial_state, nested_fpsimdstate[2]);
+static DEFINE_PER_CPU(int, kernel_neon_nesting_level);

 /*
  * Kernel-side NEON support functions
  */
 void kernel_neon_begin_partial(u32 num_regs)
 {
-	if (in_interrupt()) {
-		struct fpsimd_partial_state *s = this_cpu_ptr(
-			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
+	struct fpsimd_partial_state *s;
+	int level;
+
+	preempt_disable();
+
+	level = this_cpu_inc_return(kernel_neon_nesting_level);
+	BUG_ON(level > 3);
+
+	if (level > 1) {
+		s = this_cpu_ptr(nested_fpsimdstate);

-		BUG_ON(num_regs > 32);
-		fpsimd_save_partial_state(s, roundup(num_regs, 2));
+		WARN_ON_ONCE(num_regs > 32);
+		num_regs = min(roundup(num_regs, 2), 32U);
+
+		fpsimd_save_partial_state(&s[level - 2], num_regs);
 	} else {
 		/*
 		 * Save the userland FPSIMD state if we have one and if we
@@ -241,7 +256,6 @@ void kernel_neon_begin_partial(u32 num_regs)
 		 * that there is no longer userland FPSIMD state in the
 		 * registers.
 		 */
-		preempt_disable();
 		if (current->mm &&
 		    !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
 			fpsimd_save_state(&current->thread.fpsimd_state);
@@ -252,13 +266,18 @@ EXPORT_SYMBOL(kernel_neon_begin_partial);

 void kernel_neon_end(void)
 {
-	if (in_interrupt()) {
-		struct fpsimd_partial_state *s = this_cpu_ptr(
-			in_irq() ? &hardirq_fpsimdstate : &softirq_fpsimdstate);
-		fpsimd_load_partial_state(s);
-	} else {
-		preempt_enable();
+	struct fpsimd_partial_state *s;
+	int level;
+
+	level = this_cpu_read(kernel_neon_nesting_level);
+	BUG_ON(level < 1);
+
+	if (level > 1) {
+		s = this_cpu_ptr(nested_fpsimdstate);
+		fpsimd_load_partial_state(&s[level - 2]);
 	}
+	this_cpu_dec(kernel_neon_nesting_level);
+	preempt_enable();
 }
 EXPORT_SYMBOL(kernel_neon_end);
Currently, we allow kernel mode NEON in softirq or hardirq context by
stacking and unstacking a slice of the NEON register file for each call
to kernel_neon_begin() and kernel_neon_end(), respectively.

Given that
a) a CPU typically spends most of its time in userland, during which time
   no kernel mode NEON in process context is in progress,
b) a CPU spends most of its time in the kernel doing other things than
   kernel mode NEON when it gets interrupted to perform kernel mode NEON
   in softirq context

the stacking and subsequent unstacking is only necessary if we are
interrupting a thread while it is performing kernel mode NEON in process
context, which means that in all other cases, we can simply preserve the
userland FPSIMD state once, and only restore it upon return to userland,
even if we are being invoked from softirq or hardirq context.

So instead of checking whether we are running in interrupt context, keep
track of the level of nested kernel mode NEON calls in progress, and only
perform the eager stack/unstack if the level exceeds 1.

Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
---
v4:
- use this_cpu_inc/dec, which give sufficient guarantees regarding
  concurrency, but do not imply SMP barriers, which are not needed here

v3:
- avoid corruption by concurrent invocations of kernel_neon_begin()/_end()

v2:
- BUG() on unexpected values of the nesting level
- relax the BUG() on num_regs>32 to a WARN, given that nothing actually
  breaks in that case

 arch/arm64/kernel/fpsimd.c | 47 ++++++++++++++------
 1 file changed, 33 insertions(+), 14 deletions(-)

--
2.7.4