Message ID | 20180315203050.19791-13-andre.przywara@linaro.org
---|---
State | Superseded
Series | New VGIC(-v2) implementation
Hi Andre,

On 03/15/2018 08:30 PM, Andre Przywara wrote:
> The event channel IRQ has level triggered semantics, however the current
> VGIC treats everything as edge triggered.
> To correctly process those IRQs, we have to lower the (virtual) IRQ line
> at some point in time, depending on whether the interrupt condition
> still prevails.
> Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
> line match its status, and call this function upon every hypervisor
> entry.
>
> Signed-off-by: Andre Przywara <andre.przywara@linaro.org>

Reviewed-by: Julien Grall <julien.grall@arm.com>

Cheers,

> ---
>  xen/arch/arm/domain.c       | 7 +++++++
>  xen/arch/arm/traps.c        | 1 +
>  xen/include/asm-arm/event.h | 1 +
>  3 files changed, 9 insertions(+)
>
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 4462e62599..18b915d2e9 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -954,6 +954,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
>      vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
>  }
>
> +void vcpu_update_evtchn_irq(struct vcpu *v)
> +{
> +    bool pending = vcpu_info(v, evtchn_upcall_pending);
> +
> +    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
> +}
> +
>  /* The ARM spec declares that even if local irqs are masked in
>   * the CPSR register, an irq should wake up a cpu from WFI anyway.
>   * For this reason we need to check for irqs that need delivery,
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 46464d7bb9..c13223a69f 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
>       * trap and how it can be optimised.
>       */
>      vtimer_sync(current);
> +    vcpu_update_evtchn_irq(current);
>  #endif
>
>      vgic_sync_from_lrs(current);
> diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
> index c7a415ef57..2f51864043 100644
> --- a/xen/include/asm-arm/event.h
> +++ b/xen/include/asm-arm/event.h
> @@ -6,6 +6,7 @@
>
>  void vcpu_kick(struct vcpu *v);
>  void vcpu_mark_events_pending(struct vcpu *v);
> +void vcpu_update_evtchn_irq(struct vcpu *v);
>  void vcpu_block_unless_event_pending(struct vcpu *v);
>
>  static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 4462e62599..18b915d2e9 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -954,6 +954,13 @@ void vcpu_mark_events_pending(struct vcpu *v)
     vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, true);
 }
 
+void vcpu_update_evtchn_irq(struct vcpu *v)
+{
+    bool pending = vcpu_info(v, evtchn_upcall_pending);
+
+    vgic_inject_irq(v->domain, v, v->domain->arch.evtchn_irq, pending);
+}
+
 /* The ARM spec declares that even if local irqs are masked in
  * the CPSR register, an irq should wake up a cpu from WFI anyway.
  * For this reason we need to check for irqs that need delivery,
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 46464d7bb9..c13223a69f 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -2033,6 +2033,7 @@ static void enter_hypervisor_head(struct cpu_user_regs *regs)
      * trap and how it can be optimised.
      */
     vtimer_sync(current);
+    vcpu_update_evtchn_irq(current);
 #endif
 
     vgic_sync_from_lrs(current);
diff --git a/xen/include/asm-arm/event.h b/xen/include/asm-arm/event.h
index c7a415ef57..2f51864043 100644
--- a/xen/include/asm-arm/event.h
+++ b/xen/include/asm-arm/event.h
@@ -6,6 +6,7 @@
 
 void vcpu_kick(struct vcpu *v);
 void vcpu_mark_events_pending(struct vcpu *v);
+void vcpu_update_evtchn_irq(struct vcpu *v);
 void vcpu_block_unless_event_pending(struct vcpu *v);
 
 static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)
The event channel IRQ has level triggered semantics, however the current
VGIC treats everything as edge triggered.
To correctly process those IRQs, we have to lower the (virtual) IRQ line
at some point in time, depending on whether the interrupt condition
still prevails.
Check the per-VCPU evtchn_upcall_pending variable to make the interrupt
line match its status, and call this function upon every hypervisor
entry.

Signed-off-by: Andre Przywara <andre.przywara@linaro.org>
---
 xen/arch/arm/domain.c       | 7 +++++++
 xen/arch/arm/traps.c        | 1 +
 xen/include/asm-arm/event.h | 1 +
 3 files changed, 9 insertions(+)
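For readers unfamiliar with the edge/level distinction the patch addresses,
here is a minimal standalone C model of why the line must be resampled. This
is not Xen code: evtchn_upcall_pending, virq_line, and both functions below
are simplified stand-ins for the real per-vCPU flag, the VGIC line state, and
the two entry points in the patch.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the per-vCPU flag in the shared info page that
 * the guest clears once it has handled all pending event channels. */
static bool evtchn_upcall_pending;

/* Models the virtual IRQ line the VGIC presents to the vCPU. */
static bool virq_line;

/* Edge-style treatment: injection only ever raises the line, so a stale
 * interrupt would linger after the guest has cleared the pending flag. */
static void mark_events_pending(void)
{
    virq_line = true;
}

/* Level-style treatment (what the patch adds): on every hypervisor entry,
 * resample the interrupt condition and make the line match it. */
static void update_evtchn_irq(void)
{
    virq_line = evtchn_upcall_pending;
}

int main(void)
{
    evtchn_upcall_pending = true;
    mark_events_pending();          /* event arrives, line goes high */

    evtchn_upcall_pending = false;  /* guest handles the event */
    update_evtchn_irq();            /* next hypervisor entry lowers the line */

    printf("line is %s\n", virq_line ? "high" : "low");  /* prints "low" */
    return 0;
}

Without the resampling step the model's line would stay high forever, which
mirrors the spurious upcalls an edge-only VGIC would deliver.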