| Message ID | 1571419545-20401-13-git-send-email-Dave.Martin@arm.com |
|---|---|
| State | New |
| Series | arm64: ARMv8.5-A: Branch Target Identification support |
Normal execution of any non-branch instruction resets the PSTATE.BTYPE field to 0, so do the same thing when emulating a trapped instruction.

Branches don't trap directly, so we should never need to assign a non-zero value to BTYPE here.

Signed-off-by: Dave Martin <Dave.Martin@arm.com>

---

Changes since v2:

 * Drop the (u64) cast when masking out PSR_BTYPE_MASK in
   arm64_skip_faulting_instruction(). PSTATE may grow, but we should
   address this more generally rather than with point hacks.

 * Add { } around the if () clause that was left unbalanced by the
   previous patch.

---
 arch/arm64/include/asm/kvm_emulate.h | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index d69c1ef..f41bfdee 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -450,10 +450,12 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 
 static inline void kvm_skip_instr(struct kvm_vcpu *vcpu, bool is_wide_instr)
 {
-	if (vcpu_mode_is_32bit(vcpu))
+	if (vcpu_mode_is_32bit(vcpu)) {
 		kvm_skip_instr32(vcpu, is_wide_instr);
-	else
+	} else {
 		*vcpu_pc(vcpu) += 4;
+		*vcpu_cpsr(vcpu) &= ~PSR_BTYPE_MASK;
+	}
 
 	/* advance the singlestep state machine */
 	*vcpu_cpsr(vcpu) &= ~DBG_SPSR_SS;

-- 
2.1.4
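For readers skimming the series, below is a minimal, standalone sketch of what the AArch64 path of kvm_skip_instr() does once this patch is applied. It models PSTATE as a plain integer in userspace rather than using kernel headers; the mask values mirror the arm64 definitions (PSTATE.BTYPE occupies bits [11:10], and the software-step bit SPSR.SS is bit 21), but the function name skip_instr_aarch64() and the sample register values are purely illustrative.

```c
/*
 * Minimal userspace model of the AArch64 path of kvm_skip_instr()
 * after this patch -- NOT kernel code. The mask values mirror the
 * arm64 definitions; everything else here is illustrative.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PSR_BTYPE_MASK	0x00000c00UL	/* PSTATE.BTYPE, bits [11:10] */
#define DBG_SPSR_SS	0x00200000UL	/* SPSR.SS single-step bit, bit 21 */

/* Mirrors the else-branch plus the common tail of kvm_skip_instr(). */
static void skip_instr_aarch64(uint64_t *pc, uint64_t *cpsr)
{
	*pc += 4;			/* step over the trapped instruction */
	*cpsr &= ~PSR_BTYPE_MASK;	/* non-branch execution zeroes BTYPE */
	*cpsr &= ~DBG_SPSR_SS;		/* advance the singlestep state machine */
}

int main(void)
{
	/* Sample values only: BTYPE != 0 and SS set before the skip. */
	uint64_t pc = 0x40001000, cpsr = PSR_BTYPE_MASK | DBG_SPSR_SS;

	skip_instr_aarch64(&pc, &cpsr);
	printf("pc=0x%" PRIx64 " cpsr=0x%" PRIx64 "\n", pc, cpsr);
	return 0;
}
```

With BTYPE forced to zero on the skip path, the guest observes the same PSTATE after an emulated instruction as it would after genuinely executing a non-branch instruction.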