From patchwork Fri May 29 09:30:25 2015
From: Alex Bennée <alex.bennee@linaro.org>
To: kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, christoffer.dall@linaro.org,
	marc.zyngier@arm.com, peter.maydell@linaro.org, agraf@suse.de,
	drjones@redhat.com, pbonzini@redhat.com, zhichao.huang@linaro.org
Cc: jan.kiszka@siemens.com, dahi@linux.vnet.ibm.com, r65777@freescale.com,
	bp@suse.de, Gleb Natapov, Russell King, Catalin Marinas, Will Deacon,
	Ard Biesheuvel, Lorenzo Pieralisi, Andre Przywara, Richard Weinberger,
	linux-kernel@vger.kernel.org (open list)
Subject: [PATCH v5 09/12] KVM: arm64: introduce vcpu->arch.debug_ptr
Date: Fri, 29 May 2015 10:30:25 +0100
Message-Id: <1432891828-4816-10-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1432891828-4816-1-git-send-email-alex.bennee@linaro.org>
References: <1432891828-4816-1-git-send-email-alex.bennee@linaro.org>

This introduces a level of indirection for the debug registers. Instead
of using sys_regs[] directly we store the registers in a structure in
the vcpu. As we are no longer tied to the layout of sys_regs[] we can
make the copies size-appropriate for control and value registers.

This also entails updating the sys_regs code to access this new
structure.
Instead of passing a register index we now pass an offset into the
kvm_guest_debug_arch structure. We also need to ensure the
GET/SET_ONE_REG ioctl operations store the registers in their correct
location.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>

---
 arch/arm/kvm/arm.c                |   3 +
 arch/arm64/include/asm/kvm_asm.h  |  24 +++----
 arch/arm64/include/asm/kvm_host.h |  12 +++-
 arch/arm64/kernel/asm-offsets.c   |   6 ++
 arch/arm64/kvm/hyp.S              | 107 +++++++++++++++++---------------
 arch/arm64/kvm/sys_regs.c         | 126 +++++++++++++++++++++++++++++++-------
 6 files changed, 188 insertions(+), 90 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 9b3ed6d..0d17c7b 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -279,6 +279,9 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
 	/* Set up the timer */
 	kvm_timer_vcpu_init(vcpu);
 
+	/* Set the debug registers to be the guests */
+	vcpu->arch.debug_ptr = &vcpu->arch.vcpu_debug_state;
+
 	return 0;
 }
 
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index d6b507e..e997404 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -46,24 +46,16 @@
 #define CNTKCTL_EL1	20	/* Timer Control Register (EL1) */
 #define PAR_EL1		21	/* Physical Address Register */
 #define MDSCR_EL1	22	/* Monitor Debug System Control Register */
-#define DBGBCR0_EL1	23	/* Debug Breakpoint Control Registers (0-15) */
-#define DBGBCR15_EL1	38
-#define DBGBVR0_EL1	39	/* Debug Breakpoint Value Registers (0-15) */
-#define DBGBVR15_EL1	54
-#define DBGWCR0_EL1	55	/* Debug Watchpoint Control Registers (0-15) */
-#define DBGWCR15_EL1	70
-#define DBGWVR0_EL1	71	/* Debug Watchpoint Value Registers (0-15) */
-#define DBGWVR15_EL1	86
-#define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
+#define MDCCINT_EL1	23	/* Monitor Debug Comms Channel Interrupt Enable Reg */
 
 /* 32bit specific registers. Keep them at the end of the range */
-#define DACR32_EL2	88	/* Domain Access Control Register */
-#define IFSR32_EL2	89	/* Instruction Fault Status Register */
-#define FPEXC32_EL2	90	/* Floating-Point Exception Control Register */
-#define DBGVCR32_EL2	91	/* Debug Vector Catch Register */
-#define TEECR32_EL1	92	/* ThumbEE Configuration Register */
-#define TEEHBR32_EL1	93	/* ThumbEE Handler Base Register */
-#define NR_SYS_REGS	94
+#define DACR32_EL2	24	/* Domain Access Control Register */
+#define IFSR32_EL2	25	/* Instruction Fault Status Register */
+#define FPEXC32_EL2	26	/* Floating-Point Exception Control Register */
+#define DBGVCR32_EL2	27	/* Debug Vector Catch Register */
+#define TEECR32_EL1	28	/* ThumbEE Configuration Register */
+#define TEEHBR32_EL1	29	/* ThumbEE Handler Base Register */
+#define NR_SYS_REGS	30
 
 /* 32bit mapping */
 #define c0_MPIDR	(MPIDR_EL1 * 2)	/* MultiProcessor ID Register */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e2db6a6..e5040b6 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -108,11 +108,21 @@ struct kvm_vcpu_arch {
 	/* Exception Information */
 	struct kvm_vcpu_fault_info fault;
 
-	/* Debug state */
+	/* Guest debug state */
 	u64 debug_flags;
 
+	/*
+	 * For debugging the guest we need to keep a set of debug
+	 * registers which can override the guests own debug state
+	 * while being used. These are set via the KVM_SET_GUEST_DEBUG
+	 * ioctl.
+	 */
+	struct kvm_guest_debug_arch *debug_ptr;
+	struct kvm_guest_debug_arch vcpu_debug_state;
+
 	/* Pointer to host CPU context */
 	kvm_cpu_context_t *host_cpu_context;
+	struct kvm_guest_debug_arch host_debug_state;
 
 	/* VGIC state */
 	struct vgic_cpu vgic_cpu;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index dfb25a2..1a8e97c 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -116,10 +116,16 @@ int main(void)
   DEFINE(VCPU_FAR_EL2,		offsetof(struct kvm_vcpu, arch.fault.far_el2));
   DEFINE(VCPU_HPFAR_EL2,	offsetof(struct kvm_vcpu, arch.fault.hpfar_el2));
   DEFINE(VCPU_DEBUG_FLAGS,	offsetof(struct kvm_vcpu, arch.debug_flags));
+  DEFINE(VCPU_DEBUG_PTR,	offsetof(struct kvm_vcpu, arch.debug_ptr));
+  DEFINE(DEBUG_BCR,		offsetof(struct kvm_guest_debug_arch, dbg_bcr));
+  DEFINE(DEBUG_BVR,		offsetof(struct kvm_guest_debug_arch, dbg_bvr));
+  DEFINE(DEBUG_WCR,		offsetof(struct kvm_guest_debug_arch, dbg_wcr));
+  DEFINE(DEBUG_WVR,		offsetof(struct kvm_guest_debug_arch, dbg_wvr));
   DEFINE(VCPU_HCR_EL2,		offsetof(struct kvm_vcpu, arch.hcr_el2));
   DEFINE(VCPU_MDCR_EL2,	offsetof(struct kvm_vcpu, arch.mdcr_el2));
   DEFINE(VCPU_IRQ_LINES,	offsetof(struct kvm_vcpu, arch.irq_lines));
   DEFINE(VCPU_HOST_CONTEXT,	offsetof(struct kvm_vcpu, arch.host_cpu_context));
+  DEFINE(VCPU_HOST_DEBUG_STATE, offsetof(struct kvm_vcpu, arch.host_debug_state));
   DEFINE(VCPU_TIMER_CNTV_CTL,	offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_ctl));
   DEFINE(VCPU_TIMER_CNTV_CVAL,	offsetof(struct kvm_vcpu, arch.timer_cpu.cntv_cval));
   DEFINE(KVM_TIMER_CNTVOFF,	offsetof(struct kvm, arch.timer.cntvoff));
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 9c4897d..1940a4c 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -228,7 +228,7 @@
 	stp	x24, x25, [x3, #160]
 .endm
 
-.macro save_debug type
+.macro save_debug type regsz offsz
 	// x4: pointer to register set
 	// x5: number of registers to skip
 	// x6..x22 trashed
@@ -258,22 +258,22 @@
 	add	x22, x22, x5, lsl #2
 	br	x22
 1:
-	str	x21, [x4, #(15 * 8)]
-	str	x20, [x4, #(14 * 8)]
-	str	x19, [x4, #(13 * 8)]
-	str	x18, [x4, #(12 * 8)]
-	str	x17, [x4, #(11 * 8)]
-	str	x16, [x4, #(10 * 8)]
-	str	x15, [x4, #(9 * 8)]
-	str	x14, [x4, #(8 * 8)]
-	str	x13, [x4, #(7 * 8)]
-	str	x12, [x4, #(6 * 8)]
-	str	x11, [x4, #(5 * 8)]
-	str	x10, [x4, #(4 * 8)]
-	str	x9, [x4, #(3 * 8)]
-	str	x8, [x4, #(2 * 8)]
-	str	x7, [x4, #(1 * 8)]
-	str	x6, [x4, #(0 * 8)]
+	str	\regsz\()21, [x4, #(15 * \offsz)]
+	str	\regsz\()20, [x4, #(14 * \offsz)]
+	str	\regsz\()19, [x4, #(13 * \offsz)]
+	str	\regsz\()18, [x4, #(12 * \offsz)]
+	str	\regsz\()17, [x4, #(11 * \offsz)]
+	str	\regsz\()16, [x4, #(10 * \offsz)]
+	str	\regsz\()15, [x4, #(9 * \offsz)]
+	str	\regsz\()14, [x4, #(8 * \offsz)]
+	str	\regsz\()13, [x4, #(7 * \offsz)]
+	str	\regsz\()12, [x4, #(6 * \offsz)]
+	str	\regsz\()11, [x4, #(5 * \offsz)]
+	str	\regsz\()10, [x4, #(4 * \offsz)]
+	str	\regsz\()9, [x4, #(3 * \offsz)]
+	str	\regsz\()8, [x4, #(2 * \offsz)]
+	str	\regsz\()7, [x4, #(1 * \offsz)]
+	str	\regsz\()6, [x4, #(0 * \offsz)]
 .endm
 
 .macro restore_sysregs
@@ -318,7 +318,7 @@
 	msr	mdscr_el1, x25
 .endm
 
-.macro restore_debug type
+.macro restore_debug type regsz offsz
 	// x4: pointer to register set
 	// x5: number of registers to skip
 	// x6..x22 trashed
@@ -327,22 +327,22 @@
 	add	x22, x22, x5, lsl #2
 	br	x22
 1:
-	ldr	x21, [x4, #(15 * 8)]
-	ldr	x20, [x4, #(14 * 8)]
-	ldr	x19, [x4, #(13 * 8)]
-	ldr	x18, [x4, #(12 * 8)]
-	ldr	x17, [x4, #(11 * 8)]
-	ldr	x16, [x4, #(10 * 8)]
-	ldr	x15, [x4, #(9 * 8)]
-	ldr	x14, [x4, #(8 * 8)]
-	ldr	x13, [x4, #(7 * 8)]
-	ldr	x12, [x4, #(6 * 8)]
-	ldr	x11, [x4, #(5 * 8)]
-	ldr	x10, [x4, #(4 * 8)]
-	ldr	x9, [x4, #(3 * 8)]
-	ldr	x8, [x4, #(2 * 8)]
-	ldr	x7, [x4, #(1 * 8)]
-	ldr	x6, [x4, #(0 * 8)]
+	ldr	\regsz\()21, [x4, #(15 * \offsz)]
+	ldr	\regsz\()20, [x4, #(14 * \offsz)]
+	ldr	\regsz\()19, [x4, #(13 * \offsz)]
+	ldr	\regsz\()18, [x4, #(12 * \offsz)]
+	ldr	\regsz\()17, [x4, #(11 * \offsz)]
+	ldr	\regsz\()16, [x4, #(10 * \offsz)]
+	ldr	\regsz\()15, [x4, #(9 * \offsz)]
+	ldr	\regsz\()14, [x4, #(8 * \offsz)]
+	ldr	\regsz\()13, [x4, #(7 * \offsz)]
+	ldr	\regsz\()12, [x4, #(6 * \offsz)]
+	ldr	\regsz\()11, [x4, #(5 * \offsz)]
+	ldr	\regsz\()10, [x4, #(4 * \offsz)]
+	ldr	\regsz\()9, [x4, #(3 * \offsz)]
+	ldr	\regsz\()8, [x4, #(2 * \offsz)]
+	ldr	\regsz\()7, [x4, #(1 * \offsz)]
+	ldr	\regsz\()6, [x4, #(0 * \offsz)]
 
 	adr	x22, 1f
 	add	x22, x22, x5, lsl #2
@@ -601,6 +601,7 @@ __restore_sysregs:
 __save_debug:
 	// x0: base address for vcpu context
 	// x2: ptr to current CPU context
+	// x3: ptr to debug reg struct
 	// x4/x5: trashed
 
 	mrs	x26, id_aa64dfr0_el1
@@ -611,16 +612,16 @@ __save_debug:
 	sub	w25, w26, w25		// How many WPs to skip
 
 	mov	x5, x24
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGBCR0_EL1)
-	save_debug dbgbcr
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGBVR0_EL1)
-	save_debug dbgbvr
+	add	x4, x3, #DEBUG_BCR
+	save_debug dbgbcr w 4
+	add	x4, x3, #DEBUG_BVR
+	save_debug dbgbvr x 8
 
 	mov	x5, x25
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGWCR0_EL1)
-	save_debug dbgwcr
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGWVR0_EL1)
-	save_debug dbgwvr
+	add	x4, x3, #DEBUG_WCR
+	save_debug dbgwcr w 4
+	add	x4, x3, #DEBUG_WVR
+	save_debug dbgwvr x 8
 
 	mrs	x21, mdccint_el1
 	str	x21, [x2, #CPU_SYSREG_OFFSET(MDCCINT_EL1)]
@@ -640,16 +641,16 @@ __restore_debug:
 	sub	w25, w26, w25		// How many WPs to skip
 
 	mov	x5, x24
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGBCR0_EL1)
-	restore_debug dbgbcr
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGBVR0_EL1)
-	restore_debug dbgbvr
+	add	x4, x3, #DEBUG_BCR
+	restore_debug dbgbcr w 4
+	add	x4, x3, #DEBUG_BVR
+	restore_debug dbgbvr x 8
 
 	mov	x5, x25
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGWCR0_EL1)
-	restore_debug dbgwcr
-	add	x4, x2, #CPU_SYSREG_OFFSET(DBGWVR0_EL1)
-	restore_debug dbgwvr
+	add	x4, x3, #DEBUG_WCR
+	restore_debug dbgwcr w 4
+	add	x4, x3, #DEBUG_WVR
+	restore_debug dbgwvr x 8
 
 	ldr	x21, [x2, #CPU_SYSREG_OFFSET(MDCCINT_EL1)]
 	msr	mdccint_el1, x21
@@ -688,6 +689,7 @@ ENTRY(__kvm_vcpu_run)
 	bl __save_sysregs
 
 	compute_debug_state 1f
+	add	x3, x0, #VCPU_HOST_DEBUG_STATE
 	bl	__save_debug
 1:
 	activate_traps
@@ -703,6 +705,8 @@ ENTRY(__kvm_vcpu_run)
 	bl __restore_fpsimd
 
 	skip_debug_state x3, 1f
+	ldr	x3, [x0, #VCPU_DEBUG_PTR]
+	kern_hyp_va x3
 	bl	__restore_debug
 1:
 	restore_guest_32bit_state
@@ -723,6 +727,8 @@ __kvm_vcpu_return:
 	bl __save_sysregs
 
 	skip_debug_state x3, 1f
+	ldr	x3, [x0, #VCPU_DEBUG_PTR]
+	kern_hyp_va x3
 	bl	__save_debug
 1:
 	save_guest_32bit_state
@@ -745,6 +751,7 @@ __kvm_vcpu_return:
 	// already been saved. Note that we nuke the whole 64bit word.
 	// If we ever add more flags, we'll have to be more careful...
 	str	xzr, [x0, #VCPU_DEBUG_FLAGS]
+	add	x3, x0, #VCPU_HOST_DEBUG_STATE
 	bl	__restore_debug
 1:
 	restore_host_regs
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c370b40..405403e 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -211,6 +211,48 @@ static bool trap_debug_regs(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static inline bool trap_debug32(struct kvm_vcpu *vcpu,
+				const struct sys_reg_params *p,
+				const struct sys_reg_desc *rd)
+{
+	__u32 *r = (__u32 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	if (p->is_write) {
+		*r = *vcpu_reg(vcpu, p->Rt);
+		vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY;
+	} else {
+		*vcpu_reg(vcpu, p->Rt) = *r;
+	}
+
+	return true;
+}
+
+static inline bool trap_debug64(struct kvm_vcpu *vcpu,
+				const struct sys_reg_params *p,
+				const struct sys_reg_desc *rd)
+{
+	__u64 *r = (__u64 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	if (p->is_write) {
+		*r = *vcpu_reg(vcpu, p->Rt);
+		vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY;
+	} else {
+		*vcpu_reg(vcpu, p->Rt) = *r;
+	}
+
+	return true;
+}
+
+static inline void reset_debug32(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd)
+{
+	__u32 *r = (__u32 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	*r = rd->val;
+}
+
+static inline void reset_debug64(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd)
+{
+	__u64 *r = (__u64 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	*r = rd->val;
+}
+
 static void reset_amair_el1(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
 	u64 amair;
@@ -240,16 +282,20 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
 	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b100),	\
-	  trap_debug_regs, reset_val, (DBGBVR0_EL1 + (n)), 0 },		\
+	  trap_debug64, reset_debug64,					\
+	  offsetof(struct kvm_guest_debug_arch, dbg_bvr[(n)]), 0 },	\
 	/* DBGBCRn_EL1 */						\
 	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b101),	\
-	  trap_debug_regs, reset_val, (DBGBCR0_EL1 + (n)), 0 },		\
+	  trap_debug32, reset_debug32,					\
+	  offsetof(struct kvm_guest_debug_arch, dbg_bcr[(n)]), 0},	\
	/* DBGWVRn_EL1 */						\
 	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b110),	\
-	  trap_debug_regs, reset_val, (DBGWVR0_EL1 + (n)), 0 },		\
+	  trap_debug64, reset_debug64,					\
+	  offsetof(struct kvm_guest_debug_arch, dbg_wvr[(n)]), 0 },	\
 	/* DBGWCRn_EL1 */						\
 	{ Op0(0b10), Op1(0b000), CRn(0b0000), CRm((n)), Op2(0b111),	\
-	  trap_debug_regs, reset_val, (DBGWCR0_EL1 + (n)), 0 }
+	  trap_debug32, reset_debug32,					\
+	  offsetof(struct kvm_guest_debug_arch, dbg_wcr[(n)]), 0}
 
 /*
  * Architected system registers.
@@ -502,37 +548,23 @@ static bool trap_dbgidr(struct kvm_vcpu *vcpu,
 	}
 }
 
-static bool trap_debug32(struct kvm_vcpu *vcpu,
-			 const struct sys_reg_params *p,
-			 const struct sys_reg_desc *r)
-{
-	if (p->is_write) {
-		vcpu_cp14(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
-		vcpu->arch.debug_flags |= KVM_ARM64_DEBUG_DIRTY;
-	} else {
-		*vcpu_reg(vcpu, p->Rt) = vcpu_cp14(vcpu, r->reg);
-	}
-
-	return true;
-}
-
 #define DBG_BCR_BVR_WCR_WVR(n)					\
 	/* DBGBVRn */						\
 	{ Op1( 0), CRn( 0), CRm((n)), Op2( 4), trap_debug32,	\
-	  NULL, (cp14_DBGBVR0 + (n) * 2) },			\
+	  NULL, offsetof(struct kvm_guest_debug_arch, dbg_bvr[(n)]) },	\
 	/* DBGBCRn */						\
 	{ Op1( 0), CRn( 0), CRm((n)), Op2( 5), trap_debug32,	\
-	  NULL, (cp14_DBGBCR0 + (n) * 2) },			\
+	  NULL, offsetof(struct kvm_guest_debug_arch, dbg_bcr[(n)]) },	\
 	/* DBGWVRn */						\
 	{ Op1( 0), CRn( 0), CRm((n)), Op2( 6), trap_debug32,	\
-	  NULL, (cp14_DBGWVR0 + (n) * 2) },			\
+	  NULL, offsetof(struct kvm_guest_debug_arch, dbg_wvr[(n)]) },	\
 	/* DBGWCRn */						\
 	{ Op1( 0), CRn( 0), CRm((n)), Op2( 7), trap_debug32,	\
-	  NULL, (cp14_DBGWCR0 + (n) * 2) }
+	  NULL, offsetof(struct kvm_guest_debug_arch, dbg_wcr[(n)]) }
 
 #define DBGBXVR(n)						\
 	{ Op1( 0), CRn( 1), CRm((n)), Op2( 1), trap_debug32,	\
-	  NULL, cp14_DBGBXVR0 + n * 2 }
+	  NULL, offsetof(struct kvm_guest_debug_arch, dbg_bvr[(n)])+sizeof(__u32) }
 
 /*
  * Trapped cp14 registers. We generally ignore most of the external
@@ -1288,6 +1320,42 @@ static int demux_c15_set(u64 id, void __user *uaddr)
 	}
 }
 
+static int debug_set32(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		       const struct kvm_one_reg *reg, void __user *uaddr)
+{
+	__u32 *r = (__u32 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+		return -EFAULT;
+	return 0;
+}
+
+static int debug_set64(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		       const struct kvm_one_reg *reg, void __user *uaddr)
+{
+	__u64 *r = (__u64 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	if (copy_from_user(r, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+		return -EFAULT;
+	return 0;
+}
+
+static int debug_get32(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		       const struct kvm_one_reg *reg, void __user *uaddr)
+{
+	__u32 *r = (__u32 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+		return -EFAULT;
+	return 0;
+}
+
+static int debug_get64(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
+		       const struct kvm_one_reg *reg, void __user *uaddr)
+{
+	__u64 *r = (__u64 *)((void *)&vcpu->arch.vcpu_debug_state + rd->reg);
+	if (copy_to_user(uaddr, r, KVM_REG_SIZE(reg->id)) != 0)
+		return -EFAULT;
+	return 0;
+}
+
 int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
 {
 	const struct sys_reg_desc *r;
@@ -1303,6 +1371,12 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (!r)
 		return get_invariant_sys_reg(reg->id, uaddr);
 
+	if (r->access == trap_debug64)
+		return debug_get64(vcpu, r, reg, uaddr);
+
+	if (r->access == trap_debug32)
+		return debug_get32(vcpu, r, reg, uaddr);
+
 	return reg_to_user(uaddr, &vcpu_sys_reg(vcpu, r->reg), reg->id);
 }
 
@@ -1321,6 +1395,12 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 	if (!r)
 		return set_invariant_sys_reg(reg->id, uaddr);
 
+	if (r->access == trap_debug64)
+		return debug_set64(vcpu, r, reg, uaddr);
+
+	if (r->access == trap_debug32)
+		return debug_set32(vcpu, r, reg, uaddr);
+
 	return reg_from_user(&vcpu_sys_reg(vcpu, r->reg), uaddr, reg->id);
 }