From patchwork Wed Apr 30 00:18:45 2025
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 886823
From: Atish Patra
Date: Tue, 29 Apr 2025 17:18:45 -0700
Subject: [PATCH v2 1/3] KVM: riscv: selftests: Align the trap information with pt_regs
Message-Id: <20250429-kvm_selftest_improve-v2-1-51713f91e04a@rivosinc.com>
References: <20250429-kvm_selftest_improve-v2-0-51713f91e04a@rivosinc.com>
In-Reply-To: <20250429-kvm_selftest_improve-v2-0-51713f91e04a@rivosinc.com>
To: Anup Patel, Atish Patra, Paolo Bonzini, Shuah Khan, Paul Walmsley, Palmer Dabbelt, Alexandre Ghiti, Andrew Jones
Cc: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Atish Patra

The current exception register structure (ex_regs) in the selftests is missing a few registers (e.g. stval). Instead of adding them manually, change ex_regs to align with pt_regs to make it future proof.
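For illustration only (not part of this patch): with the pt_regs layout, a guest handler can also consume stval through regs->badaddr. The handler name and the fault_addr variable below are hypothetical, but the field names follow the structure added by this patch.

static uint64_t fault_addr;

/* Hypothetical guest handler: record stval (badaddr) and skip the faulting instruction. */
static void guest_fault_handler(struct pt_regs *regs)
{
	WRITE_ONCE(fault_addr, regs->badaddr);
	regs->epc += 4;
}

With the old ex_regs layout there was no badaddr/stval field, so a handler like this could not report the faulting address.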
Suggested-by: Andrew Jones Signed-off-by: Atish Patra --- .../selftests/kvm/include/riscv/processor.h | 10 +- tools/testing/selftests/kvm/lib/riscv/handlers.S | 164 ++++++++++++--------- tools/testing/selftests/kvm/lib/riscv/processor.c | 2 +- tools/testing/selftests/kvm/riscv/arch_timer.c | 2 +- tools/testing/selftests/kvm/riscv/ebreak_test.c | 2 +- tools/testing/selftests/kvm/riscv/sbi_pmu_test.c | 4 +- 6 files changed, 104 insertions(+), 80 deletions(-) diff --git a/tools/testing/selftests/kvm/include/riscv/processor.h b/tools/testing/selftests/kvm/include/riscv/processor.h index 5f389166338c..1b5aef87de0f 100644 --- a/tools/testing/selftests/kvm/include/riscv/processor.h +++ b/tools/testing/selftests/kvm/include/riscv/processor.h @@ -60,7 +60,8 @@ static inline bool __vcpu_has_sbi_ext(struct kvm_vcpu *vcpu, uint64_t sbi_ext) return __vcpu_has_ext(vcpu, RISCV_SBI_EXT_REG(sbi_ext)); } -struct ex_regs { +struct pt_regs { + unsigned long epc; unsigned long ra; unsigned long sp; unsigned long gp; @@ -92,16 +93,19 @@ struct ex_regs { unsigned long t4; unsigned long t5; unsigned long t6; - unsigned long epc; + /* Supervisor/Machine CSRs */ unsigned long status; + unsigned long badaddr; unsigned long cause; + /* a0 value before the syscall */ + unsigned long orig_a0; }; #define NR_VECTORS 2 #define NR_EXCEPTIONS 32 #define EC_MASK (NR_EXCEPTIONS - 1) -typedef void(*exception_handler_fn)(struct ex_regs *); +typedef void(*exception_handler_fn)(struct pt_regs *); void vm_init_vector_tables(struct kvm_vm *vm); void vcpu_init_vector_tables(struct kvm_vcpu *vcpu); diff --git a/tools/testing/selftests/kvm/lib/riscv/handlers.S b/tools/testing/selftests/kvm/lib/riscv/handlers.S index aa0abd3f35bb..9c99b258cae7 100644 --- a/tools/testing/selftests/kvm/lib/riscv/handlers.S +++ b/tools/testing/selftests/kvm/lib/riscv/handlers.S @@ -9,86 +9,106 @@ #include +#ifdef __ASSEMBLY__ +#define __ASM_STR(x) x +#else +#define __ASM_STR(x) #x +#endif + +#if __riscv_xlen == 64 +#define __REG_SEL(a, b) __ASM_STR(a) +#elif __riscv_xlen == 32 +#define __REG_SEL(a, b) __ASM_STR(b) +#else +#error "Unexpected __riscv_xlen" +#endif + +#define REG_L __REG_SEL(ld, lw) +#define REG_S __REG_SEL(sd, sw) + .macro save_context - addi sp, sp, (-8*34) - sd x1, 0(sp) - sd x2, 8(sp) - sd x3, 16(sp) - sd x4, 24(sp) - sd x5, 32(sp) - sd x6, 40(sp) - sd x7, 48(sp) - sd x8, 56(sp) - sd x9, 64(sp) - sd x10, 72(sp) - sd x11, 80(sp) - sd x12, 88(sp) - sd x13, 96(sp) - sd x14, 104(sp) - sd x15, 112(sp) - sd x16, 120(sp) - sd x17, 128(sp) - sd x18, 136(sp) - sd x19, 144(sp) - sd x20, 152(sp) - sd x21, 160(sp) - sd x22, 168(sp) - sd x23, 176(sp) - sd x24, 184(sp) - sd x25, 192(sp) - sd x26, 200(sp) - sd x27, 208(sp) - sd x28, 216(sp) - sd x29, 224(sp) - sd x30, 232(sp) - sd x31, 240(sp) + addi sp, sp, (-8*36) + REG_S x1, 8(sp) + REG_S x2, 16(sp) + REG_S x3, 24(sp) + REG_S x4, 32(sp) + REG_S x5, 40(sp) + REG_S x6, 48(sp) + REG_S x7, 56(sp) + REG_S x8, 64(sp) + REG_S x9, 72(sp) + REG_S x10, 80(sp) + REG_S x11, 88(sp) + REG_S x12, 96(sp) + REG_S x13, 104(sp) + REG_S x14, 112(sp) + REG_S x15, 120(sp) + REG_S x16, 128(sp) + REG_S x17, 136(sp) + REG_S x18, 144(sp) + REG_S x19, 152(sp) + REG_S x20, 160(sp) + REG_S x21, 168(sp) + REG_S x22, 176(sp) + REG_S x23, 184(sp) + REG_S x24, 192(sp) + REG_S x25, 200(sp) + REG_S x26, 208(sp) + REG_S x27, 216(sp) + REG_S x28, 224(sp) + REG_S x29, 232(sp) + REG_S x30, 240(sp) + REG_S x31, 248(sp) csrr s0, CSR_SEPC csrr s1, CSR_SSTATUS - csrr s2, CSR_SCAUSE - sd s0, 248(sp) - sd s1, 256(sp) - sd s2, 264(sp) + csrr 
s2, CSR_STVAL + csrr s3, CSR_SCAUSE + REG_S s0, 0(sp) + REG_S s1, 256(sp) + REG_S s2, 264(sp) + REG_S s3, 272(sp) .endm .macro restore_context - ld s2, 264(sp) - ld s1, 256(sp) - ld s0, 248(sp) - csrw CSR_SCAUSE, s2 + REG_L s3, 272(sp) + REG_L s2, 264(sp) + REG_L s1, 256(sp) + REG_L s0, 0(sp) + csrw CSR_SCAUSE, s3 csrw CSR_SSTATUS, s1 csrw CSR_SEPC, s0 - ld x31, 240(sp) - ld x30, 232(sp) - ld x29, 224(sp) - ld x28, 216(sp) - ld x27, 208(sp) - ld x26, 200(sp) - ld x25, 192(sp) - ld x24, 184(sp) - ld x23, 176(sp) - ld x22, 168(sp) - ld x21, 160(sp) - ld x20, 152(sp) - ld x19, 144(sp) - ld x18, 136(sp) - ld x17, 128(sp) - ld x16, 120(sp) - ld x15, 112(sp) - ld x14, 104(sp) - ld x13, 96(sp) - ld x12, 88(sp) - ld x11, 80(sp) - ld x10, 72(sp) - ld x9, 64(sp) - ld x8, 56(sp) - ld x7, 48(sp) - ld x6, 40(sp) - ld x5, 32(sp) - ld x4, 24(sp) - ld x3, 16(sp) - ld x2, 8(sp) - ld x1, 0(sp) - addi sp, sp, (8*34) + REG_L x31, 248(sp) + REG_L x30, 240(sp) + REG_L x29, 232(sp) + REG_L x28, 224(sp) + REG_L x27, 216(sp) + REG_L x26, 208(sp) + REG_L x25, 200(sp) + REG_L x24, 192(sp) + REG_L x23, 184(sp) + REG_L x22, 176(sp) + REG_L x21, 168(sp) + REG_L x20, 160(sp) + REG_L x19, 152(sp) + REG_L x18, 144(sp) + REG_L x17, 136(sp) + REG_L x16, 128(sp) + REG_L x15, 120(sp) + REG_L x14, 112(sp) + REG_L x13, 104(sp) + REG_L x12, 96(sp) + REG_L x11, 88(sp) + REG_L x10, 80(sp) + REG_L x9, 72(sp) + REG_L x8, 64(sp) + REG_L x7, 56(sp) + REG_L x6, 48(sp) + REG_L x5, 40(sp) + REG_L x4, 32(sp) + REG_L x3, 24(sp) + REG_L x2, 16(sp) + REG_L x1, 8(sp) + addi sp, sp, (8*36) .endm .balign 4 diff --git a/tools/testing/selftests/kvm/lib/riscv/processor.c b/tools/testing/selftests/kvm/lib/riscv/processor.c index dd663bcf0cc0..2eac7d4b59e9 100644 --- a/tools/testing/selftests/kvm/lib/riscv/processor.c +++ b/tools/testing/selftests/kvm/lib/riscv/processor.c @@ -402,7 +402,7 @@ struct handlers { exception_handler_fn exception_handlers[NR_VECTORS][NR_EXCEPTIONS]; }; -void route_exception(struct ex_regs *regs) +void route_exception(struct pt_regs *regs) { struct handlers *handlers = (struct handlers *)exception_handlers; int vector = 0, ec; diff --git a/tools/testing/selftests/kvm/riscv/arch_timer.c b/tools/testing/selftests/kvm/riscv/arch_timer.c index 9e370800a6a2..f962fefc48fa 100644 --- a/tools/testing/selftests/kvm/riscv/arch_timer.c +++ b/tools/testing/selftests/kvm/riscv/arch_timer.c @@ -15,7 +15,7 @@ static int timer_irq = IRQ_S_TIMER; -static void guest_irq_handler(struct ex_regs *regs) +static void guest_irq_handler(struct pt_regs *regs) { uint64_t xcnt, xcnt_diff_us, cmp; unsigned int intid = regs->cause & ~CAUSE_IRQ_FLAG; diff --git a/tools/testing/selftests/kvm/riscv/ebreak_test.c b/tools/testing/selftests/kvm/riscv/ebreak_test.c index cfed6c727bfc..739d17befb5a 100644 --- a/tools/testing/selftests/kvm/riscv/ebreak_test.c +++ b/tools/testing/selftests/kvm/riscv/ebreak_test.c @@ -27,7 +27,7 @@ static void guest_code(void) GUEST_DONE(); } -static void guest_breakpoint_handler(struct ex_regs *regs) +static void guest_breakpoint_handler(struct pt_regs *regs) { WRITE_ONCE(sw_bp_addr, regs->epc); regs->epc += 4; diff --git a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c index 03406de4989d..6e66833e5941 100644 --- a/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c +++ b/tools/testing/selftests/kvm/riscv/sbi_pmu_test.c @@ -128,7 +128,7 @@ static void stop_counter(unsigned long counter, unsigned long stop_flags) "Unable to stop counter %ld error %ld\n", counter, ret.error); } 
-static void guest_illegal_exception_handler(struct ex_regs *regs) +static void guest_illegal_exception_handler(struct pt_regs *regs) { __GUEST_ASSERT(regs->cause == EXC_INST_ILLEGAL, "Unexpected exception handler %lx\n", regs->cause); @@ -138,7 +138,7 @@ static void guest_illegal_exception_handler(struct ex_regs *regs) regs->epc += 4; } -static void guest_irq_handler(struct ex_regs *regs) +static void guest_irq_handler(struct pt_regs *regs) { unsigned int irq_num = regs->cause & ~CAUSE_IRQ_FLAG; struct riscv_pmu_snapshot_data *snapshot_data = snapshot_gva;
From patchwork Wed Apr 30 00:18:47 2025
X-Patchwork-Submitter: Atish Patra
X-Patchwork-Id: 886822
From: Atish Patra
Date: Tue, 29 Apr 2025 17:18:47 -0700
Subject: [PATCH v2 3/3] KVM: riscv: selftests: Add vector extension tests
Message-Id: <20250429-kvm_selftest_improve-v2-3-51713f91e04a@rivosinc.com>
References: <20250429-kvm_selftest_improve-v2-0-51713f91e04a@rivosinc.com>
In-Reply-To: <20250429-kvm_selftest_improve-v2-0-51713f91e04a@rivosinc.com>
To: Anup Patel, Atish Patra, Paolo Bonzini, Shuah Khan, Paul Walmsley, Palmer Dabbelt, Alexandre Ghiti, Andrew Jones
Cc: kvm@vger.kernel.org, kvm-riscv@lists.infradead.org, linux-riscv@lists.infradead.org, linux-kselftest@vger.kernel.org, linux-kernel@vger.kernel.org, Atish Patra

Add vector-related tests using the standard ISA extension test template. However, the vector registers are a bit tricky: their length varies with the system's vlenb value. That's why the register macros are defined with a default size and overridden with the actual value at runtime.
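For illustration only (not part of this patch): the size field of a vector register ID is derived from vlenb in the same way override_vector_reg_size() does below; the helper name vector_reg_id() is hypothetical.

/* Hypothetical helper: build a vector register ID whose size field matches vlenb. */
static __u64 vector_reg_id(int index, unsigned long vlenb)
{
	/* vlenb is a power of two; log2(vlenb) is the KVM_REG_SIZE exponent */
	__u64 size = (__u64)__builtin_ctzl(vlenb) << KVM_REG_SIZE_SHIFT;

	return KVM_REG_RISCV | KVM_REG_RISCV_VECTOR | size |
	       KVM_REG_RISCV_VECTOR_REG(index);
}

With vlenb = 16 (VLEN = 128) the exponent is 4, i.e. KVM_REG_SIZE_U128, which matches the defaults in vector_regs[]; larger VLEN values move up to KVM_REG_SIZE_U256 and beyond.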
Reviewed-by: Anup Patel Signed-off-by: Atish Patra --- tools/testing/selftests/kvm/riscv/get-reg-list.c | 133 +++++++++++++++++++++++ 1 file changed, 133 insertions(+) diff --git a/tools/testing/selftests/kvm/riscv/get-reg-list.c b/tools/testing/selftests/kvm/riscv/get-reg-list.c index 569f2d67c9b8..814dd981ce0b 100644 --- a/tools/testing/selftests/kvm/riscv/get-reg-list.c +++ b/tools/testing/selftests/kvm/riscv/get-reg-list.c @@ -17,6 +17,15 @@ enum { VCPU_FEATURE_SBI_EXT, }; +enum { + KVM_RISC_V_REG_OFFSET_VSTART = 0, + KVM_RISC_V_REG_OFFSET_VL, + KVM_RISC_V_REG_OFFSET_VTYPE, + KVM_RISC_V_REG_OFFSET_VCSR, + KVM_RISC_V_REG_OFFSET_VLENB, + KVM_RISC_V_REG_OFFSET_MAX, +}; + static bool isa_ext_cant_disable[KVM_RISCV_ISA_EXT_MAX]; bool filter_reg(__u64 reg) @@ -143,6 +152,39 @@ bool check_reject_set(int err) return err == EINVAL; } +static int override_vector_reg_size(struct kvm_vcpu *vcpu, struct vcpu_reg_sublist *s, + uint64_t feature) +{ + unsigned long vlenb_reg = 0; + int rc; + u64 reg, size; + + /* Enable V extension so that we can get the vlenb register */ + rc = __vcpu_set_reg(vcpu, feature, 1); + if (rc) + return rc; + + __vcpu_get_reg(vcpu, s->regs[KVM_RISC_V_REG_OFFSET_VLENB], &vlenb_reg); + + if (!vlenb_reg) { + TEST_FAIL("Can't compute vector register size from zero vlenb\n"); + return -EPERM; + } + + size = __builtin_ctzl(vlenb_reg); + size <<= KVM_REG_SIZE_SHIFT; + + for (int i = 0; i < 32; i++) { + reg = KVM_REG_RISCV | KVM_REG_RISCV_VECTOR | size | KVM_REG_RISCV_VECTOR_REG(i); + s->regs[KVM_RISC_V_REG_OFFSET_MAX + i] = reg; + } + + /* We should assert if disabling failed here while enabling succeeded before */ + vcpu_set_reg(vcpu, feature, 0); + + return 0; +} + void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) { unsigned long isa_ext_state[KVM_RISCV_ISA_EXT_MAX] = { 0 }; @@ -172,6 +214,13 @@ void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) if (!s->feature) continue; + if (s->feature == KVM_RISCV_ISA_EXT_V) { + feature = RISCV_ISA_EXT_REG(s->feature); + rc = override_vector_reg_size(vcpu, s, feature); + if (rc) + goto skip; + } + switch (s->feature_type) { case VCPU_FEATURE_ISA_EXT: feature = RISCV_ISA_EXT_REG(s->feature); @@ -186,6 +235,7 @@ void finalize_vcpu(struct kvm_vcpu *vcpu, struct vcpu_reg_list *c) /* Try to enable the desired extension */ __vcpu_set_reg(vcpu, feature, 1); +skip: /* Double check whether the desired extension was enabled */ __TEST_REQUIRE(__vcpu_has_ext(vcpu, feature), "%s not available, skipping tests", s->name); @@ -410,6 +460,35 @@ static const char *fp_d_id_to_str(const char *prefix, __u64 id) return strdup_printf("%lld /* UNKNOWN */", reg_off); } +static const char *vector_id_to_str(const char *prefix, __u64 id) +{ + /* reg_off is the offset into struct __riscv_v_ext_state */ + __u64 reg_off = id & ~(REG_MASK | KVM_REG_RISCV_VECTOR); + int reg_index = 0; + + assert((id & KVM_REG_RISCV_TYPE_MASK) == KVM_REG_RISCV_VECTOR); + + if (reg_off >= KVM_REG_RISCV_VECTOR_REG(0)) + reg_index = reg_off - KVM_REG_RISCV_VECTOR_REG(0); + switch (reg_off) { + case KVM_REG_RISCV_VECTOR_REG(0) ... 
+ KVM_REG_RISCV_VECTOR_REG(31): + return strdup_printf("KVM_REG_RISCV_VECTOR_REG(%d)", reg_index); + case KVM_REG_RISCV_VECTOR_CSR_REG(vstart): + return "KVM_REG_RISCV_VECTOR_CSR_REG(vstart)"; + case KVM_REG_RISCV_VECTOR_CSR_REG(vl): + return "KVM_REG_RISCV_VECTOR_CSR_REG(vl)"; + case KVM_REG_RISCV_VECTOR_CSR_REG(vtype): + return "KVM_REG_RISCV_VECTOR_CSR_REG(vtype)"; + case KVM_REG_RISCV_VECTOR_CSR_REG(vcsr): + return "KVM_REG_RISCV_VECTOR_CSR_REG(vcsr)"; + case KVM_REG_RISCV_VECTOR_CSR_REG(vlenb): + return "KVM_REG_RISCV_VECTOR_CSR_REG(vlenb)"; + } + + return strdup_printf("%lld /* UNKNOWN */", reg_off); +} + #define KVM_ISA_EXT_ARR(ext) \ [KVM_RISCV_ISA_EXT_##ext] = "KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_" #ext @@ -639,6 +718,9 @@ void print_reg(const char *prefix, __u64 id) case KVM_REG_SIZE_U128: reg_size = "KVM_REG_SIZE_U128"; break; + case KVM_REG_SIZE_U256: + reg_size = "KVM_REG_SIZE_U256"; + break; default: printf("\tKVM_REG_RISCV | (%lld << KVM_REG_SIZE_SHIFT) | 0x%llx /* UNKNOWN */,\n", (id & KVM_REG_SIZE_MASK) >> KVM_REG_SIZE_SHIFT, id & ~REG_MASK); @@ -670,6 +752,10 @@ void print_reg(const char *prefix, __u64 id) printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_FP_D | %s,\n", reg_size, fp_d_id_to_str(prefix, id)); break; + case KVM_REG_RISCV_VECTOR: + printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_VECTOR | %s,\n", + reg_size, vector_id_to_str(prefix, id)); + break; case KVM_REG_RISCV_ISA_EXT: printf("\tKVM_REG_RISCV | %s | KVM_REG_RISCV_ISA_EXT | %s,\n", reg_size, isa_ext_id_to_str(prefix, id)); @@ -874,6 +960,48 @@ static __u64 fp_d_regs[] = { KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_D, }; +/* Define a default vector registers with length. This will be overwritten at runtime */ +static __u64 vector_regs[] = { + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_CSR_REG(vstart), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_CSR_REG(vl), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_CSR_REG(vtype), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_CSR_REG(vcsr), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_CSR_REG(vlenb), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(0), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(1), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(2), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(3), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(4), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(5), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(6), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(7), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(8), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(9), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(10), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(11), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(12), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | 
KVM_REG_RISCV_VECTOR_REG(13), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(14), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(15), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(16), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(17), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(18), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(19), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(20), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(21), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(22), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(23), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(24), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(25), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(26), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(27), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(28), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(29), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(30), + KVM_REG_RISCV | KVM_REG_SIZE_U128 | KVM_REG_RISCV_VECTOR | KVM_REG_RISCV_VECTOR_REG(31), + KVM_REG_RISCV | KVM_REG_SIZE_ULONG | KVM_REG_RISCV_ISA_EXT | KVM_REG_RISCV_ISA_SINGLE | KVM_RISCV_ISA_EXT_V, +}; + #define SUBLIST_BASE \ {"base", .regs = base_regs, .regs_n = ARRAY_SIZE(base_regs), \ .skips_set = base_skips_set, .skips_set_n = ARRAY_SIZE(base_skips_set),} @@ -898,6 +1026,9 @@ static __u64 fp_d_regs[] = { {"fp_d", .feature = KVM_RISCV_ISA_EXT_D, .regs = fp_d_regs, \ .regs_n = ARRAY_SIZE(fp_d_regs),} +#define SUBLIST_V \ + {"v", .feature = KVM_RISCV_ISA_EXT_V, .regs = vector_regs, .regs_n = ARRAY_SIZE(vector_regs),} + #define KVM_ISA_EXT_SIMPLE_CONFIG(ext, extu) \ static __u64 regs_##ext[] = { \ KVM_REG_RISCV | KVM_REG_SIZE_ULONG | \ @@ -966,6 +1097,7 @@ KVM_SBI_EXT_SIMPLE_CONFIG(susp, SUSP); KVM_ISA_EXT_SUBLIST_CONFIG(aia, AIA); KVM_ISA_EXT_SUBLIST_CONFIG(fp_f, FP_F); KVM_ISA_EXT_SUBLIST_CONFIG(fp_d, FP_D); +KVM_ISA_EXT_SUBLIST_CONFIG(v, V); KVM_ISA_EXT_SIMPLE_CONFIG(h, H); KVM_ISA_EXT_SIMPLE_CONFIG(smnpm, SMNPM); KVM_ISA_EXT_SUBLIST_CONFIG(smstateen, SMSTATEEN); @@ -1040,6 +1172,7 @@ struct vcpu_reg_list *vcpu_configs[] = { &config_fp_f, &config_fp_d, &config_h, + &config_v, &config_smnpm, &config_smstateen, &config_sscofpmf,