From patchwork Mon Jun 17 16:12:08 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 804894
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com
Subject: [PATCH 1/3] target/i386: Introduce x86_mmu_index_{kernel_,}pl
Date: Mon, 17 Jun 2024 09:12:08 -0700
Message-Id: <20240617161210.4639-2-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617161210.4639-1-richard.henderson@linaro.org>
References: <20240617161210.4639-1-richard.henderson@linaro.org>

Disconnect mmu index computation from the current pl as stored in
env->hflags.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/i386/cpu.h | 12 +++---------
 target/i386/cpu.c | 27 ++++++++++++++++++++++++---
 2 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 8fe28b67e0..a528c30616 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -2432,15 +2432,9 @@ static inline bool is_mmu_index_32(int mmu_index)
     return mmu_index & 1;
 }
 
-static inline int cpu_mmu_index_kernel(CPUX86State *env)
-{
-    int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1;
-    int mmu_index_base =
-        !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
-        ((env->hflags & HF_CPL_MASK) < 3 && (env->eflags & AC_MASK)) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX;
-
-    return mmu_index_base + mmu_index_32;
-}
+int x86_mmu_index_pl(CPUX86State *env, unsigned pl);
+int x86_mmu_index_kernel_pl(CPUX86State *env, unsigned pl);
+int cpu_mmu_index_kernel(CPUX86State *env);
 
 #define CC_DST (env->cc_dst)
 #define CC_SRC (env->cc_src)
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 7466217d5e..ee7767046d 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -8107,18 +8107,39 @@ static bool x86_cpu_has_work(CPUState *cs)
     return x86_cpu_pending_interrupt(cs, cs->interrupt_request) != 0;
 }
 
-static int x86_cpu_mmu_index(CPUState *cs, bool ifetch)
+int x86_mmu_index_pl(CPUX86State *env, unsigned pl)
 {
-    CPUX86State *env = cpu_env(cs);
     int mmu_index_32 = (env->hflags & HF_CS64_MASK) ? 0 : 1;
     int mmu_index_base =
-        (env->hflags & HF_CPL_MASK) == 3 ? MMU_USER64_IDX :
+        pl == 3 ? MMU_USER64_IDX :
         !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
         (env->eflags & AC_MASK) ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX;
 
     return mmu_index_base + mmu_index_32;
 }
 
+static int x86_cpu_mmu_index(CPUState *cs, bool ifetch)
+{
+    CPUX86State *env = cpu_env(cs);
+    return x86_mmu_index_pl(env, env->hflags & HF_CPL_MASK);
+}
+
+int x86_mmu_index_kernel_pl(CPUX86State *env, unsigned pl)
+{
+    int mmu_index_32 = (env->hflags & HF_LMA_MASK) ? 0 : 1;
+    int mmu_index_base =
+        !(env->hflags & HF_SMAP_MASK) ? MMU_KNOSMAP64_IDX :
+        (pl < 3 && (env->eflags & AC_MASK)
+         ? MMU_KNOSMAP64_IDX : MMU_KSMAP64_IDX);
+
+    return mmu_index_base + mmu_index_32;
+}
+
+int cpu_mmu_index_kernel(CPUX86State *env)
+{
+    return x86_mmu_index_kernel_pl(env, env->hflags & HF_CPL_MASK);
+}
+
 static void x86_disas_set_info(CPUState *cs, disassemble_info *info)
 {
     X86CPU *cpu = X86_CPU(cs);
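To make the selection logic easier to follow outside the diff, here is a
standalone sketch of how x86_mmu_index_pl() picks an index from an explicit
privilege level. It is not part of the patch: the Toy* struct, the flag bits,
and the index values are invented stand-ins for CPUX86State and the MMU_*
constants, arranged only so that base + 1 is the 32-bit variant, matching the
"mmu_index & 1" convention shown in cpu.h.

/* Standalone model of the x86_mmu_index_pl() selection logic.
 * All names and values here are illustrative stand-ins, not QEMU's. */
#include <stdio.h>

enum {
    TOY_MMU_KSMAP64_IDX   = 0,  /* even = 64-bit variant */
    TOY_MMU_KSMAP32_IDX   = 1,  /* odd  = 32-bit variant */
    TOY_MMU_USER64_IDX    = 2,
    TOY_MMU_USER32_IDX    = 3,
    TOY_MMU_KNOSMAP64_IDX = 4,
    TOY_MMU_KNOSMAP32_IDX = 5
};

#define TOY_HF_CS64_MASK (1u << 0)  /* stand-in flag bits */
#define TOY_HF_SMAP_MASK (1u << 1)
#define TOY_AC_MASK      (1u << 2)

typedef struct {
    unsigned hflags;   /* CS64 / SMAP state */
    unsigned eflags;   /* EFLAGS.AC */
} ToyEnv;

/* Same shape as the new helper: the privilege level is an argument
 * instead of being read back out of hflags. */
static int toy_mmu_index_pl(const ToyEnv *env, unsigned pl)
{
    int mmu_index_32 = (env->hflags & TOY_HF_CS64_MASK) ? 0 : 1;
    int mmu_index_base =
        pl == 3 ? TOY_MMU_USER64_IDX :
        !(env->hflags & TOY_HF_SMAP_MASK) ? TOY_MMU_KNOSMAP64_IDX :
        (env->eflags & TOY_AC_MASK) ? TOY_MMU_KNOSMAP64_IDX
                                    : TOY_MMU_KSMAP64_IDX;

    return mmu_index_base + mmu_index_32;
}

int main(void)
{
    /* 64-bit code segment, SMAP enabled, EFLAGS.AC clear. */
    ToyEnv env = { .hflags = TOY_HF_CS64_MASK | TOY_HF_SMAP_MASK, .eflags = 0 };

    /* Same CPU state, different privilege levels -> different indexes. */
    printf("pl=3 -> %d (user)\n", toy_mmu_index_pl(&env, 3));
    printf("pl=0 -> %d (kernel, SMAP enforced)\n", toy_mmu_index_pl(&env, 0));
    return 0;
}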
From patchwork Mon Jun 17 16:12:09 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 804891
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com
Subject: [PATCH 2/3] target/i386: Remove SEG_ADDL
Date: Mon, 17 Jun 2024 09:12:09 -0700
Message-Id: <20240617161210.4639-3-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617161210.4639-1-richard.henderson@linaro.org>
References: <20240617161210.4639-1-richard.henderson@linaro.org>

This truncation is now handled by MMU_*32_IDX, which is how this was
working for PUSHW/POPW, which did not use SEG_ADDL.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/i386/tcg/seg_helper.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index 715db1f232..8884d82b33 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -579,10 +579,6 @@ int exception_has_error_code(int intno)
     } while (0)
 #endif
 
-/* in 64-bit machines, this can overflow. So this segment addition macro
- * can be used to trim the value to 32-bit whenever needed */
-#define SEG_ADDL(ssp, sp, sp_mask) ((uint32_t)((ssp) + (sp & (sp_mask))))
-
 /* XXX: add a is_user flag to have proper security support */
 #define PUSHW_RA(ssp, sp, sp_mask, val, ra) \
     { \
         sp -= 2; \
@@ -593,7 +589,7 @@ int exception_has_error_code(int intno)
 #define PUSHL_RA(ssp, sp, sp_mask, val, ra) \
     { \
         sp -= 4; \
-        cpu_stl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), (uint32_t)(val), ra); \
+        cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \
     }
 
 #define POPW_RA(ssp, sp, sp_mask, val, ra) \
     { \
@@ -604,7 +600,7 @@ int exception_has_error_code(int intno)
 
 #define POPL_RA(ssp, sp, sp_mask, val, ra) \
     { \
-        val = (uint32_t)cpu_ldl_kernel_ra(env, SEG_ADDL(ssp, sp, sp_mask), ra); \
+        val = (uint32_t)cpu_ldl_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
         sp += 4; \
     }
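For reference, the removed macro only existed to truncate the 64-bit sum of
segment base and masked stack pointer down to 32 bits. The standalone snippet
below is not from the patch (the addresses are arbitrary); it shows the
difference between the raw sum and what SEG_ADDL produced, which is the
truncation that a MMU_*32_IDX mmu index now applies on the load/store path
itself.

/* What the removed SEG_ADDL macro computed, stand-alone for comparison. */
#include <stdint.h>
#include <stdio.h>

/* The removed macro, reproduced verbatim from the old code. */
#define SEG_ADDL(ssp, sp, sp_mask) ((uint32_t)((ssp) + (sp & (sp_mask))))

int main(void)
{
    uint64_t ssp = 0xffff0000u;      /* segment base */
    uint64_t sp  = 0x00012344u;      /* stack pointer */
    uint64_t sp_mask = 0xffffffffu;  /* 32-bit stack segment */

    uint64_t full  = ssp + (sp & sp_mask);         /* what PUSHL/POPL now pass */
    uint32_t trunc = SEG_ADDL(ssp, sp, sp_mask);   /* what SEG_ADDL produced */

    /* With a 32-bit mmu index the access path already truncates the
     * address, so both forms end up referring to the same location. */
    printf("full=0x%llx truncated=0x%x\n",
           (unsigned long long)full, (unsigned)trunc);
    return 0;
}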
From patchwork Mon Jun 17 16:12:10 2024
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 804893
From: Richard Henderson <richard.henderson@linaro.org>
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com
Subject: [PATCH 3/3] target/i386: Reorg push/pop within seg_helper.c
Date: Mon, 17 Jun 2024 09:12:10 -0700
Message-Id: <20240617161210.4639-4-richard.henderson@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617161210.4639-1-richard.henderson@linaro.org>
References: <20240617161210.4639-1-richard.henderson@linaro.org>

Use a structure to contain the stack parameters, env, unwind return
address, and mmu index.  Rewrite the macros into functions.

Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
 target/i386/tcg/seg_helper.c | 465 +++++++++++++++++++----------------
 1 file changed, 253 insertions(+), 212 deletions(-)

diff --git a/target/i386/tcg/seg_helper.c b/target/i386/tcg/seg_helper.c
index 8884d82b33..d5bacd25f5 100644
--- a/target/i386/tcg/seg_helper.c
+++ b/target/i386/tcg/seg_helper.c
@@ -579,35 +579,47 @@ int exception_has_error_code(int intno)
     } while (0)
 #endif
 
-/* XXX: add a is_user flag to have proper security support */
-#define PUSHW_RA(ssp, sp, sp_mask, val, ra) \
-    { \
-        sp -= 2; \
-        cpu_stw_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \
-    }
+typedef struct PushPop
+{
+    CPUX86State *env;
+    uintptr_t ra;
+    target_ulong ss_base;
+    target_ulong sp;
+    target_ulong sp_mask;
+    int mmu_index;
+} PushPop;
 
-#define PUSHL_RA(ssp, sp, sp_mask, val, ra) \
-    { \
-        sp -= 4; \
-        cpu_stl_kernel_ra(env, (ssp) + (sp & (sp_mask)), (val), ra); \
-    }
+static void pushw(PushPop *pp, uint16_t val)
+{
+    pp->sp -= 2;
+    cpu_stw_mmuidx_ra(pp->env, pp->ss_base + (pp->sp & pp->sp_mask),
+                      val, pp->mmu_index, pp->ra);
+}
 
-#define POPW_RA(ssp, sp, sp_mask, val, ra) \
-    { \
-        val = cpu_lduw_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
-        sp += 2; \
-    }
+static void pushl(PushPop *pp, uint32_t val)
+{
+    pp->sp -= 4;
+    cpu_stl_mmuidx_ra(pp->env, pp->ss_base + (pp->sp & pp->sp_mask),
+                      val, pp->mmu_index, pp->ra);
+}
 
-#define POPL_RA(ssp, sp, sp_mask, val, ra) \
-    { \
-        val = (uint32_t)cpu_ldl_kernel_ra(env, (ssp) + (sp & (sp_mask)), ra); \
-        sp += 4; \
-    }
+static uint16_t popw(PushPop *pp)
+{
+    uint16_t ret = cpu_lduw_mmuidx_ra(pp->env,
+                                      pp->ss_base + (pp->sp & pp->sp_mask),
+                                      pp->mmu_index, pp->ra);
+    pp->sp += 2;
+    return ret;
+}
 
-#define PUSHW(ssp, sp, sp_mask, val) PUSHW_RA(ssp, sp, sp_mask, val, 0)
-#define PUSHL(ssp, sp, sp_mask, val) PUSHL_RA(ssp, sp, sp_mask, val, 0)
-#define POPW(ssp, sp, sp_mask, val) POPW_RA(ssp, sp, sp_mask, val, 0)
-#define POPL(ssp, sp, sp_mask, val) POPL_RA(ssp, sp, sp_mask, val, 0)
+static uint32_t popl(PushPop *pp)
+{
+    uint32_t ret = cpu_ldl_mmuidx_ra(pp->env,
+                                     pp->ss_base + (pp->sp & pp->sp_mask),
+                                     pp->mmu_index, pp->ra);
+    pp->sp += 4;
+    return ret;
+}
 
 /* protected mode interrupt */
 static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
@@ -615,12 +627,13 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
                                    int is_hw)
 {
     SegmentCache *dt;
-    target_ulong ptr, ssp;
+    target_ulong ptr;
     int type, dpl, selector, ss_dpl, cpl;
     int has_error_code, new_stack, shift;
-    uint32_t e1, e2, offset, ss = 0, esp, ss_e1 = 0, ss_e2 = 0;
-    uint32_t old_eip, sp_mask, eflags;
+    uint32_t e1, e2, offset, ss = 0, ss_e1 = 0, ss_e2 = 0;
+    uint32_t old_eip, eflags;
     int vm86 = env->eflags & VM_MASK;
+    PushPop pp;
     bool set_rf;
 
     has_error_code = 0;
@@ -662,6 +675,10 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
         raise_exception_err(env, EXCP0D_GPF, intno * 8 + 2);
     }
 
+    pp.env = env;
+    pp.ra = 0;
+    pp.mmu_index = cpu_mmu_index_kernel(env);
+
     if (type == 5) {
         /* task gate */
         /* must do that check here to return the correct error code */
@@ -670,22 +687,20 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
         }
         shift = switch_tss(env, intno * 8, e1, e2, SWITCH_TSS_CALL, old_eip);
         if (has_error_code) {
-            uint32_t mask;
-
             /* push the error code */
             if (env->segs[R_SS].flags & DESC_B_MASK) {
-                mask = 0xffffffff;
+                pp.sp_mask = 0xffffffff;
             } else {
-                mask = 0xffff;
+                pp.sp_mask = 0xffff;
             }
-            esp = (env->regs[R_ESP] - (2 << shift)) & mask;
-            ssp = env->segs[R_SS].base + esp;
+            pp.sp = env->regs[R_ESP];
+            pp.ss_base = env->segs[R_SS].base;
             if (shift) {
-                cpu_stl_kernel(env, ssp, error_code);
+                pushl(&pp, error_code);
            } else {
-                cpu_stw_kernel(env, ssp, error_code);
+                pushw(&pp, error_code);
            }
-            SET_ESP(esp, mask);
+            SET_ESP(pp.sp, pp.sp_mask);
         }
         return;
     }
@@ -719,7 +734,9 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
     }
     if (dpl < cpl) {
         /* to inner privilege */
+        uint32_t esp;
         get_ss_esp_from_tss(env, &ss, &esp, dpl, 0);
+        pp.sp = esp;
         if ((ss & 0xfffc) == 0) {
             raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc);
         }
@@ -742,17 +759,17 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
             raise_exception_err(env, EXCP0A_TSS, ss & 0xfffc);
         }
         new_stack = 1;
-        sp_mask = get_sp_mask(ss_e2);
-        ssp = get_seg_base(ss_e1, ss_e2);
+        pp.sp_mask = get_sp_mask(ss_e2);
+        pp.ss_base = get_seg_base(ss_e1, ss_e2);
     } else {
         /* to same privilege */
         if (vm86) {
             raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc);
         }
         new_stack = 0;
-        sp_mask = get_sp_mask(env->segs[R_SS].flags);
-        ssp = env->segs[R_SS].base;
-        esp = env->regs[R_ESP];
+        pp.sp_mask = get_sp_mask(env->segs[R_SS].flags);
+        pp.ss_base = env->segs[R_SS].base;
+        pp.sp = env->regs[R_ESP];
     }
 
     shift = type >> 3;
@@ -777,36 +794,36 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
     if (shift == 1) {
         if (new_stack) {
             if (vm86) {
-                PUSHL(ssp, esp, sp_mask, env->segs[R_GS].selector);
-                PUSHL(ssp, esp, sp_mask, env->segs[R_FS].selector);
-                PUSHL(ssp, esp, sp_mask, env->segs[R_DS].selector);
-                PUSHL(ssp, esp, sp_mask, env->segs[R_ES].selector);
+                pushl(&pp, env->segs[R_GS].selector);
+                pushl(&pp, env->segs[R_FS].selector);
+                pushl(&pp, env->segs[R_DS].selector);
+                pushl(&pp, env->segs[R_ES].selector);
             }
-            PUSHL(ssp, esp, sp_mask, env->segs[R_SS].selector);
-            PUSHL(ssp, esp, sp_mask, env->regs[R_ESP]);
+            pushl(&pp, env->segs[R_SS].selector);
+            pushl(&pp, env->regs[R_ESP]);
         }
-        PUSHL(ssp, esp, sp_mask, eflags);
-        PUSHL(ssp, esp, sp_mask, env->segs[R_CS].selector);
-        PUSHL(ssp, esp, sp_mask, old_eip);
+        pushl(&pp, eflags);
+        pushl(&pp, env->segs[R_CS].selector);
+        pushl(&pp, old_eip);
         if (has_error_code) {
-            PUSHL(ssp, esp, sp_mask, error_code);
+            pushl(&pp, error_code);
        }
     } else {
         if (new_stack) {
             if (vm86) {
-                PUSHW(ssp, esp, sp_mask, env->segs[R_GS].selector);
-                PUSHW(ssp, esp, sp_mask, env->segs[R_FS].selector);
-                PUSHW(ssp, esp, sp_mask, env->segs[R_DS].selector);
-                PUSHW(ssp, esp, sp_mask, env->segs[R_ES].selector);
+                pushw(&pp, env->segs[R_GS].selector);
+                pushw(&pp, env->segs[R_FS].selector);
+                pushw(&pp, env->segs[R_DS].selector);
+                pushw(&pp, env->segs[R_ES].selector);
             }
-            PUSHW(ssp, esp, sp_mask, env->segs[R_SS].selector);
-            PUSHW(ssp, esp, sp_mask, env->regs[R_ESP]);
+            pushw(&pp, env->segs[R_SS].selector);
+            pushw(&pp, env->regs[R_ESP]);
         }
-        PUSHW(ssp, esp, sp_mask, eflags);
-        PUSHW(ssp, esp, sp_mask, env->segs[R_CS].selector);
-        PUSHW(ssp, esp, sp_mask, old_eip);
+        pushw(&pp, eflags);
+        pushw(&pp, env->segs[R_CS].selector);
+        pushw(&pp, old_eip);
         if (has_error_code) {
-            PUSHW(ssp, esp, sp_mask, error_code);
+            pushw(&pp, error_code);
         }
     }
 
@@ -824,10 +841,10 @@ static void do_interrupt_protected(CPUX86State *env, int intno, int is_int,
             cpu_x86_load_seg_cache(env, R_GS, 0, 0, 0, 0);
         }
         ss = (ss & ~3) | dpl;
-        cpu_x86_load_seg_cache(env, R_SS, ss,
-                               ssp, get_seg_limit(ss_e1, ss_e2), ss_e2);
+        cpu_x86_load_seg_cache(env, R_SS, ss, pp.ss_base,
+                               get_seg_limit(ss_e1, ss_e2), ss_e2);
     }
-    SET_ESP(esp, sp_mask);
+    SET_ESP(pp.sp, pp.sp_mask);
 
     selector = (selector & ~3) | dpl;
     cpu_x86_load_seg_cache(env, R_CS, selector,
@@ -839,20 +856,18 @@
 
 #ifdef TARGET_X86_64
 
-#define PUSHQ_RA(sp, val, ra) \
-    { \
-        sp -= 8; \
-        cpu_stq_kernel_ra(env, sp, (val), ra); \
-    }
+static void pushq(PushPop *pp, uint64_t val)
+{
+    pp->sp -= 8;
+    cpu_stq_mmuidx_ra(pp->env, pp->sp, val, pp->mmu_index, pp->ra);
+}
 
-#define POPQ_RA(sp, val, ra) \
-    { \
-        val = cpu_ldq_kernel_ra(env, sp, ra); \
-        sp += 8; \
-    }
-
-#define PUSHQ(sp, val) PUSHQ_RA(sp, val, 0)
-#define POPQ(sp, val) POPQ_RA(sp, val, 0)
+static uint64_t popq(PushPop *pp)
+{
+    uint64_t ret = cpu_ldq_mmuidx_ra(pp->env, pp->sp, pp->mmu_index, pp->ra);
+    pp->sp += 8;
+    return ret;
+}
 
 static inline target_ulong get_rsp_from_tss(CPUX86State *env, int level)
 {
@@ -895,8 +910,15 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int,
     int type, dpl, selector, cpl, ist;
     int has_error_code, new_stack;
     uint32_t e1, e2, e3, ss, eflags;
-    target_ulong old_eip, esp, offset;
+    target_ulong old_eip, offset;
     bool set_rf;
+    PushPop pp;
+
+    pp.env = env;
+    pp.ra = 0;
+    pp.mmu_index = cpu_mmu_index_kernel(env);
+    pp.sp_mask = -1;
+    pp.ss_base = 0;
 
     has_error_code = 0;
     if (!is_int && !is_hw) {
@@ -967,7 +989,7 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int,
     if (dpl < cpl || ist != 0) {
         /* to inner privilege */
         new_stack = 1;
-        esp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl);
+        pp.sp = get_rsp_from_tss(env, ist != 0 ? ist + 3 : dpl);
         ss = 0;
     } else {
         /* to same privilege */
@@ -975,9 +997,9 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int,
             raise_exception_err(env, EXCP0D_GPF, selector & 0xfffc);
         }
         new_stack = 0;
-        esp = env->regs[R_ESP];
+        pp.sp = env->regs[R_ESP];
     }
-    esp &= ~0xfLL; /* align stack */
+    pp.sp &= ~0xfLL; /* align stack */
 
     /* See do_interrupt_protected.  */
     eflags = cpu_compute_eflags(env);
@@ -985,13 +1007,13 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int,
         eflags |= RF_MASK;
     }
 
-    PUSHQ(esp, env->segs[R_SS].selector);
-    PUSHQ(esp, env->regs[R_ESP]);
-    PUSHQ(esp, eflags);
-    PUSHQ(esp, env->segs[R_CS].selector);
-    PUSHQ(esp, old_eip);
+    pushq(&pp, env->segs[R_SS].selector);
+    pushq(&pp, env->regs[R_ESP]);
+    pushq(&pp, eflags);
+    pushq(&pp, env->segs[R_CS].selector);
+    pushq(&pp, old_eip);
     if (has_error_code) {
-        PUSHQ(esp, error_code);
+        pushq(&pp, error_code);
     }
 
     /* interrupt gate clear IF mask */
@@ -1004,7 +1026,7 @@ static void do_interrupt64(CPUX86State *env, int intno, int is_int,
         ss = 0 | dpl;
         cpu_x86_load_seg_cache(env, R_SS, ss, 0, 0, dpl << DESC_DPL_SHIFT);
     }
-    env->regs[R_ESP] = esp;
+    env->regs[R_ESP] = pp.sp;
 
     selector = (selector & ~3) | dpl;
     cpu_x86_load_seg_cache(env, R_CS, selector,
@@ -1076,10 +1098,11 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int,
                               int error_code, unsigned int next_eip)
 {
     SegmentCache *dt;
-    target_ulong ptr, ssp;
+    target_ulong ptr;
     int selector;
-    uint32_t offset, esp;
+    uint32_t offset;
     uint32_t old_cs, old_eip;
+    PushPop pp;
 
     /* real mode (simpler!) */
     dt = &env->idt;
@@ -1089,8 +1112,14 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int,
     ptr = dt->base + intno * 4;
     offset = cpu_lduw_kernel(env, ptr);
     selector = cpu_lduw_kernel(env, ptr + 2);
-    esp = env->regs[R_ESP];
-    ssp = env->segs[R_SS].base;
+
+    pp.env = env;
+    pp.ra = 0;
+    pp.sp = env->regs[R_ESP];
+    pp.sp_mask = 0xffff;
+    pp.ss_base = env->segs[R_SS].base;
+    pp.mmu_index = cpu_mmu_index_kernel(env);
+
     if (is_int) {
         old_eip = next_eip;
     } else {
@@ -1098,12 +1127,12 @@ static void do_interrupt_real(CPUX86State *env, int intno, int is_int,
     }
     old_cs = env->segs[R_CS].selector;
     /* XXX: use SS segment size? */
-    PUSHW(ssp, esp, 0xffff, cpu_compute_eflags(env));
-    PUSHW(ssp, esp, 0xffff, old_cs);
-    PUSHW(ssp, esp, 0xffff, old_eip);
+    pushw(&pp, cpu_compute_eflags(env));
+    pushw(&pp, old_cs);
+    pushw(&pp, old_eip);
 
     /* update processor state */
-    env->regs[R_ESP] = (env->regs[R_ESP] & ~0xffff) | (esp & 0xffff);
+    SET_ESP(pp.sp, pp.sp_mask);
     env->eip = offset;
     env->segs[R_CS].selector = selector;
     env->segs[R_CS].base = (selector << 4);
@@ -1546,21 +1575,24 @@ void helper_ljmp_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 void helper_lcall_real(CPUX86State *env, uint32_t new_cs, uint32_t new_eip,
                        int shift, uint32_t next_eip)
 {
-    uint32_t esp, esp_mask;
-    target_ulong ssp;
+    PushPop pp;
+
+    pp.env = env;
+    pp.ra = GETPC();
+    pp.sp = env->regs[R_ESP];
+    pp.sp_mask = get_sp_mask(env->segs[R_SS].flags);
+    pp.ss_base = env->segs[R_SS].base;
+    pp.mmu_index = cpu_mmu_index_kernel(env);
 
-    esp = env->regs[R_ESP];
-    esp_mask = get_sp_mask(env->segs[R_SS].flags);
-    ssp = env->segs[R_SS].base;
     if (shift) {
-        PUSHL_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC());
-        PUSHL_RA(ssp, esp, esp_mask, next_eip, GETPC());
+        pushl(&pp, env->segs[R_CS].selector);
+        pushl(&pp, next_eip);
     } else {
-        PUSHW_RA(ssp, esp, esp_mask, env->segs[R_CS].selector, GETPC());
-        PUSHW_RA(ssp, esp, esp_mask, next_eip, GETPC());
+        pushw(&pp, env->segs[R_CS].selector);
+        pushw(&pp, next_eip);
     }
-    SET_ESP(esp, esp_mask);
+    SET_ESP(pp.sp, pp.sp_mask);
     env->eip = new_eip;
     env->segs[R_CS].selector = new_cs;
     env->segs[R_CS].base = (new_cs << 4);
@@ -1572,9 +1604,14 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 {
     int new_stack, i;
     uint32_t e1, e2, cpl, dpl, rpl, selector, param_count;
-    uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl, sp_mask;
-    uint32_t val, limit, old_sp_mask;
-    target_ulong ssp, old_ssp, offset, sp;
+    uint32_t ss = 0, ss_e1 = 0, ss_e2 = 0, type, ss_dpl;
+    uint32_t limit;
+    target_ulong offset;
+    PushPop pp;
+
+    pp.env = env;
+    pp.ra = GETPC();
+    pp.mmu_index = cpu_mmu_index_kernel(env);
 
     LOG_PCALL("lcall %04x:" TARGET_FMT_lx " s=%d\n", new_cs, new_eip, shift);
     LOG_PCALL_STATE(env_cpu(env));
@@ -1613,14 +1650,14 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 #ifdef TARGET_X86_64
         /* XXX: check 16/32 bit cases in long mode */
         if (shift == 2) {
-            target_ulong rsp;
-
             /* 64 bit case */
-            rsp = env->regs[R_ESP];
-            PUSHQ_RA(rsp, env->segs[R_CS].selector, GETPC());
-            PUSHQ_RA(rsp, next_eip, GETPC());
+            pp.sp = env->regs[R_ESP];
+            pp.sp_mask = -1;
+            pp.ss_base = 0;
+            pushq(&pp, env->segs[R_CS].selector);
+            pushq(&pp, next_eip);
             /* from this point, not restartable */
-            env->regs[R_ESP] = rsp;
+            env->regs[R_ESP] = pp.sp;
             cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl,
                                    get_seg_base(e1, e2),
                                    get_seg_limit(e1, e2), e2);
@@ -1628,15 +1665,15 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
         } else
 #endif
         {
-            sp = env->regs[R_ESP];
-            sp_mask = get_sp_mask(env->segs[R_SS].flags);
-            ssp = env->segs[R_SS].base;
+            pp.sp = env->regs[R_ESP];
+            pp.sp_mask = get_sp_mask(env->segs[R_SS].flags);
+            pp.ss_base = env->segs[R_SS].base;
             if (shift) {
-                PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC());
-                PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC());
+                pushl(&pp, env->segs[R_CS].selector);
+                pushl(&pp, next_eip);
             } else {
-                PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC());
-                PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC());
+                pushw(&pp, env->segs[R_CS].selector);
+                pushw(&pp, next_eip);
             }
 
             limit = get_seg_limit(e1, e2);
@@ -1644,7 +1681,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
                 raise_exception_err_ra(env, EXCP0D_GPF, new_cs & 0xfffc, GETPC());
             }
             /* from this point, not restartable */
-            SET_ESP(sp, sp_mask);
+            SET_ESP(pp.sp, pp.sp_mask);
             cpu_x86_load_seg_cache(env, R_CS, (new_cs & 0xfffc) | cpl,
                                    get_seg_base(e1, e2), limit, e2);
             env->eip = new_eip;
@@ -1739,13 +1776,13 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
             /* to inner privilege */
 #ifdef TARGET_X86_64
             if (shift == 2) {
-                sp = get_rsp_from_tss(env, dpl);
+                pp.sp = get_rsp_from_tss(env, dpl);
                 ss = dpl;  /* SS = NULL selector with RPL = new CPL */
                 new_stack = 1;
-                sp_mask = 0;
-                ssp = 0;  /* SS base is always zero in IA-32e mode */
+                pp.sp_mask = -1;
+                pp.ss_base = 0;  /* SS base is always zero in IA-32e mode */
                 LOG_PCALL("new ss:rsp=%04x:%016llx env->regs[R_ESP]="
-                          TARGET_FMT_lx "\n", ss, sp, env->regs[R_ESP]);
+                          TARGET_FMT_lx "\n", ss, pp.sp, env->regs[R_ESP]);
             } else
 #endif
             {
@@ -1754,7 +1791,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
                 LOG_PCALL("new ss:esp=%04x:%08x param_count=%d env->regs[R_ESP]="
                           TARGET_FMT_lx "\n", ss, sp32, param_count,
                           env->regs[R_ESP]);
-                sp = sp32;
+                pp.sp = sp32;
                 if ((ss & 0xfffc) == 0) {
                     raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC());
                 }
@@ -1777,63 +1814,63 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
                     raise_exception_err_ra(env, EXCP0A_TSS, ss & 0xfffc, GETPC());
                 }
 
-                sp_mask = get_sp_mask(ss_e2);
-                ssp = get_seg_base(ss_e1, ss_e2);
+                pp.sp_mask = get_sp_mask(ss_e2);
+                pp.ss_base = get_seg_base(ss_e1, ss_e2);
             }
 
            /* push_size = ((param_count * 2) + 8) << shift; */
-            old_sp_mask = get_sp_mask(env->segs[R_SS].flags);
-            old_ssp = env->segs[R_SS].base;
 
 #ifdef TARGET_X86_64
             if (shift == 2) {
                 /* XXX: verify if new stack address is canonical */
-                PUSHQ_RA(sp, env->segs[R_SS].selector, GETPC());
-                PUSHQ_RA(sp, env->regs[R_ESP], GETPC());
+                pushq(&pp, env->segs[R_SS].selector);
+                pushq(&pp, env->regs[R_ESP]);
                 /* parameters aren't supported for 64-bit call gates */
             } else
 #endif
-            if (shift == 1) {
-                PUSHL_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC());
-                PUSHL_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC());
-                for (i = param_count - 1; i >= 0; i--) {
-                    val = cpu_ldl_kernel_ra(env, old_ssp +
-                                            ((env->regs[R_ESP] + i * 4) &
-                                             old_sp_mask), GETPC());
-                    PUSHL_RA(ssp, sp, sp_mask, val, GETPC());
-                }
-            } else {
-                PUSHW_RA(ssp, sp, sp_mask, env->segs[R_SS].selector, GETPC());
-                PUSHW_RA(ssp, sp, sp_mask, env->regs[R_ESP], GETPC());
-                for (i = param_count - 1; i >= 0; i--) {
-                    val = cpu_lduw_kernel_ra(env, old_ssp +
-                                             ((env->regs[R_ESP] + i * 2) &
-                                              old_sp_mask), GETPC());
-                    PUSHW_RA(ssp, sp, sp_mask, val, GETPC());
+            {
+                PushPop old_pp;
+
+                old_pp = pp;
+                old_pp.sp_mask = get_sp_mask(env->segs[R_SS].flags);
+                old_pp.ss_base = env->segs[R_SS].base;
+
+                if (shift == 1) {
+                    pushl(&pp, env->segs[R_SS].selector);
+                    pushl(&pp, env->regs[R_ESP]);
+                    for (i = param_count - 1; i >= 0; i--) {
+                        pushl(&pp, popl(&old_pp));
+                    }
+                } else {
+                    pushw(&pp, env->segs[R_SS].selector);
+                    pushw(&pp, env->regs[R_ESP]);
+                    for (i = param_count - 1; i >= 0; i--) {
+                        pushw(&pp, popw(&old_pp));
+                    }
+                }
             }
             }
             new_stack = 1;
         } else {
            /* to same privilege */
-            sp = env->regs[R_ESP];
-            sp_mask = get_sp_mask(env->segs[R_SS].flags);
-            ssp = env->segs[R_SS].base;
+            pp.sp = env->regs[R_ESP];
+            pp.sp_mask = get_sp_mask(env->segs[R_SS].flags);
+            pp.ss_base = env->segs[R_SS].base;
            /* push_size = (4 << shift); */
            new_stack = 0;
         }
 
#ifdef TARGET_X86_64
         if (shift == 2) {
-            PUSHQ_RA(sp, env->segs[R_CS].selector, GETPC());
-            PUSHQ_RA(sp, next_eip, GETPC());
+            pushq(&pp, env->segs[R_CS].selector);
+            pushq(&pp, next_eip);
         } else
#endif
         if (shift == 1) {
-            PUSHL_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC());
-            PUSHL_RA(ssp, sp, sp_mask, next_eip, GETPC());
+            pushl(&pp, env->segs[R_CS].selector);
+            pushl(&pp, next_eip);
         } else {
-            PUSHW_RA(ssp, sp, sp_mask, env->segs[R_CS].selector, GETPC());
-            PUSHW_RA(ssp, sp, sp_mask, next_eip, GETPC());
+            pushw(&pp, env->segs[R_CS].selector);
+            pushw(&pp, next_eip);
         }
 
         /* from this point, not restartable */
@@ -1847,7 +1884,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
         {
             ss = (ss & ~3) | dpl;
             cpu_x86_load_seg_cache(env, R_SS, ss,
-                                   ssp,
+                                   pp.ss_base,
                                    get_seg_limit(ss_e1, ss_e2),
                                    ss_e2);
         }
@@ -1858,7 +1895,7 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
                                get_seg_base(e1, e2),
                                get_seg_limit(e1, e2),
                                e2);
-        SET_ESP(sp, sp_mask);
+        SET_ESP(pp.sp, pp.sp_mask);
         env->eip = offset;
     }
 }
@@ -1866,26 +1903,29 @@ void helper_lcall_protected(CPUX86State *env, int new_cs, target_ulong new_eip,
 /* real and vm86 mode iret */
 void helper_iret_real(CPUX86State *env, int shift)
 {
-    uint32_t sp, new_cs, new_eip, new_eflags, sp_mask;
-    target_ulong ssp;
+    uint32_t new_cs, new_eip, new_eflags;
     int eflags_mask;
+    PushPop pp;
+
+    pp.env = env;
+    pp.ra = GETPC();
+    pp.mmu_index = cpu_mmu_index_kernel(env);
+    pp.sp_mask = 0xffff; /* XXXX: use SS segment size? */
+    pp.sp = env->regs[R_ESP];
+    pp.ss_base = env->segs[R_SS].base;
 
-    sp_mask = 0xffff; /* XXXX: use SS segment size? */
-    sp = env->regs[R_ESP];
-    ssp = env->segs[R_SS].base;
     if (shift == 1) {
         /* 32 bits */
-        POPL_RA(ssp, sp, sp_mask, new_eip, GETPC());
-        POPL_RA(ssp, sp, sp_mask, new_cs, GETPC());
-        new_cs &= 0xffff;
-        POPL_RA(ssp, sp, sp_mask, new_eflags, GETPC());
+        new_eip = popl(&pp);
+        new_cs = popl(&pp) & 0xffff;
+        new_eflags = popl(&pp);
     } else {
         /* 16 bits */
-        POPW_RA(ssp, sp, sp_mask, new_eip, GETPC());
-        POPW_RA(ssp, sp, sp_mask, new_cs, GETPC());
-        POPW_RA(ssp, sp, sp_mask, new_eflags, GETPC());
+        new_eip = popw(&pp);
+        new_cs = popw(&pp);
+        new_eflags = popw(&pp);
     }
-    env->regs[R_ESP] = (env->regs[R_ESP] & ~sp_mask) | (sp & sp_mask);
+    SET_ESP(pp.sp, pp.sp_mask);
     env->segs[R_CS].selector = new_cs;
     env->segs[R_CS].base = (new_cs << 4);
     env->eip = new_eip;
@@ -1938,47 +1978,50 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
     uint32_t new_es, new_ds, new_fs, new_gs;
     uint32_t e1, e2, ss_e1, ss_e2;
     int cpl, dpl, rpl, eflags_mask, iopl;
-    target_ulong ssp, sp, new_eip, new_esp, sp_mask;
+    target_ulong new_eip, new_esp;
+    PushPop pp;
+
+    pp.env = env;
+    pp.ra = retaddr;
+    pp.mmu_index = cpu_mmu_index_kernel(env);
 
 #ifdef TARGET_X86_64
     if (shift == 2) {
-        sp_mask = -1;
+        pp.sp_mask = -1;
     } else
 #endif
     {
-        sp_mask = get_sp_mask(env->segs[R_SS].flags);
+        pp.sp_mask = get_sp_mask(env->segs[R_SS].flags);
     }
-    sp = env->regs[R_ESP];
-    ssp = env->segs[R_SS].base;
+    pp.sp = env->regs[R_ESP];
+    pp.ss_base = env->segs[R_SS].base;
     new_eflags = 0; /* avoid warning */
 #ifdef TARGET_X86_64
     if (shift == 2) {
-        POPQ_RA(sp, new_eip, retaddr);
-        POPQ_RA(sp, new_cs, retaddr);
-        new_cs &= 0xffff;
+        new_eip = popq(&pp);
+        new_cs = popq(&pp) & 0xffff;
         if (is_iret) {
-            POPQ_RA(sp, new_eflags, retaddr);
+            new_eflags = popq(&pp);
        }
     } else
#endif
     {
         if (shift == 1) {
             /* 32 bits */
-            POPL_RA(ssp, sp, sp_mask, new_eip, retaddr);
-            POPL_RA(ssp, sp, sp_mask, new_cs, retaddr);
-            new_cs &= 0xffff;
+            new_eip = popl(&pp);
+            new_cs = popl(&pp) & 0xffff;
             if (is_iret) {
-                POPL_RA(ssp, sp, sp_mask, new_eflags, retaddr);
+                new_eflags = popl(&pp);
                 if (new_eflags & VM_MASK) {
                     goto return_to_vm86;
                 }
             }
         } else {
             /* 16 bits */
-            POPW_RA(ssp, sp, sp_mask, new_eip, retaddr);
-            POPW_RA(ssp, sp, sp_mask, new_cs, retaddr);
+            new_eip = popw(&pp);
+            new_cs = popw(&pp);
             if (is_iret) {
-                POPW_RA(ssp, sp, sp_mask, new_eflags, retaddr);
+                new_eflags = popw(&pp);
             }
         }
     }
@@ -2014,7 +2057,7 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
         raise_exception_err_ra(env, EXCP0B_NOSEG, new_cs & 0xfffc, retaddr);
     }
 
-    sp += addend;
+    pp.sp += addend;
     if (rpl == cpl && (!(env->hflags & HF_CS64_MASK) ||
                        ((env->hflags & HF_CS64_MASK) && !is_iret))) {
         /* return to same privilege level */
@@ -2026,21 +2069,19 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
         /* return to different privilege level */
 #ifdef TARGET_X86_64
         if (shift == 2) {
-            POPQ_RA(sp, new_esp, retaddr);
-            POPQ_RA(sp, new_ss, retaddr);
-            new_ss &= 0xffff;
+            new_esp = popq(&pp);
+            new_ss = popq(&pp) & 0xffff;
         } else
 #endif
         {
             if (shift == 1) {
                 /* 32 bits */
-                POPL_RA(ssp, sp, sp_mask, new_esp, retaddr);
-                POPL_RA(ssp, sp, sp_mask, new_ss, retaddr);
-                new_ss &= 0xffff;
+                new_esp = popl(&pp);
+                new_ss = popl(&pp) & 0xffff;
             } else {
                 /* 16 bits */
-                POPW_RA(ssp, sp, sp_mask, new_esp, retaddr);
-                POPW_RA(ssp, sp, sp_mask, new_ss, retaddr);
+                new_esp = popw(&pp);
+                new_ss = popw(&pp);
             }
         }
         LOG_PCALL("new ss:esp=%04x:" TARGET_FMT_lx "\n",
@@ -2090,14 +2131,14 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
                                get_seg_base(e1, e2),
                                get_seg_limit(e1, e2),
                                e2);
-        sp = new_esp;
+        pp.sp = new_esp;
 #ifdef TARGET_X86_64
         if (env->hflags & HF_CS64_MASK) {
-            sp_mask = -1;
+            pp.sp_mask = -1;
         } else
 #endif
         {
-            sp_mask = get_sp_mask(ss_e2);
+            pp.sp_mask = get_sp_mask(ss_e2);
         }
 
         /* validate data segments */
@@ -2106,9 +2147,9 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
         validate_seg(env, R_FS, rpl);
         validate_seg(env, R_GS, rpl);
 
-        sp += addend;
+        pp.sp += addend;
     }
-    SET_ESP(sp, sp_mask);
+    SET_ESP(pp.sp, pp.sp_mask);
     env->eip = new_eip;
     if (is_iret) {
         /* NOTE: 'cpl' is the _old_ CPL */
@@ -2128,12 +2169,12 @@ static inline void helper_ret_protected(CPUX86State *env, int shift,
     return;
 
 return_to_vm86:
-    POPL_RA(ssp, sp, sp_mask, new_esp, retaddr);
-    POPL_RA(ssp, sp, sp_mask, new_ss, retaddr);
-    POPL_RA(ssp, sp, sp_mask, new_es, retaddr);
-    POPL_RA(ssp, sp, sp_mask, new_ds, retaddr);
-    POPL_RA(ssp, sp, sp_mask, new_fs, retaddr);
-    POPL_RA(ssp, sp, sp_mask, new_gs, retaddr);
+    new_esp = popl(&pp);
+    new_ss = popl(&pp);
+    new_es = popl(&pp);
+    new_ds = popl(&pp);
+    new_fs = popl(&pp);
+    new_gs = popl(&pp);
 
     /* modify processor state */
     cpu_load_eflags(env, new_eflags, TF_MASK | AC_MASK | ID_MASK |
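A minimal standalone model of the new PushPop pattern, assuming nothing from
QEMU (guest memory becomes a local array, and the mmu_index and unwind return
address are left out): the stack state travels in one struct, and push/pop are
ordinary functions that apply ss_base + (sp & sp_mask) on every access, just
as the helpers in the patch do.

/* Toy model of the PushPop structure: not QEMU code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    uint8_t *mem;        /* stand-in for guest memory */
    uint64_t ss_base;    /* stack segment base */
    uint64_t sp;         /* current stack pointer */
    uint64_t sp_mask;    /* 0xffff for 16-bit, 0xffffffff for 32-bit stacks */
} ToyPushPop;

static void toy_pushl(ToyPushPop *pp, uint32_t val)
{
    pp->sp -= 4;
    /* the mask is applied on every access: ss_base + (sp & sp_mask) */
    memcpy(pp->mem + pp->ss_base + (pp->sp & pp->sp_mask), &val, 4);
}

static uint32_t toy_popl(ToyPushPop *pp)
{
    uint32_t val;
    memcpy(&val, pp->mem + pp->ss_base + (pp->sp & pp->sp_mask), 4);
    pp->sp += 4;
    return val;
}

int main(void)
{
    static uint8_t ram[0x10000 + 16];
    ToyPushPop pp = { .mem = ram, .ss_base = 0, .sp = 2, .sp_mask = 0xffff };

    /* With sp = 2 the push wraps to offset 0xfffe of the 16-bit segment. */
    toy_pushl(&pp, 0xdeadbeef);
    printf("popped 0x%x, sp=0x%llx\n", (unsigned)toy_popl(&pp),
           (unsigned long long)pp.sp);
    return 0;
}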