From patchwork Mon Feb 17 22:06:51 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866029
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello
Subject: [PATCH v3 02/18] cpufreq/amd-pstate: Show a warning when a CPU fails to setup
Date: Mon, 17 Feb 2025 16:06:51 -0600
Message-ID: <20250217220707.1468365-3-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

I came across a system where MSR_AMD_CPPC_CAP1 isn't populated for some
CPUs. This is unexpected behavior that is most likely a BIOS bug. In the
event it happens, I'd like users to report bugs so the issue can be
properly root caused and fixed.

Reviewed-by: Gautham R. Shenoy
Signed-off-by: Mario Limonciello
--- drivers/cpufreq/amd-pstate.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 12fb63169a24c..87c605348a3dc 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -1034,6 +1034,7 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy) free_cpudata2: freq_qos_remove_request(&cpudata->req[0]); free_cpudata1: + pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret); kfree(cpudata); return ret; } @@ -1527,6 +1528,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) return 0; free_cpudata1: + pr_warn("Failed to initialize CPU %d: %d\n", policy->cpu, ret); kfree(cpudata); return ret; }

From patchwork Mon Feb 17 22:06:53 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866028
From: Mario Limonciello
To: "Gautham R .
Shenoy" , Perry Yuan Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello Subject: [PATCH v3 04/18] cpufreq/amd-pstate: Move perf values into a union Date: Mon, 17 Feb 2025 16:06:53 -0600 Message-ID: <20250217220707.1468365-5-superm1@kernel.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org> References: <20250217220707.1468365-1-superm1@kernel.org> Precedence: bulk X-Mailing-List: linux-pm@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 From: Mario Limonciello By storing perf values in a union all the writes and reads can be done atomically, removing the need for some concurrency protections. While making this change, also drop the cached frequency values, using inline helpers to calculate them on demand from perf value. Reviewed-by: Gautham R. Shenoy Signed-off-by: Mario Limonciello --- v3: * Pick up tag v2: * cache perf variable in unit tests * Drop unnecessary check from amd_pstate_update_min_max_limit() * Consistency with READ_ONCE() * Drop unneeded policy checks * add kdoc --- drivers/cpufreq/amd-pstate-ut.c | 18 +-- drivers/cpufreq/amd-pstate.c | 195 ++++++++++++++++++-------------- drivers/cpufreq/amd-pstate.h | 49 +++++--- 3 files changed, 151 insertions(+), 111 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 445278cf40b61..ba3e06f349c6d 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -129,6 +129,7 @@ static void amd_pstate_ut_check_perf(u32 index) struct cppc_perf_caps cppc_perf; struct cpufreq_policy *policy = NULL; struct amd_cpudata *cpudata = NULL; + union perf_cached cur_perf; for_each_possible_cpu(cpu) { policy = cpufreq_cpu_get(cpu); @@ -162,19 +163,20 @@ static void amd_pstate_ut_check_perf(u32 index) lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); } - if (highest_perf != READ_ONCE(cpudata->highest_perf) && !cpudata->hw_prefcore) { + cur_perf = READ_ONCE(cpudata->perf); + if (highest_perf != cur_perf.highest_perf && !cpudata->hw_prefcore) { pr_err("%s cpu%d highest=%d %d highest perf doesn't match\n", - __func__, cpu, highest_perf, cpudata->highest_perf); + __func__, cpu, highest_perf, cpudata->perf.highest_perf); goto skip_test; } - if ((nominal_perf != READ_ONCE(cpudata->nominal_perf)) || - (lowest_nonlinear_perf != READ_ONCE(cpudata->lowest_nonlinear_perf)) || - (lowest_perf != READ_ONCE(cpudata->lowest_perf))) { + if (nominal_perf != cur_perf.nominal_perf || + (lowest_nonlinear_perf != cur_perf.lowest_nonlinear_perf) || + (lowest_perf != cur_perf.lowest_perf)) { amd_pstate_ut_cases[index].result = AMD_PSTATE_UT_RESULT_FAIL; pr_err("%s cpu%d nominal=%d %d lowest_nonlinear=%d %d lowest=%d %d, they should be equal!\n", - __func__, cpu, nominal_perf, cpudata->nominal_perf, - lowest_nonlinear_perf, cpudata->lowest_nonlinear_perf, - lowest_perf, cpudata->lowest_perf); + __func__, cpu, nominal_perf, cpudata->perf.nominal_perf, + lowest_nonlinear_perf, cpudata->perf.lowest_nonlinear_perf, + lowest_perf, cpudata->perf.lowest_perf); goto skip_test; } diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index a7c41f915b46e..5a23457a354d1 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -142,18 +142,17 @@ static struct quirk_entry quirk_amd_7k62 = { .lowest_freq = 550, }; -static inline u8 freq_to_perf(struct amd_cpudata *cpudata, unsigned int 
freq_val) +static inline u8 freq_to_perf(union perf_cached perf, u32 nominal_freq, unsigned int freq_val) { - u8 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * cpudata->nominal_perf, - cpudata->nominal_freq); + u8 perf_val = DIV_ROUND_UP_ULL((u64)freq_val * perf.nominal_perf, nominal_freq); - return clamp_t(u8, perf_val, cpudata->lowest_perf, cpudata->highest_perf); + return clamp_t(u8, perf_val, perf.lowest_perf, perf.highest_perf); } -static inline u32 perf_to_freq(struct amd_cpudata *cpudata, u8 perf_val) +static inline u32 perf_to_freq(union perf_cached perf, u32 nominal_freq, u8 perf_val) { - return DIV_ROUND_UP_ULL((u64)cpudata->nominal_freq * perf_val, - cpudata->nominal_perf); + return DIV_ROUND_UP_ULL((u64)nominal_freq * perf_val, + perf.nominal_perf); } static int __init dmi_matched_7k62_bios_bug(const struct dmi_system_id *dmi) @@ -347,7 +346,9 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy, } if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, + union perf_cached perf = READ_ONCE(cpudata->perf); + + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp, FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached), @@ -425,6 +426,7 @@ static inline int amd_pstate_cppc_enable(bool enable) static int msr_init_perf(struct amd_cpudata *cpudata) { + union perf_cached perf = READ_ONCE(cpudata->perf); u64 cap1, numerator; int ret = rdmsrl_safe_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, @@ -436,19 +438,21 @@ static int msr_init_perf(struct amd_cpudata *cpudata) if (ret) return ret; - WRITE_ONCE(cpudata->highest_perf, numerator); - WRITE_ONCE(cpudata->max_limit_perf, numerator); - WRITE_ONCE(cpudata->nominal_perf, AMD_CPPC_NOMINAL_PERF(cap1)); - WRITE_ONCE(cpudata->lowest_nonlinear_perf, AMD_CPPC_LOWNONLIN_PERF(cap1)); - WRITE_ONCE(cpudata->lowest_perf, AMD_CPPC_LOWEST_PERF(cap1)); + perf.highest_perf = numerator; + perf.max_limit_perf = numerator; + perf.min_limit_perf = AMD_CPPC_LOWEST_PERF(cap1); + perf.nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1); + perf.lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1); + perf.lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); + WRITE_ONCE(cpudata->perf, perf); WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1)); - WRITE_ONCE(cpudata->min_limit_perf, AMD_CPPC_LOWEST_PERF(cap1)); return 0; } static int shmem_init_perf(struct amd_cpudata *cpudata) { struct cppc_perf_caps cppc_perf; + union perf_cached perf = READ_ONCE(cpudata->perf); u64 numerator; int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf); @@ -459,14 +463,14 @@ static int shmem_init_perf(struct amd_cpudata *cpudata) if (ret) return ret; - WRITE_ONCE(cpudata->highest_perf, numerator); - WRITE_ONCE(cpudata->max_limit_perf, numerator); - WRITE_ONCE(cpudata->nominal_perf, cppc_perf.nominal_perf); - WRITE_ONCE(cpudata->lowest_nonlinear_perf, - cppc_perf.lowest_nonlinear_perf); - WRITE_ONCE(cpudata->lowest_perf, cppc_perf.lowest_perf); + perf.highest_perf = numerator; + perf.max_limit_perf = numerator; + perf.min_limit_perf = cppc_perf.lowest_perf; + perf.nominal_perf = cppc_perf.nominal_perf; + perf.lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf; + perf.lowest_perf = cppc_perf.lowest_perf; + WRITE_ONCE(cpudata->perf, perf); WRITE_ONCE(cpudata->prefcore_ranking, cppc_perf.highest_perf); - WRITE_ONCE(cpudata->min_limit_perf, cppc_perf.lowest_perf); if (cppc_state == AMD_PSTATE_ACTIVE) return 0; @@ -549,14 +553,14 @@ static void 
amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf, u8 des_perf, u8 max_perf, bool fast_switch, int gov_flags) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = cpufreq_cpu_get(cpudata->cpu); - u8 nominal_perf = READ_ONCE(cpudata->nominal_perf); + union perf_cached perf = READ_ONCE(cpudata->perf); if (!policy) return; des_perf = clamp_t(u8, des_perf, min_perf, max_perf); - policy->cur = perf_to_freq(cpudata, des_perf); + policy->cur = perf_to_freq(perf, cpudata->nominal_freq, des_perf); if ((cppc_state == AMD_PSTATE_GUIDED) && (gov_flags & CPUFREQ_GOV_DYNAMIC_SWITCHING)) { min_perf = des_perf; @@ -565,7 +569,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf, /* limit the max perf when core performance boost feature is disabled */ if (!cpudata->boost_supported) - max_perf = min_t(u8, nominal_perf, max_perf); + max_perf = min_t(u8, perf.nominal_perf, max_perf); if (trace_amd_pstate_perf_enabled() && amd_pstate_sample(cpudata)) { trace_amd_pstate_perf(min_perf, des_perf, max_perf, cpudata->freq, @@ -602,39 +606,41 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data) return 0; } -static int amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) +static void amd_pstate_update_min_max_limit(struct cpufreq_policy *policy) { - u8 max_limit_perf, min_limit_perf; struct amd_cpudata *cpudata = policy->driver_data; + union perf_cached perf = READ_ONCE(cpudata->perf); - max_limit_perf = freq_to_perf(cpudata, policy->max); - min_limit_perf = freq_to_perf(cpudata, policy->min); + perf.max_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->max); + perf.min_limit_perf = freq_to_perf(perf, cpudata->nominal_freq, policy->min); if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE) - min_limit_perf = min(cpudata->nominal_perf, max_limit_perf); + perf.min_limit_perf = min(perf.nominal_perf, perf.max_limit_perf); - WRITE_ONCE(cpudata->max_limit_perf, max_limit_perf); - WRITE_ONCE(cpudata->min_limit_perf, min_limit_perf); WRITE_ONCE(cpudata->max_limit_freq, policy->max); WRITE_ONCE(cpudata->min_limit_freq, policy->min); - - return 0; + WRITE_ONCE(cpudata->perf, perf); } static int amd_pstate_update_freq(struct cpufreq_policy *policy, unsigned int target_freq, bool fast_switch) { struct cpufreq_freqs freqs; - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; + union perf_cached perf; u8 des_perf; + cpudata = policy->driver_data; + if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq) amd_pstate_update_min_max_limit(policy); + perf = READ_ONCE(cpudata->perf); + freqs.old = policy->cur; freqs.new = target_freq; - des_perf = freq_to_perf(cpudata, target_freq); + des_perf = freq_to_perf(perf, cpudata->nominal_freq, target_freq); WARN_ON(fast_switch && !policy->fast_switch_enabled); /* @@ -645,8 +651,8 @@ static int amd_pstate_update_freq(struct cpufreq_policy *policy, if (!fast_switch) cpufreq_freq_transition_begin(policy, &freqs); - amd_pstate_update(cpudata, cpudata->min_limit_perf, des_perf, - cpudata->max_limit_perf, fast_switch, + amd_pstate_update(cpudata, perf.min_limit_perf, des_perf, + perf.max_limit_perf, fast_switch, policy->governor->flags); if (!fast_switch) @@ -675,9 +681,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu, unsigned long target_perf, unsigned long capacity) { - u8 max_perf, min_perf, des_perf, cap_perf, min_limit_perf; + u8 max_perf, min_perf, des_perf, cap_perf; struct cpufreq_policy *policy __free(put_cpufreq_policy) = 
cpufreq_cpu_get(cpu); struct amd_cpudata *cpudata; + union perf_cached perf; if (!policy) return; @@ -687,8 +694,8 @@ static void amd_pstate_adjust_perf(unsigned int cpu, if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq) amd_pstate_update_min_max_limit(policy); - cap_perf = READ_ONCE(cpudata->highest_perf); - min_limit_perf = READ_ONCE(cpudata->min_limit_perf); + perf = READ_ONCE(cpudata->perf); + cap_perf = perf.highest_perf; des_perf = cap_perf; if (target_perf < capacity) @@ -699,10 +706,10 @@ static void amd_pstate_adjust_perf(unsigned int cpu, else min_perf = cap_perf; - if (min_perf < min_limit_perf) - min_perf = min_limit_perf; + if (min_perf < perf.min_limit_perf) + min_perf = perf.min_limit_perf; - max_perf = cpudata->max_limit_perf; + max_perf = perf.max_limit_perf; if (max_perf < min_perf) max_perf = min_perf; @@ -713,11 +720,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu, static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on) { struct amd_cpudata *cpudata = policy->driver_data; + union perf_cached perf = READ_ONCE(cpudata->perf); u32 nominal_freq, max_freq; int ret = 0; nominal_freq = READ_ONCE(cpudata->nominal_freq); - max_freq = perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf)); + max_freq = perf_to_freq(perf, cpudata->nominal_freq, perf.highest_perf); if (on) policy->cpuinfo.max_freq = max_freq; @@ -888,25 +896,24 @@ static u32 amd_pstate_get_transition_latency(unsigned int cpu) } /* - * amd_pstate_init_freq: Initialize the max_freq, min_freq, - * nominal_freq and lowest_nonlinear_freq for - * the @cpudata object. + * amd_pstate_init_freq: Initialize the nominal_freq and lowest_nonlinear_freq + * for the @cpudata object. * - * Requires: highest_perf, lowest_perf, nominal_perf and - * lowest_nonlinear_perf members of @cpudata to be - * initialized. + * Requires: all perf members of @cpudata to be initialized. * - * Returns 0 on success, non-zero value on failure. + * Returns 0 on success, non-zero value on failure. 
*/ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) { - int ret; u32 min_freq, nominal_freq, lowest_nonlinear_freq; struct cppc_perf_caps cppc_perf; + union perf_cached perf; + int ret; ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf); if (ret) return ret; + perf = READ_ONCE(cpudata->perf); if (quirks && quirks->nominal_freq) nominal_freq = quirks->nominal_freq; @@ -918,6 +925,7 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) if (quirks && quirks->lowest_freq) { min_freq = quirks->lowest_freq; + perf.lowest_perf = freq_to_perf(perf, nominal_freq, min_freq); } else min_freq = cppc_perf.lowest_freq; min_freq *= 1000; @@ -934,7 +942,7 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) return -EINVAL; } - lowest_nonlinear_freq = perf_to_freq(cpudata, cpudata->lowest_nonlinear_perf); + lowest_nonlinear_freq = perf_to_freq(perf, nominal_freq, perf.lowest_nonlinear_perf); WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq); if (lowest_nonlinear_freq <= min_freq || lowest_nonlinear_freq > nominal_freq) { @@ -949,6 +957,7 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata) static int amd_pstate_cpu_init(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata; + union perf_cached perf; struct device *dev; int ret; @@ -984,8 +993,14 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy) policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu); policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu); - policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf); - policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf); + perf = READ_ONCE(cpudata->perf); + + policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf, + cpudata->nominal_freq, + perf.lowest_perf); + policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf, + cpudata->nominal_freq, + perf.highest_perf); policy->boost_enabled = READ_ONCE(cpudata->boost_supported); @@ -1066,23 +1081,27 @@ static int amd_pstate_cpu_suspend(struct cpufreq_policy *policy) static ssize_t show_amd_pstate_max_freq(struct cpufreq_policy *policy, char *buf) { - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; + union perf_cached perf; + cpudata = policy->driver_data; + perf = READ_ONCE(cpudata->perf); - return sysfs_emit(buf, "%u\n", perf_to_freq(cpudata, READ_ONCE(cpudata->highest_perf))); + return sysfs_emit(buf, "%u\n", + perf_to_freq(perf, cpudata->nominal_freq, perf.max_limit_perf)); } static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *policy, char *buf) { - int freq; - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; + union perf_cached perf; - freq = READ_ONCE(cpudata->lowest_nonlinear_freq); - if (freq < 0) - return freq; + cpudata = policy->driver_data; + perf = READ_ONCE(cpudata->perf); - return sysfs_emit(buf, "%u\n", freq); + return sysfs_emit(buf, "%u\n", + perf_to_freq(perf, cpudata->nominal_freq, perf.lowest_nonlinear_perf)); } /* @@ -1092,12 +1111,11 @@ static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *poli static ssize_t show_amd_pstate_highest_perf(struct cpufreq_policy *policy, char *buf) { - u8 perf; - struct amd_cpudata *cpudata = policy->driver_data; + struct amd_cpudata *cpudata; - perf = READ_ONCE(cpudata->highest_perf); + cpudata = policy->driver_data; - return sysfs_emit(buf, "%u\n", perf); + return sysfs_emit(buf, "%u\n", 
cpudata->perf.highest_perf); } static ssize_t show_amd_pstate_prefcore_ranking(struct cpufreq_policy *policy, @@ -1428,6 +1446,7 @@ static bool amd_pstate_acpi_pm_profile_undefined(void) static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata; + union perf_cached perf; struct device *dev; u64 value; int ret; @@ -1461,8 +1480,15 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) if (ret) goto free_cpudata1; - policy->cpuinfo.min_freq = policy->min = perf_to_freq(cpudata, cpudata->lowest_perf); - policy->cpuinfo.max_freq = policy->max = perf_to_freq(cpudata, cpudata->highest_perf); + perf = READ_ONCE(cpudata->perf); + + policy->cpuinfo.min_freq = policy->min = perf_to_freq(perf, + cpudata->nominal_freq, + perf.lowest_perf); + policy->cpuinfo.max_freq = policy->max = perf_to_freq(perf, + cpudata->nominal_freq, + perf.highest_perf); + /* It will be updated by governor */ policy->cur = policy->cpuinfo.min_freq; @@ -1523,6 +1549,7 @@ static void amd_pstate_epp_cpu_exit(struct cpufreq_policy *policy) static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata = policy->driver_data; + union perf_cached perf; u8 epp; if (policy->min != cpudata->min_limit_freq || policy->max != cpudata->max_limit_freq) @@ -1533,15 +1560,16 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy) else epp = READ_ONCE(cpudata->epp_cached); + perf = READ_ONCE(cpudata->perf); if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, epp, - cpudata->min_limit_perf, - cpudata->max_limit_perf, + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp, + perf.min_limit_perf, + perf.max_limit_perf, policy->boost_enabled); } - return amd_pstate_update_perf(cpudata, cpudata->min_limit_perf, 0U, - cpudata->max_limit_perf, epp, false); + return amd_pstate_update_perf(cpudata, perf.min_limit_perf, 0U, + perf.max_limit_perf, epp, false); } static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) @@ -1573,20 +1601,18 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy) static int amd_pstate_epp_reenable(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata = policy->driver_data; - u8 max_perf; + union perf_cached perf = READ_ONCE(cpudata->perf); int ret; ret = amd_pstate_cppc_enable(true); if (ret) pr_err("failed to enable amd pstate during resume, return %d\n", ret); - max_perf = READ_ONCE(cpudata->highest_perf); - if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, cpudata->epp_cached, FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), - max_perf, policy->boost_enabled); + perf.highest_perf, policy->boost_enabled); } return amd_pstate_epp_update_limit(policy); @@ -1610,22 +1636,21 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy) static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy) { struct amd_cpudata *cpudata = policy->driver_data; - u8 min_perf; + union perf_cached perf = READ_ONCE(cpudata->perf); if (cpudata->suspended) return 0; - min_perf = READ_ONCE(cpudata->lowest_perf); - guard(mutex)(&amd_pstate_limits_lock); if (trace_amd_pstate_epp_perf_enabled()) { - trace_amd_pstate_epp_perf(cpudata->cpu, cpudata->highest_perf, + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, AMD_CPPC_EPP_BALANCE_POWERSAVE, - min_perf, min_perf, policy->boost_enabled); + 
perf.lowest_perf, perf.lowest_perf, + policy->boost_enabled); } - return amd_pstate_update_perf(cpudata, min_perf, 0, min_perf, + return amd_pstate_update_perf(cpudata, perf.lowest_perf, 0, perf.lowest_perf, AMD_CPPC_EPP_BALANCE_POWERSAVE, false); } diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h index 0149933692458..8421c83c07919 100644 --- a/drivers/cpufreq/amd-pstate.h +++ b/drivers/cpufreq/amd-pstate.h @@ -13,6 +13,34 @@ /********************************************************************* * AMD P-state INTERFACE * *********************************************************************/ + +/** + * union perf_cached - A union to cache performance-related data. + * @highest_perf: the maximum performance an individual processor may reach, + * assuming ideal conditions + * For platforms that do not support the preferred core feature, the + * highest_pef may be configured with 166 or 255, to avoid max frequency + * calculated wrongly. we take the fixed value as the highest_perf. + * @nominal_perf: the maximum sustained performance level of the processor, + * assuming ideal operating conditions + * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power + * savings are achieved + * @lowest_perf: the absolute lowest performance level of the processor + * @min_limit_perf: Cached value of the performance corresponding to policy->min + * @max_limit_perf: Cached value of the performance corresponding to policy->max + */ +union perf_cached { + struct { + u8 highest_perf; + u8 nominal_perf; + u8 lowest_nonlinear_perf; + u8 lowest_perf; + u8 min_limit_perf; + u8 max_limit_perf; + }; + u64 val; +}; + /** * struct amd_aperf_mperf * @aperf: actual performance frequency clock count @@ -30,20 +58,9 @@ struct amd_aperf_mperf { * @cpu: CPU number * @req: constraint request to apply * @cppc_req_cached: cached performance request hints - * @highest_perf: the maximum performance an individual processor may reach, - * assuming ideal conditions - * For platforms that do not support the preferred core feature, the - * highest_pef may be configured with 166 or 255, to avoid max frequency - * calculated wrongly. we take the fixed value as the highest_perf. - * @nominal_perf: the maximum sustained performance level of the processor, - * assuming ideal operating conditions - * @lowest_nonlinear_perf: the lowest performance level at which nonlinear power - * savings are achieved - * @lowest_perf: the absolute lowest performance level of the processor + * @perf: cached performance-related data * @prefcore_ranking: the preferred core ranking, the higher value indicates a higher * priority. 
- * @min_limit_perf: Cached value of the performance corresponding to policy->min - * @max_limit_perf: Cached value of the performance corresponding to policy->max * @min_limit_freq: Cached value of policy->min (in khz) * @max_limit_freq: Cached value of policy->max (in khz) * @nominal_freq: the frequency (in khz) that mapped to nominal_perf @@ -68,13 +85,9 @@ struct amd_cpudata { struct freq_qos_request req[2]; u64 cppc_req_cached; - u8 highest_perf; - u8 nominal_perf; - u8 lowest_nonlinear_perf; - u8 lowest_perf; + union perf_cached perf; + u8 prefcore_ranking; - u8 min_limit_perf; - u8 max_limit_perf; u32 min_limit_freq; u32 max_limit_freq; u32 nominal_freq;
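A minimal standalone C sketch (not the kernel code) of the idea described in patch 04/18 above: the six u8 perf values share storage with a single u64, so one aligned 64-bit load or store moves all of them together. The driver uses READ_ONCE()/WRITE_ONCE() on the union for this; plain assignments and made-up perf numbers stand in for them here.

	#include <stdint.h>
	#include <stdio.h>

	union perf_cached {
		struct {
			uint8_t highest_perf;
			uint8_t nominal_perf;
			uint8_t lowest_nonlinear_perf;
			uint8_t lowest_perf;
			uint8_t min_limit_perf;
			uint8_t max_limit_perf;
		};
		uint64_t val;
	};

	int main(void)
	{
		union perf_cached shared = { .val = 0 };	/* shared copy: one writer, many readers */
		union perf_cached snap;

		/* writer: fill a local copy, then publish it with a single store */
		union perf_cached next = {
			.highest_perf = 228,		/* hypothetical values */
			.nominal_perf = 166,
			.lowest_nonlinear_perf = 90,
			.lowest_perf = 20,
			.min_limit_perf = 20,
			.max_limit_perf = 228,
		};
		shared.val = next.val;		/* driver: WRITE_ONCE(cpudata->perf, perf) */

		/* reader: a single load yields a mutually consistent set of fields */
		snap.val = shared.val;		/* driver: perf = READ_ONCE(cpudata->perf) */
		printf("highest=%u nominal=%u lowest=%u\n",
		       (unsigned)snap.highest_perf, (unsigned)snap.nominal_perf,
		       (unsigned)snap.lowest_perf);
		return 0;
	}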
From patchwork Mon Feb 17 22:06:55 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866027
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar
Subject: [PATCH v3 06/18] cpufreq/amd-pstate: Drop `cppc_cap1_cached`
Date: Mon, 17 Feb 2025 16:06:55 -0600
Message-ID: <20250217220707.1468365-7-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

The `cppc_cap1_cached` variable isn't used at all, so there is no need to
read it at initialization for each CPU.

Reviewed-by: Gautham R. Shenoy
Reviewed-by: Dhananjay Ugwekar
Signed-off-by: Mario Limonciello
--- drivers/cpufreq/amd-pstate.c | 5 ----- drivers/cpufreq/amd-pstate.h | 2 -- 2 files changed, 7 deletions(-) diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 92ef2e6612a62..c6934c9730bee 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -1511,11 +1511,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) if (ret) return ret; WRITE_ONCE(cpudata->cppc_req_cached, value); - - ret = rdmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_CAP1, &value); - if (ret) - return ret; - WRITE_ONCE(cpudata->cppc_cap1_cached, value); } ret = amd_pstate_set_epp(cpudata, cpudata->epp_default); if (ret) diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h index 8421c83c07919..1a52582dbac9d 100644 --- a/drivers/cpufreq/amd-pstate.h +++ b/drivers/cpufreq/amd-pstate.h @@ -74,7 +74,6 @@ struct amd_aperf_mperf { * AMD P-State driver supports preferred core featue. * @epp_cached: Cached CPPC energy-performance preference value * @policy: Cpufreq policy value - * @cppc_cap1_cached Cached MSR_AMD_CPPC_CAP1 register value * * The amd_cpudata is key private data for each CPU thread in AMD P-State, and * represents all the attributes and goals that AMD P-State requests at runtime. @@ -103,7 +102,6 @@ struct amd_cpudata { /* EPP feature related attributes*/ u8 epp_cached; u32 policy; - u64 cppc_cap1_cached; bool suspended; u8 epp_default; };
From patchwork Mon Feb 17 22:06:59 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866026
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello
Subject: [PATCH v3 10/18] cpufreq/amd-pstate-ut: Run on all of the correct CPUs
Date: Mon, 17 Feb 2025 16:06:59 -0600
Message-ID: <20250217220707.1468365-11-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

If a CPU is missing a policy or one has been offlined then the unit test
is skipped for the rest of the CPUs on the system. Instead, iterate over
the online CPUs and skip any missing policies so the rest of them can
still be tested.

Signed-off-by: Mario Limonciello
--- v3: * Only check online CPUs * Update commit message --- drivers/cpufreq/amd-pstate-ut.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 028527a0019ca..3a541780f374f 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -116,12 +116,12 @@ static int amd_pstate_ut_check_perf(u32 index) struct amd_cpudata *cpudata = NULL; union perf_cached cur_perf; - for_each_possible_cpu(cpu) { + for_each_online_cpu(cpu) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; policy = cpufreq_cpu_get(cpu); if (!policy) - break; + continue; cpudata = policy->driver_data; if (get_shared_mem()) { @@ -188,12 +188,12 @@ static int amd_pstate_ut_check_freq(u32 index) int cpu = 0; struct amd_cpudata *cpudata = NULL; - for_each_possible_cpu(cpu) { + for_each_online_cpu(cpu) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; policy = cpufreq_cpu_get(cpu); if (!policy) - break; + continue; cpudata = policy->driver_data; if (!((policy->cpuinfo.max_freq >= cpudata->nominal_freq) &&
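A small standalone C sketch (not the kernel unit test, and with an invented set of policies) of the break-versus-continue point made in the commit message above: skipping a missing entry with continue lets the remaining CPUs still be checked, whereas the old break ended the scan at the first gap.

	#include <stddef.h>
	#include <stdio.h>

	struct policy { int cpu; };

	int main(void)
	{
		struct policy p0 = { .cpu = 0 }, p2 = { .cpu = 2 }, p3 = { .cpu = 3 };
		/* CPU 1 has no policy, e.g. because it is offline */
		struct policy *policies[] = { &p0, NULL, &p2, &p3 };
		size_t i;

		for (i = 0; i < sizeof(policies) / sizeof(policies[0]); i++) {
			if (!policies[i])
				continue;	/* a break here would also skip CPUs 2 and 3 */
			printf("checking cpu%d\n", policies[i]->cpu);
		}
		return 0;
	}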
From patchwork Mon Feb 17 22:07:00 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866025
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello
Subject: [PATCH v3 11/18] cpufreq/amd-pstate-ut: Adjust variable scope for amd_pstate_ut_check_freq()
Date: Mon, 17 Feb 2025 16:07:00 -0600
Message-ID: <20250217220707.1468365-12-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

The cpudata variable is only needed in the scope of the for loop. Move it
there.

Reviewed-by: Gautham R. Shenoy
Signed-off-by: Mario Limonciello
--- drivers/cpufreq/amd-pstate-ut.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 3a541780f374f..6b04b5b54b3b5 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -186,10 +186,10 @@ static int amd_pstate_ut_check_freq(u32 index) { int cpu = 0; - struct amd_cpudata *cpudata = NULL; for_each_online_cpu(cpu) { struct cpufreq_policy *policy __free(put_cpufreq_policy) = NULL; + struct amd_cpudata *cpudata; policy = cpufreq_cpu_get(cpu); if (!policy)

From patchwork Mon Feb 17 22:07:01 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866024
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar
Subject: [PATCH v3 12/18] cpufreq/amd-pstate: Replace all AMD_CPPC_* macros with masks
Date: Mon, 17 Feb 2025 16:07:01 -0600
Message-ID: <20250217220707.1468365-13-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

Bitfield masks are easier to follow and less error prone.

Reviewed-by: Dhananjay Ugwekar
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Mario Limonciello
--- v3: * Add tag * Add missing includes v2: * Add a comment in msr-index.h * Pick up tag --- arch/x86/include/asm/msr-index.h | 20 +++++++++++--------- arch/x86/kernel/acpi/cppc.c | 4 +++- drivers/cpufreq/amd-pstate-ut.c | 9 +++++---- drivers/cpufreq/amd-pstate.c | 16 ++++++---------- 4 files changed, 25 insertions(+), 24 deletions(-) diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h index c84930610c7e6..cfcc49e5cf925 100644 --- a/arch/x86/include/asm/msr-index.h +++ b/arch/x86/include/asm/msr-index.h @@ -701,15 +701,17 @@ #define MSR_AMD_CPPC_REQ 0xc00102b3 #define MSR_AMD_CPPC_STATUS 0xc00102b4 -#define AMD_CPPC_LOWEST_PERF(x) (((x) >> 0) & 0xff) -#define AMD_CPPC_LOWNONLIN_PERF(x) (((x) >> 8) & 0xff) -#define AMD_CPPC_NOMINAL_PERF(x) (((x) >> 16) & 0xff) -#define AMD_CPPC_HIGHEST_PERF(x) (((x) >> 24) & 0xff) - -#define AMD_CPPC_MAX_PERF(x) (((x) & 0xff) << 0) -#define AMD_CPPC_MIN_PERF(x) (((x) & 0xff) << 8) -#define AMD_CPPC_DES_PERF(x) (((x) & 0xff) << 16) -#define AMD_CPPC_ENERGY_PERF_PREF(x) (((x) & 0xff) << 24) +/* Masks for use with MSR_AMD_CPPC_CAP1 */ +#define AMD_CPPC_LOWEST_PERF_MASK GENMASK(7, 0) +#define AMD_CPPC_LOWNONLIN_PERF_MASK GENMASK(15, 8) +#define AMD_CPPC_NOMINAL_PERF_MASK GENMASK(23, 16) +#define AMD_CPPC_HIGHEST_PERF_MASK GENMASK(31, 24) + +/* Masks for use with MSR_AMD_CPPC_REQ */ +#define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0) +#define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8) +#define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16) +#define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24) /* AMD Performance Counter Global Status and Control MSRs */ #define MSR_AMD64_PERF_CNTR_GLOBAL_STATUS 0xc0000300 diff --git a/arch/x86/kernel/acpi/cppc.c b/arch/x86/kernel/acpi/cppc.c index d745dd586303c..77bfb846490c0 100644 --- a/arch/x86/kernel/acpi/cppc.c +++ b/arch/x86/kernel/acpi/cppc.c @@ -4,6 +4,8 @@ * Copyright (c) 2016, Intel Corporation.
*/ +#include + #include #include #include @@ -149,7 +151,7 @@ int amd_get_highest_perf(unsigned int cpu, u32 *highest_perf) if (ret) goto out; - val = AMD_CPPC_HIGHEST_PERF(val); + val = FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, val); } else { ret = cppc_get_highest_perf(cpu, &val); if (ret) diff --git a/drivers/cpufreq/amd-pstate-ut.c b/drivers/cpufreq/amd-pstate-ut.c index 6b04b5b54b3b5..9a2de25a4b749 100644 --- a/drivers/cpufreq/amd-pstate-ut.c +++ b/drivers/cpufreq/amd-pstate-ut.c @@ -22,6 +22,7 @@ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt +#include #include #include #include @@ -142,10 +143,10 @@ static int amd_pstate_ut_check_perf(u32 index) return ret; } - highest_perf = AMD_CPPC_HIGHEST_PERF(cap1); - nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1); - lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1); - lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); + highest_perf = FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, cap1); + nominal_perf = FIELD_GET(AMD_CPPC_NOMINAL_PERF_MASK, cap1); + lowest_nonlinear_perf = FIELD_GET(AMD_CPPC_LOWNONLIN_PERF_MASK, cap1); + lowest_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1); } cur_perf = READ_ONCE(cpudata->perf); diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index c6934c9730bee..2c8f6e92ec8a8 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -89,11 +89,6 @@ static bool cppc_enabled; static bool amd_pstate_prefcore = true; static struct quirk_entry *quirks; -#define AMD_CPPC_MAX_PERF_MASK GENMASK(7, 0) -#define AMD_CPPC_MIN_PERF_MASK GENMASK(15, 8) -#define AMD_CPPC_DES_PERF_MASK GENMASK(23, 16) -#define AMD_CPPC_EPP_PERF_MASK GENMASK(31, 24) - /* * AMD Energy Preference Performance (EPP) * The EPP is used in the CCLK DPM controller to drive @@ -439,12 +434,13 @@ static int msr_init_perf(struct amd_cpudata *cpudata) perf.highest_perf = numerator; perf.max_limit_perf = numerator; - perf.min_limit_perf = AMD_CPPC_LOWEST_PERF(cap1); - perf.nominal_perf = AMD_CPPC_NOMINAL_PERF(cap1); - perf.lowest_nonlinear_perf = AMD_CPPC_LOWNONLIN_PERF(cap1); - perf.lowest_perf = AMD_CPPC_LOWEST_PERF(cap1); + perf.min_limit_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1); + perf.nominal_perf = FIELD_GET(AMD_CPPC_NOMINAL_PERF_MASK, cap1); + perf.lowest_nonlinear_perf = FIELD_GET(AMD_CPPC_LOWNONLIN_PERF_MASK, cap1); + perf.lowest_perf = FIELD_GET(AMD_CPPC_LOWEST_PERF_MASK, cap1); WRITE_ONCE(cpudata->perf, perf); - WRITE_ONCE(cpudata->prefcore_ranking, AMD_CPPC_HIGHEST_PERF(cap1)); + WRITE_ONCE(cpudata->prefcore_ranking, FIELD_GET(AMD_CPPC_HIGHEST_PERF_MASK, cap1)); + return 0; }
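A standalone C illustration (not kernel code) of the mask-based decoding this patch switches to. In the kernel the masks come from GENMASK() and the extraction from FIELD_GET(); here equivalent hexadecimal constants and a small helper reproduce the same arithmetic on a hypothetical CPPC_CAP1 value.

	#include <stdint.h>
	#include <stdio.h>

	/* same field layout as the MSR_AMD_CPPC_CAP1 masks in the patch */
	#define CPPC_LOWEST_PERF_MASK		0x000000ffu	/* GENMASK(7, 0)   */
	#define CPPC_LOWNONLIN_PERF_MASK	0x0000ff00u	/* GENMASK(15, 8)  */
	#define CPPC_NOMINAL_PERF_MASK		0x00ff0000u	/* GENMASK(23, 16) */
	#define CPPC_HIGHEST_PERF_MASK		0xff000000u	/* GENMASK(31, 24) */

	/* userspace stand-in for FIELD_GET(): mask the value, then shift it down to bit 0 */
	static uint32_t field_get(uint32_t mask, uint32_t val)
	{
		unsigned int shift = 0;

		while (!((mask >> shift) & 1))
			shift++;
		return (val & mask) >> shift;
	}

	int main(void)
	{
		uint32_t cap1 = 0xe4a65a14;	/* hypothetical register value */

		printf("lowest=%u nonlinear=%u nominal=%u highest=%u\n",
		       field_get(CPPC_LOWEST_PERF_MASK, cap1),
		       field_get(CPPC_LOWNONLIN_PERF_MASK, cap1),
		       field_get(CPPC_NOMINAL_PERF_MASK, cap1),
		       field_get(CPPC_HIGHEST_PERF_MASK, cap1));
		return 0;
	}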
From patchwork Mon Feb 17 22:07:03 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866023
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar
Subject: [PATCH v3 14/18] cpufreq/amd-pstate: Move all EPP tracing into *_update_perf and *_set_epp functions
Date: Mon, 17 Feb 2025 16:07:03 -0600
Message-ID: <20250217220707.1468365-15-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

The EPP tracing is done by the caller today, but this precludes recording
whether the CPPC request has actually changed. Move it into the
update_perf and set_epp functions and include information about whether
the request has changed from the last one. amd_pstate_update_perf() and
amd_pstate_set_epp() now require the policy as an argument instead of the
cpudata.

Reviewed-by: Dhananjay Ugwekar
Reviewed-by: Gautham R.
Shenoy Signed-off-by: Mario Limonciello --- v3: * Add tag * Update commit message --- drivers/cpufreq/amd-pstate-trace.h | 13 +++- drivers/cpufreq/amd-pstate.c | 116 ++++++++++++++++++----------- 2 files changed, 80 insertions(+), 49 deletions(-) diff --git a/drivers/cpufreq/amd-pstate-trace.h b/drivers/cpufreq/amd-pstate-trace.h index f457d4af2c62e..32e1bdc588c52 100644 --- a/drivers/cpufreq/amd-pstate-trace.h +++ b/drivers/cpufreq/amd-pstate-trace.h @@ -90,7 +90,8 @@ TRACE_EVENT(amd_pstate_epp_perf, u8 epp, u8 min_perf, u8 max_perf, - bool boost + bool boost, + bool changed ), TP_ARGS(cpu_id, @@ -98,7 +99,8 @@ TRACE_EVENT(amd_pstate_epp_perf, epp, min_perf, max_perf, - boost), + boost, + changed), TP_STRUCT__entry( __field(unsigned int, cpu_id) @@ -107,6 +109,7 @@ TRACE_EVENT(amd_pstate_epp_perf, __field(u8, min_perf) __field(u8, max_perf) __field(bool, boost) + __field(bool, changed) ), TP_fast_assign( @@ -116,15 +119,17 @@ TRACE_EVENT(amd_pstate_epp_perf, __entry->min_perf = min_perf; __entry->max_perf = max_perf; __entry->boost = boost; + __entry->changed = changed; ), - TP_printk("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u", + TP_printk("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u, changed=%u", (unsigned int)__entry->cpu_id, (u8)__entry->min_perf, (u8)__entry->max_perf, (u8)__entry->highest_perf, (u8)__entry->epp, - (bool)__entry->boost + (bool)__entry->boost, + (bool)__entry->changed ) ); diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c index 4eb3ba6dfdbd9..eb54960f313be 100644 --- a/drivers/cpufreq/amd-pstate.c +++ b/drivers/cpufreq/amd-pstate.c @@ -228,9 +228,10 @@ static u8 shmem_get_epp(struct amd_cpudata *cpudata) return FIELD_GET(AMD_CPPC_EPP_PERF_MASK, epp); } -static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf, +static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf, u8 des_perf, u8 max_perf, u8 epp, bool fast_switch) { + struct amd_cpudata *cpudata = policy->driver_data; u64 value, prev; value = prev = READ_ONCE(cpudata->cppc_req_cached); @@ -242,6 +243,18 @@ static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf, value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf); value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = READ_ONCE(cpudata->perf); + + trace_amd_pstate_epp_perf(cpudata->cpu, + perf.highest_perf, + epp, + min_perf, + max_perf, + policy->boost_enabled, + value != prev); + } + if (value == prev) return 0; @@ -256,24 +269,26 @@ static int msr_update_perf(struct amd_cpudata *cpudata, u8 min_perf, } WRITE_ONCE(cpudata->cppc_req_cached, value); - WRITE_ONCE(cpudata->epp_cached, epp); + if (epp != cpudata->epp_cached) + WRITE_ONCE(cpudata->epp_cached, epp); return 0; } DEFINE_STATIC_CALL(amd_pstate_update_perf, msr_update_perf); -static inline int amd_pstate_update_perf(struct amd_cpudata *cpudata, +static inline int amd_pstate_update_perf(struct cpufreq_policy *policy, u8 min_perf, u8 des_perf, u8 max_perf, u8 epp, bool fast_switch) { - return static_call(amd_pstate_update_perf)(cpudata, min_perf, des_perf, + return static_call(amd_pstate_update_perf)(policy, min_perf, des_perf, max_perf, epp, fast_switch); } -static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp) +static int msr_set_epp(struct cpufreq_policy *policy, u8 epp) { + struct amd_cpudata *cpudata = policy->driver_data; u64 value, prev; int ret; @@ -281,6 +296,19 @@ static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp) value &= ~AMD_CPPC_EPP_PERF_MASK; 
value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = cpudata->perf; + + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, + epp, + FIELD_GET(AMD_CPPC_MIN_PERF_MASK, + cpudata->cppc_req_cached), + FIELD_GET(AMD_CPPC_MAX_PERF_MASK, + cpudata->cppc_req_cached), + policy->boost_enabled, + value != prev); + } + if (value == prev) return 0; @@ -299,15 +327,29 @@ static int msr_set_epp(struct amd_cpudata *cpudata, u8 epp) DEFINE_STATIC_CALL(amd_pstate_set_epp, msr_set_epp); -static inline int amd_pstate_set_epp(struct amd_cpudata *cpudata, u8 epp) +static inline int amd_pstate_set_epp(struct cpufreq_policy *policy, u8 epp) { - return static_call(amd_pstate_set_epp)(cpudata, epp); + return static_call(amd_pstate_set_epp)(policy, epp); } -static int shmem_set_epp(struct amd_cpudata *cpudata, u8 epp) +static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp) { - int ret; + struct amd_cpudata *cpudata = policy->driver_data; struct cppc_perf_ctrls perf_ctrls; + int ret; + + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = cpudata->perf; + + trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, + epp, + FIELD_GET(AMD_CPPC_MIN_PERF_MASK, + cpudata->cppc_req_cached), + FIELD_GET(AMD_CPPC_MAX_PERF_MASK, + cpudata->cppc_req_cached), + policy->boost_enabled, + epp != cpudata->epp_cached); + } if (epp == cpudata->epp_cached) return 0; @@ -339,17 +381,7 @@ static int amd_pstate_set_energy_pref_index(struct cpufreq_policy *policy, return -EBUSY; } - if (trace_amd_pstate_epp_perf_enabled()) { - union perf_cached perf = READ_ONCE(cpudata->perf); - - trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, - epp, - FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached), - FIELD_GET(AMD_CPPC_MAX_PERF_MASK, cpudata->cppc_req_cached), - policy->boost_enabled); - } - - return amd_pstate_set_epp(cpudata, epp); + return amd_pstate_set_epp(policy, epp); } static inline int msr_cppc_enable(bool enable) @@ -492,15 +524,16 @@ static inline int amd_pstate_init_perf(struct amd_cpudata *cpudata) return static_call(amd_pstate_init_perf)(cpudata); } -static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf, +static int shmem_update_perf(struct cpufreq_policy *policy, u8 min_perf, u8 des_perf, u8 max_perf, u8 epp, bool fast_switch) { + struct amd_cpudata *cpudata = policy->driver_data; struct cppc_perf_ctrls perf_ctrls; u64 value, prev; int ret; if (cppc_state == AMD_PSTATE_ACTIVE) { - int ret = shmem_set_epp(cpudata, epp); + int ret = shmem_set_epp(policy, epp); if (ret) return ret; @@ -515,6 +548,18 @@ static int shmem_update_perf(struct amd_cpudata *cpudata, u8 min_perf, value |= FIELD_PREP(AMD_CPPC_MIN_PERF_MASK, min_perf); value |= FIELD_PREP(AMD_CPPC_EPP_PERF_MASK, epp); + if (trace_amd_pstate_epp_perf_enabled()) { + union perf_cached perf = READ_ONCE(cpudata->perf); + + trace_amd_pstate_epp_perf(cpudata->cpu, + perf.highest_perf, + epp, + min_perf, + max_perf, + policy->boost_enabled, + value != prev); + } + if (value == prev) return 0; @@ -592,7 +637,7 @@ static void amd_pstate_update(struct amd_cpudata *cpudata, u8 min_perf, cpudata->cpu, fast_switch); } - amd_pstate_update_perf(cpudata, min_perf, des_perf, max_perf, 0, fast_switch); + amd_pstate_update_perf(policy, min_perf, des_perf, max_perf, 0, fast_switch); } static int amd_pstate_verify(struct cpufreq_policy_data *policy_data) @@ -1528,7 +1573,7 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy) return ret; 
                WRITE_ONCE(cpudata->cppc_req_cached, value);
        }

-       ret = amd_pstate_set_epp(cpudata, cpudata->epp_default);
+       ret = amd_pstate_set_epp(policy, cpudata->epp_default);
        if (ret)
                return ret;

@@ -1569,14 +1614,8 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
                epp = READ_ONCE(cpudata->epp_cached);

        perf = READ_ONCE(cpudata->perf);

-       if (trace_amd_pstate_epp_perf_enabled()) {
-               trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf, epp,
-                                         perf.min_limit_perf,
-                                         perf.max_limit_perf,
-                                         policy->boost_enabled);
-       }
-
-       return amd_pstate_update_perf(cpudata, perf.min_limit_perf, 0U,
+       return amd_pstate_update_perf(policy, perf.min_limit_perf, 0U,
                                      perf.max_limit_perf, epp, false);
 }

@@ -1616,12 +1655,6 @@ static int amd_pstate_epp_reenable(struct cpufreq_policy *policy)
        if (ret)
                pr_err("failed to enable amd pstate during resume, return %d\n", ret);

-       if (trace_amd_pstate_epp_perf_enabled()) {
-               trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
-                                         cpudata->epp_cached,
-                                         FIELD_GET(AMD_CPPC_MIN_PERF_MASK, cpudata->cppc_req_cached),
-                                         perf.highest_perf, policy->boost_enabled);
-       }

        return amd_pstate_epp_update_limit(policy);
 }

@@ -1649,14 +1682,7 @@ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
        if (cpudata->suspended)
                return 0;

-       if (trace_amd_pstate_epp_perf_enabled()) {
-               trace_amd_pstate_epp_perf(cpudata->cpu, perf.highest_perf,
-                                         AMD_CPPC_EPP_BALANCE_POWERSAVE,
-                                         perf.lowest_perf, perf.lowest_perf,
-                                         policy->boost_enabled);
-       }
-
-       return amd_pstate_update_perf(cpudata, perf.lowest_perf, 0, perf.lowest_perf,
+       return amd_pstate_update_perf(policy, perf.lowest_perf, 0, perf.lowest_perf,
                                      AMD_CPPC_EPP_BALANCE_POWERSAVE, false);
 }
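The hunks above all converge on one pattern: rebuild the full CPPC request word, emit the trace event right before the hardware write, and skip the write when the value did not change. The following is a minimal, self-contained sketch of that pattern outside the driver; the field positions, the FIELD() helper, and the trace stub are illustrative stand-ins, not the driver's AMD_CPPC_*_MASK/FIELD_PREP definitions or the real tracepoint.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative field positions only; the driver packs these with FIELD_PREP(). */
#define MIN_PERF_SHIFT  0
#define MAX_PERF_SHIFT  8
#define EPP_SHIFT       24
#define FIELD(v, shift) (((uint64_t)(v) & 0xffULL) << (shift))

static uint64_t cppc_req_cached;        /* mirrors cpudata->cppc_req_cached */

/* Stand-in for trace_amd_pstate_epp_perf(). */
static void trace_epp_perf(uint8_t epp, uint8_t min, uint8_t max, bool changed)
{
        printf("epp=%u min=%u max=%u changed=%u\n",
               (unsigned int)epp, (unsigned int)min, (unsigned int)max,
               (unsigned int)changed);
}

static int update_perf(uint8_t min_perf, uint8_t max_perf, uint8_t epp)
{
        uint64_t prev = cppc_req_cached;
        uint64_t value = FIELD(min_perf, MIN_PERF_SHIFT) |
                         FIELD(max_perf, MAX_PERF_SHIFT) |
                         FIELD(epp, EPP_SHIFT);

        /* The trace fires before the write and records whether anything changed. */
        trace_epp_perf(epp, min_perf, max_perf, value != prev);

        if (value == prev)
                return 0;       /* nothing new to program, skip the register write */

        /* the MSR or shared-memory write would happen here in the driver */
        cppc_req_cached = value;
        return 0;
}

int main(void)
{
        update_perf(10, 200, 128);      /* programs the value, traces changed=1 */
        update_perf(10, 200, 128);      /* traces changed=0 and skips the write */
        return 0;
}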
From patchwork Mon Feb 17 22:07:05 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866022
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar
Subject: [PATCH v3 16/18] cpufreq/amd-pstate: Drop debug statements for policy setting
Date: Mon, 17 Feb 2025 16:07:05 -0600
Message-ID: <20250217220707.1468365-17-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

Trace events now exist for all amd-pstate modes and emit the relevant
information right before the hardware is programmed. That makes the
existing debug statements unnecessary overhead. Drop them.

Reviewed-by: Dhananjay Ugwekar
Reviewed-by: Gautham R. Shenoy
Signed-off-by: Mario Limonciello
---
 drivers/cpufreq/amd-pstate.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 1f4c9d7fe28f5..9985edf9d0d55 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -667,7 +667,6 @@ static int amd_pstate_verify(struct cpufreq_policy_data *policy_data)
        }

        cpufreq_verify_within_cpu_limits(policy_data);
-       pr_debug("policy_max =%d, policy_min=%d\n", policy_data->max, policy_data->min);

        return 0;
 }
@@ -1633,9 +1632,6 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
        if (!policy->cpuinfo.max_freq)
                return -ENODEV;

-       pr_debug("set_policy: cpuinfo.max %u policy->max %u\n",
-                policy->cpuinfo.max_freq, policy->max);
-
        cpudata->policy = policy->policy;

        ret = amd_pstate_epp_update_limit(policy);
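For a sense of what replaces the dropped pr_debug() output, the amd_pstate_epp_perf event added earlier in this series already records the perf limits, EPP, and boost state immediately before the hardware write. Below is a rough stand-in in plain C that reuses the event's TP_printk() format string; the boolean flag here is only a stand-in for the kernel's static-key-backed trace_amd_pstate_epp_perf_enabled() check, which is cheaper than shown.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in for the jump-label backed trace_amd_pstate_epp_perf_enabled(). */
static bool epp_perf_event_enabled;

/* Mirrors the event's TP_printk() format from amd-pstate-trace.h. */
static void trace_epp_perf(unsigned int cpu, uint8_t highest, uint8_t epp,
                           uint8_t min, uint8_t max, bool boost, bool changed)
{
        if (!epp_perf_event_enabled)
                return;         /* nothing is formatted or recorded when tracing is off */

        printf("cpu%u: [%hhu<->%hhu]/%hhu, epp=%hhu, boost=%u, changed=%u\n",
               cpu, min, max, highest, epp,
               (unsigned int)boost, (unsigned int)changed);
}

int main(void)
{
        epp_perf_event_enabled = true;
        trace_epp_perf(0, 255, 128, 10, 200, true, true);
        return 0;
}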
From patchwork Mon Feb 17 22:07:07 2025
X-Patchwork-Submitter: Mario Limonciello
X-Patchwork-Id: 866021
From: Mario Limonciello
To: "Gautham R . Shenoy" , Perry Yuan
Cc: Dhananjay Ugwekar , linux-kernel@vger.kernel.org (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT)), linux-pm@vger.kernel.org (open list:CPU FREQUENCY SCALING FRAMEWORK), Mario Limonciello , Dhananjay Ugwekar
Subject: [PATCH v3 18/18] cpufreq/amd-pstate: Stop caching EPP
Date: Mon, 17 Feb 2025 16:07:07 -0600
Message-ID: <20250217220707.1468365-19-superm1@kernel.org>
In-Reply-To: <20250217220707.1468365-1-superm1@kernel.org>
References: <20250217220707.1468365-1-superm1@kernel.org>

From: Mario Limonciello

EPP values are cached in the cpudata structure per CPU. This is
needless, though, because they are also cached in the CPPC request
variable. Drop the separate cache for EPP values and always reference
the CPPC request variable when needed.
Reviewed-by: Dhananjay Ugwekar
Signed-off-by: Mario Limonciello
---
 drivers/cpufreq/amd-pstate.c | 27 ++++++++++++++-------------
 drivers/cpufreq/amd-pstate.h |  1 -
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 4660dd3f04796..48ec5e6527c68 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -268,8 +268,6 @@ static int msr_update_perf(struct cpufreq_policy *policy, u8 min_perf,
        }

        WRITE_ONCE(cpudata->cppc_req_cached, value);
-       if (epp != cpudata->epp_cached)
-               WRITE_ONCE(cpudata->epp_cached, epp);

        return 0;
 }
@@ -318,7 +316,6 @@ static int msr_set_epp(struct cpufreq_policy *policy, u8 epp)
        }

        /* update both so that msr_update_perf() can effectively check */
-       WRITE_ONCE(cpudata->epp_cached, epp);
        WRITE_ONCE(cpudata->cppc_req_cached, value);

        return ret;
@@ -335,9 +332,12 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
 {
        struct amd_cpudata *cpudata = policy->driver_data;
        struct cppc_perf_ctrls perf_ctrls;
+       u8 epp_cached;
        u64 value;
        int ret;
+
+       epp_cached = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached);

        if (trace_amd_pstate_epp_perf_enabled()) {
                union perf_cached perf = cpudata->perf;
@@ -348,10 +348,10 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
                                          FIELD_GET(AMD_CPPC_MAX_PERF_MASK,
                                                    cpudata->cppc_req_cached),
                                          policy->boost_enabled,
-                                         epp != cpudata->epp_cached);
+                                         epp != epp_cached);
        }

-       if (epp == cpudata->epp_cached)
+       if (epp == epp_cached)
                return 0;

        perf_ctrls.energy_perf = epp;
@@ -360,7 +360,6 @@ static int shmem_set_epp(struct cpufreq_policy *policy, u8 epp)
                pr_debug("failed to set energy perf value (%d)\n", ret);
                return ret;
        }
-       WRITE_ONCE(cpudata->epp_cached, epp);

        value = READ_ONCE(cpudata->cppc_req_cached);
        value &= ~AMD_CPPC_EPP_PERF_MASK;
@@ -1203,9 +1202,11 @@ static ssize_t show_energy_performance_preference(
                                struct cpufreq_policy *policy, char *buf)
 {
        struct amd_cpudata *cpudata = policy->driver_data;
-       u8 preference;
+       u8 preference, epp;
+
+       epp = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached);

-       switch (cpudata->epp_cached) {
+       switch (epp) {
        case AMD_CPPC_EPP_PERFORMANCE:
                preference = EPP_INDEX_PERFORMANCE;
                break;
@@ -1568,7 +1569,7 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
        if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
                epp = 0;
        else
-               epp = READ_ONCE(cpudata->epp_cached);
+               epp = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached);

        perf = READ_ONCE(cpudata->perf);

@@ -1604,22 +1605,22 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
        struct amd_cpudata *cpudata = policy->driver_data;
        union perf_cached perf = READ_ONCE(cpudata->perf);
        int ret;
+       u8 epp;
+
+       epp = FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cpudata->cppc_req_cached);

        pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);

        ret = amd_pstate_cppc_enable(policy);
        if (ret)
                return ret;
-
-
-       ret = amd_pstate_update_perf(policy, 0, 0, perf.highest_perf, cpudata->epp_cached, false);
+       ret = amd_pstate_update_perf(policy, 0, 0, perf.highest_perf, epp, false);
        if (ret)
                return ret;

        cpudata->suspended = false;

        return 0;
-
 }

 static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
diff --git a/drivers/cpufreq/amd-pstate.h b/drivers/cpufreq/amd-pstate.h
index 1a52582dbac9d..13918853f0a82 100644
--- a/drivers/cpufreq/amd-pstate.h
+++ b/drivers/cpufreq/amd-pstate.h
@@ -100,7 +100,6 @@ struct amd_cpudata {
        bool    hw_prefcore;

        /* EPP feature related attributes*/
-       u8      epp_cached;
        u32     policy;
        bool    suspended;
        u8      epp_default;
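After this patch the EPP value has a single home, the cached CPPC request word, and every reader extracts it from there with FIELD_GET(AMD_CPPC_EPP_PERF_MASK, ...). The sketch below restates that lookup as standalone C; the mask and shift are illustrative stand-ins for the driver's AMD_CPPC_EPP_PERF_MASK, and the two helpers are hypothetical, not driver functions.

#include <stdint.h>
#include <stdio.h>

/* Illustrative layout: EPP occupies one byte of the 64-bit CPPC request word. */
#define EPP_MASK   0xff000000ULL
#define EPP_SHIFT  24

static uint64_t cppc_req_cached;        /* mirrors cpudata->cppc_req_cached */

/* Roughly what FIELD_GET(AMD_CPPC_EPP_PERF_MASK, cppc_req_cached) computes. */
static uint8_t get_epp(void)
{
        return (uint8_t)((cppc_req_cached & EPP_MASK) >> EPP_SHIFT);
}

/* Writers update only the request word, so no second copy can go stale. */
static void set_epp(uint8_t epp)
{
        cppc_req_cached = (cppc_req_cached & ~EPP_MASK) |
                          ((uint64_t)epp << EPP_SHIFT);
}

int main(void)
{
        set_epp(128);
        printf("epp=%u\n", (unsigned int)get_epp());    /* prints epp=128 */
        return 0;
}

The upside is the one the commit message names: the request word is already cached for the MSR and shared-memory write paths, so a separate epp_cached field only created an opportunity for the two copies to disagree.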