| Message ID | 20241219201833.2750998-1-naresh.solanki@9elements.com |
|---|---|
| State | Accepted |
| Commit | 857a61c2ce74e30fc3b10bc89d68ddd8d05b188c |
| Series | [v3] cpufreq/amd-pstate: Refactor max frequency calculation |
On 12/19/2024 14:18, Naresh Solanki wrote:
> The previous approach introduced roundoff errors during division when
> calculating the boost ratio. This, in turn, affected the maximum
> frequency calculation, often resulting in reporting lower frequency
> values.
>
> For example, on the Glinda SoC based board with the following
> parameters:
>
> max_perf = 208
> nominal_perf = 100
> nominal_freq = 2600 MHz
>
> The Linux kernel previously calculated the frequency as:
> freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
> freq = 5405 MHz // Integer arithmetic.
>
> With the updated formula:
> freq = (max_perf * nominal_freq) / nominal_perf
> freq = 5408 MHz
>
> This change ensures more accurate frequency calculations by eliminating
> unnecessary shifts and divisions, thereby improving precision.
>
> Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>
>
> Changes in V3:
> 1. Also update the same for lowest_nonlinear_freq
>
> Changes in V2:
> 1. Rebase on superm1.git/linux-next branch
> ---

Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>

BTW - Over the holiday I added this to my bleeding-edge branch [1] and
have done testing with it.

I will be including it in my next 6.14 PR, thanks!

[1] https://git.kernel.org/pub/scm/linux/kernel/git/superm1/linux.git/commit/?h=bleeding-edge&id=857a61c2ce74e30fc3b10bc89d68ddd8d05b188c

> drivers/cpufreq/amd-pstate.c | 13 ++++---------
> 1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
> index d7b1de97727a..6f6f3220ffe4 100644
> --- a/drivers/cpufreq/amd-pstate.c
> +++ b/drivers/cpufreq/amd-pstate.c
> @@ -908,9 +908,8 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>  {
>  	int ret;
>  	u32 min_freq, max_freq;
> -	u32 nominal_perf, nominal_freq;
> +	u32 highest_perf, nominal_perf, nominal_freq;
>  	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
> -	u32 boost_ratio, lowest_nonlinear_ratio;
>  	struct cppc_perf_caps cppc_perf;
>
>  	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
> @@ -927,16 +926,12 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
>  	else
>  		nominal_freq = cppc_perf.nominal_freq;
>
> +	highest_perf = READ_ONCE(cpudata->highest_perf);
>  	nominal_perf = READ_ONCE(cpudata->nominal_perf);
> -
> -	boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
> -	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
> +	max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);
>
>  	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
> -	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
> -					 nominal_perf);
> -	lowest_nonlinear_freq = (nominal_freq * lowest_nonlinear_ratio >> SCHED_CAPACITY_SHIFT);
> -
> +	lowest_nonlinear_freq = div_u64((u64)nominal_freq * lowest_nonlinear_perf, nominal_perf);
>  	WRITE_ONCE(cpudata->min_freq, min_freq * 1000);
>  	WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq * 1000);
>  	WRITE_ONCE(cpudata->nominal_freq, nominal_freq * 1000);
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index d7b1de97727a..6f6f3220ffe4 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -908,9 +908,8 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
 {
 	int ret;
 	u32 min_freq, max_freq;
-	u32 nominal_perf, nominal_freq;
+	u32 highest_perf, nominal_perf, nominal_freq;
 	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
-	u32 boost_ratio, lowest_nonlinear_ratio;
 	struct cppc_perf_caps cppc_perf;

 	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
@@ -927,16 +926,12 @@ static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
 	else
 		nominal_freq = cppc_perf.nominal_freq;

+	highest_perf = READ_ONCE(cpudata->highest_perf);
 	nominal_perf = READ_ONCE(cpudata->nominal_perf);
-
-	boost_ratio = div_u64(cpudata->highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
-	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT);
+	max_freq = div_u64((u64)highest_perf * nominal_freq, nominal_perf);

 	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
-	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
-					 nominal_perf);
-	lowest_nonlinear_freq = (nominal_freq * lowest_nonlinear_ratio >> SCHED_CAPACITY_SHIFT);
-
+	lowest_nonlinear_freq = div_u64((u64)nominal_freq * lowest_nonlinear_perf, nominal_perf);
 	WRITE_ONCE(cpudata->min_freq, min_freq * 1000);
 	WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq * 1000);
 	WRITE_ONCE(cpudata->nominal_freq, nominal_freq * 1000);
The previous approach introduced roundoff errors during division when
calculating the boost ratio. This, in turn, affected the maximum
frequency calculation, often resulting in reporting lower frequency
values.

For example, on the Glinda SoC based board with the following
parameters:

max_perf = 208
nominal_perf = 100
nominal_freq = 2600 MHz

The Linux kernel previously calculated the frequency as:

freq = ((max_perf * 1024 / nominal_perf) * nominal_freq) / 1024
freq = 5405 MHz // Integer arithmetic.

With the updated formula:

freq = (max_perf * nominal_freq) / nominal_perf
freq = 5408 MHz

This change ensures more accurate frequency calculations by eliminating
unnecessary shifts and divisions, thereby improving precision.

Signed-off-by: Naresh Solanki <naresh.solanki@9elements.com>

Changes in V3:
1. Also update the same for lowest_nonlinear_freq

Changes in V2:
1. Rebase on superm1.git/linux-next branch
---
 drivers/cpufreq/amd-pstate.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)
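
Editor's note (not part of the patch): a minimal user-space sketch of the two calculations with the Glinda numbers quoted above, assuming plain C with div_u64 modeled as ordinary 64-bit division and SCHED_CAPACITY_SHIFT at its kernel value of 10, reproduces the 5405 MHz vs. 5408 MHz difference:

#include <inttypes.h>
#include <stdio.h>

#define SCHED_CAPACITY_SHIFT 10  /* kernel capacity scale: 1 << 10 = 1024 */

int main(void)
{
        uint64_t max_perf = 208, nominal_perf = 100, nominal_freq = 2600; /* MHz */

        /* Old approach: truncate the boost ratio first, then scale the frequency. */
        uint64_t boost_ratio = (max_perf << SCHED_CAPACITY_SHIFT) / nominal_perf; /* 2129, not 2129.92 */
        uint64_t old_freq = (nominal_freq * boost_ratio) >> SCHED_CAPACITY_SHIFT; /* 5405 MHz */

        /* New approach: multiply first, divide once; truncation happens only at the end. */
        uint64_t new_freq = (max_perf * nominal_freq) / nominal_perf;             /* 5408 MHz */

        printf("old = %" PRIu64 " MHz, new = %" PRIu64 " MHz\n", old_freq, new_freq);
        return 0;
}

In the kernel code itself the (u64) cast on one operand promotes the multiplication to 64 bits before div_u64 performs the single division, so only one truncation occurs.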