From patchwork Wed Nov 19 20:34:53 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ashwin Chaugule <ashwin.chaugule@linaro.org>
X-Patchwork-Id: 41210
From: Ashwin Chaugule <ashwin.chaugule@linaro.org>
To: viresh.kumar@linaro.org
Cc: rwells@codeaurora.org, linda.knippers@hp.com, linux-pm@vger.kernel.org,
 Catalin.Marinas@arm.com, dirk.brandewie@gmail.com, patches@linaro.org,
 linaro-acpi@list.linaro.org, rjw@rjwysocki.net,
 Ashwin Chaugule <ashwin.chaugule@linaro.org>
Subject: [PATCH v3 2/2] ACPI PID: Add frequency domain awareness.
Date: Wed, 19 Nov 2014 15:34:53 -0500
Message-Id: <1416429293-3798-3-git-send-email-ashwin.chaugule@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1416429293-3798-1-git-send-email-ashwin.chaugule@linaro.org>
References: <1416429293-3798-1-git-send-email-ashwin.chaugule@linaro.org>
List-ID: patchwork-forward@linaro.org

Previously the driver assumed each CPU to be in its own frequency domain,
but this may not always be true in practice. Search for the _PSD ACPI
package for each CPU and parse its frequency domain information. Once this
information is known, the first CPU to wake up from a timeout evaluates all
other CPUs in its domain and makes a collective vote for all of them. Each
sibling CPU's timeout is deferred as it is evaluated. There could be a
pending IRQ for such a CPU, in which case a spinlock protects its sample
data; once the lock is released, that CPU is allowed to proceed and
re-evaluate.

Signed-off-by: Ashwin Chaugule <ashwin.chaugule@linaro.org>
---
 drivers/cpufreq/acpi_pid.c | 112 +++++++++++++++++++++++++++++++++++++--------
 1 file changed, 93 insertions(+), 19 deletions(-)

diff --git a/drivers/cpufreq/acpi_pid.c b/drivers/cpufreq/acpi_pid.c
index f8d8376..ccaace9 100644
--- a/drivers/cpufreq/acpi_pid.c
+++ b/drivers/cpufreq/acpi_pid.c
@@ -60,6 +60,7 @@
 #include 
 #include 
+#include 
 #include 
 
 #define FRAC_BITS 8
@@ -114,6 +115,14 @@ struct cpudata {
 	u64	prev_reference;
 	u64	prev_delivered;
 	struct sample sample;
+	cpumask_var_t shared_cpus;
+	/*
+	 * This lock protects a CPU's sample
+	 * from being overwritten while it
+	 * is being evaluated by another CPU
+	 * in the shared_cpus map.
+	 */
+	spinlock_t sample_lock;
 };
 
 static struct cpudata **all_cpu_data;
@@ -207,6 +216,7 @@ struct cpc_desc {
 };
 
 static DEFINE_PER_CPU(struct cpc_desc *, cpc_desc_ptr);
+static struct acpi_processor_performance __percpu *acpi_perf_info;
 
 static int cpc_read64(u64 *val, struct cpc_register_resource *reg)
 {
@@ -535,6 +545,8 @@ static inline int acpi_pid_sample(struct cpudata *cpu)
 {
 	int ret = 0;
 
+	spin_lock(&cpu->sample_lock);
+
 	cpu->last_sample_time = cpu->sample.time;
 	cpu->sample.time = ktime_get();
 
@@ -545,6 +557,8 @@ static inline int acpi_pid_sample(struct cpudata *cpu)
 
 	acpi_pid_calc_busy(cpu);
 
+	spin_unlock(&cpu->sample_lock);
+
 	return ret;
 }
 
@@ -579,40 +593,51 @@ static inline int32_t acpi_pid_get_scaled_busy(struct cpudata *cpu)
 	return core_busy;
 }
 
-static inline int acpi_pid_adjust_busy_pstate(struct cpudata *cpu)
+static void acpi_pid_timer_func(unsigned long __data)
 {
-	int32_t busy_scaled;
+	struct cpudata *cpu = (struct cpudata *) __data;
+	struct sample *sample;
+	struct cpudata *sibling_cpu;
+	struct cpudata *max_busy_cpu = NULL;
 	struct _pid *pid;
 	signed int ctl;
+	int32_t max_busy = 0, busy, i;
 
-	pid = &cpu->pid;
-	busy_scaled = acpi_pid_get_scaled_busy(cpu);
+	for_each_cpu(i, cpu->shared_cpus) {
+		/* Get sibling cpu ptr. */
+		sibling_cpu = all_cpu_data[i];
 
-	ctl = pid_calc(pid, busy_scaled);
+		/* Get its sample data. */
+		acpi_pid_sample(sibling_cpu);
 
-	/* Negative values of ctl increase the pstate and vice versa */
-	return acpi_pid_set_pstate(cpu, cpu->pstate.current_pstate - ctl);
-}
+		/* Defer its timeout. */
+		acpi_pid_set_sample_time(sibling_cpu);
 
-static void acpi_pid_timer_func(unsigned long __data)
-{
-	struct cpudata *cpu = (struct cpudata *) __data;
-	struct sample *sample;
+		/* Calc how busy it was. */
+		busy = acpi_pid_get_scaled_busy(sibling_cpu);
 
-	acpi_pid_sample(cpu);
+		/* Was this the busiest? */
+		if (busy >= max_busy) {
+			max_busy = busy;
+			max_busy_cpu = sibling_cpu;
+		}
+	}
 
-	sample = &cpu->sample;
+	sample = &max_busy_cpu->sample;
 
-	acpi_pid_adjust_busy_pstate(cpu);
+	pid = &max_busy_cpu->pid;
+	ctl = pid_calc(pid, max_busy);
+
+	/* XXX: This needs to change depending on SW_ANY/SW_ALL */
+	/* Negative values of ctl increase the pstate and vice versa */
+	acpi_pid_set_pstate(max_busy_cpu, max_busy_cpu->pstate.current_pstate - ctl);
 
 	trace_pstate_sample(fp_toint(sample->core_pct_busy),
 			fp_toint(acpi_pid_get_scaled_busy(cpu)),
 			cpu->pstate.current_pstate,
-			sample->delivered, sample->reference,
+			sample->delivered,
 			sample->freq);
-
-	acpi_pid_set_sample_time(cpu);
 }
 
 static int acpi_pid_init_cpu(unsigned int cpunum)
@@ -627,6 +652,7 @@ static int acpi_pid_init_cpu(unsigned int cpunum)
 	cpu = all_cpu_data[cpunum];
 
 	cpu->cpu = cpunum;
+	spin_lock_init(&cpu->sample_lock);
 
 	ret = acpi_pid_get_cpu_pstates(cpu);
 	if (ret < 0)
@@ -712,6 +738,7 @@ static void acpi_pid_stop_cpu(struct cpufreq_policy *policy)
 static int acpi_pid_cpu_init(struct cpufreq_policy *policy)
 {
 	struct cpudata *cpu;
+	struct acpi_processor_performance *perf;
 	int rc;
 
 	rc = acpi_pid_init_cpu(policy->cpu);
@@ -732,7 +759,25 @@ static int acpi_pid_cpu_init(struct cpufreq_policy *policy)
 	policy->cpuinfo.min_freq = cpu->pstate.min_pstate * 100000;
 	policy->cpuinfo.max_freq = cpu->pstate.max_pstate * 100000;
 	policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
+
+	if (!zalloc_cpumask_var_node(&cpu->shared_cpus,
+				GFP_KERNEL, cpu_to_node(policy->cpu))) {
+		pr_err("No mem for shared_cpus cpumask\n");
+		return -ENOMEM;
+	}
+
+	/* Parse the _PSD info we acquired in acpi_cppc_init */
+	perf = per_cpu_ptr(acpi_perf_info, policy->cpu);
+	policy->shared_type = perf->shared_type;
+
+	if (policy->shared_type == CPUFREQ_SHARED_TYPE_ALL ||
+	    policy->shared_type == CPUFREQ_SHARED_TYPE_ANY) {
+		cpumask_copy(policy->cpus, perf->shared_cpu_map);
+		cpumask_copy(cpu->shared_cpus, perf->shared_cpu_map);
+	}
+
 	cpumask_set_cpu(policy->cpu, policy->cpus);
+	cpumask_set_cpu(policy->cpu, cpu->shared_cpus);
 
 	return 0;
 }
@@ -1103,8 +1148,20 @@ static struct cpu_defaults acpi_pid_cppc = {
 	},
 };
 
+static void free_acpi_perf_info(void)
+{
+	unsigned int i;
+
+	for_each_possible_cpu(i)
+		free_cpumask_var(per_cpu_ptr(acpi_perf_info, i)
+				->shared_cpu_map);
+	free_percpu(acpi_perf_info);
+}
+
 static int __init acpi_cppc_init(void)
 {
+	unsigned int i;
+
 	if (acpi_disabled || acpi_cppc_processor_probe()) {
 		pr_err("Err initializing CPC structures or ACPI is disabled\n");
 		return -ENODEV;
@@ -1113,7 +1170,24 @@ static int __init acpi_cppc_init(void)
 	copy_pid_params(&acpi_pid_cppc.pid_policy);
 	copy_cpu_funcs(&acpi_pid_cppc.funcs);
 
-	return 0;
+	acpi_perf_info = alloc_percpu(struct acpi_processor_performance);
+	if (!acpi_perf_info) {
+		pr_err("Out of mem for acpi_perf_info\n");
+		return -ENOMEM;
+	}
+
+	for_each_possible_cpu(i) {
+		if (!zalloc_cpumask_var_node(
+			&per_cpu_ptr(acpi_perf_info, i)->shared_cpu_map,
+			GFP_KERNEL, cpu_to_node(i))) {
+
+			free_acpi_perf_info();
+			return -ENOMEM;
+		}
+	}
+
+	/* Get _PSD info about CPUs and the freq domain they belong to. */
+	return acpi_processor_preregister_performance(acpi_perf_info);
 }
 
 static int __init acpi_pid_init(void)
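
For context, below is a minimal, self-contained userspace sketch of the voting
scheme the changelog describes: the CPU whose timer fires samples every sibling
in its frequency domain (each sample taken under that sibling's lock), picks
the busiest one, and requests a single P-state on behalf of the whole domain.
The fake_cpu structure, the made-up busy percentages and the crude proportional
step are hypothetical stand-ins for the driver's cpudata, acpi_pid_sample() and
pid_calc(); this only illustrates the approach under those assumptions and is
not the driver code itself.

/*
 * Minimal userspace sketch of the domain-wide vote described in the
 * changelog.  The busy percentages, the proportional step and the
 * fake_cpu layout are hypothetical stand-ins for acpi_pid_sample(),
 * pid_calc() and the driver's cpudata -- not the real driver API.
 */
#include <stdio.h>
#include <pthread.h>

#define NR_CPUS 4

struct fake_cpu {
	int id;
	int busy_pct;          /* stand-in for the scaled busy value */
	int pstate;            /* currently requested P-state */
	pthread_mutex_t lock;  /* plays the role of sample_lock */
};

static struct fake_cpu domain[NR_CPUS];  /* one shared frequency domain */

/* Sample one sibling under its lock, as acpi_pid_sample() does. */
static int sample_cpu(struct fake_cpu *c)
{
	int busy;

	pthread_mutex_lock(&c->lock);
	busy = c->busy_pct;  /* imagine reading the feedback counters here */
	pthread_mutex_unlock(&c->lock);
	return busy;
}

/* The CPU whose timer fired votes on behalf of the whole domain. */
static void domain_timer_func(struct fake_cpu *self)
{
	struct fake_cpu *max_cpu = self;
	int i, busy, max_busy = 0, ctl, target;

	for (i = 0; i < NR_CPUS; i++) {
		busy = sample_cpu(&domain[i]);
		if (busy >= max_busy) {
			max_busy = busy;
			max_cpu = &domain[i];
		}
	}

	/*
	 * Crude proportional step in place of pid_calc(): aim for ~80% busy.
	 * As in the patch, a negative ctl raises the P-state.
	 */
	ctl = (80 - max_busy) / 10;
	target = max_cpu->pstate - ctl;

	/* One decision covers every sibling (the SW_ANY/SW_ALL case). */
	for (i = 0; i < NR_CPUS; i++)
		domain[i].pstate = target;

	printf("busiest is cpu%d at %d%%, domain P-state -> %d\n",
	       max_cpu->id, max_busy, target);
}

int main(void)
{
	int i;

	for (i = 0; i < NR_CPUS; i++) {
		domain[i].id = i;
		domain[i].busy_pct = 20 * (i + 1);  /* made-up load figures */
		domain[i].pstate = 10;
		pthread_mutex_init(&domain[i].lock, NULL);
	}

	domain_timer_func(&domain[0]);  /* cpu0's timer fires first */
	return 0;
}

Built with gcc -pthread, this run picks cpu3 (80% busy) and leaves the shared
P-state at 10, mirroring how the busiest sibling's feedback drives the single
domain-wide request in the patch above.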