From patchwork Mon Jul 27 12:28:08 2015
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 51516
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Rafael Wysocki
Cc: linaro-kernel@lists.linaro.org, linux-pm@vger.kernel.org,
 preeti.lkml@gmail.com, Viresh Kumar <viresh.kumar@linaro.org>,
 linux-kernel@vger.kernel.org (open list)
Subject: [PATCH V2 3/9] cpufreq: ondemand: only queue canceled works from update_sampling_rate()
Date: Mon, 27 Jul 2015 17:58:08 +0530
Message-Id: <5b568d732469de1a902e0aa1034ea24e863aa524.1437999691.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.4.0

The sampling rate is updated by update_sampling_rate(), which processes
CPUs one by one. While the work is canceled on a per-CPU basis, it is
(by mistake) queued back for all policy->cpus. This wastes CPU cycles
queuing works that are already queued and were never canceled.

Change this behavior to queue the work only on the CPU for which it was
canceled. To do that, replace the 'modify_all' parameter of
gov_queue_work() with a mask of CPUs.

Also, the last parameter of ->gov_dbs_timer() was named 'modify_all',
but its real purpose is to decide whether the load has to be evaluated
again. Rename it to 'load_eval'.
Fixes: 031299b3be30 ("cpufreq: governors: Avoid unnecessary per cpu timer interrupts")
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/cpufreq/cpufreq_conservative.c |  4 ++--
 drivers/cpufreq/cpufreq_governor.c     | 30 ++++++++++--------------------
 drivers/cpufreq/cpufreq_governor.h     |  4 ++--
 drivers/cpufreq/cpufreq_ondemand.c     |  7 ++++---
 4 files changed, 18 insertions(+), 27 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index 18bfbc313e48..1aa3bd46cea3 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -116,11 +116,11 @@ static void cs_check_cpu(int cpu, unsigned int load)
 }
 
 static unsigned int cs_dbs_timer(struct cpu_dbs_info *cdbs,
-				 struct dbs_data *dbs_data, bool modify_all)
+				 struct dbs_data *dbs_data, bool load_eval)
 {
 	struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 
-	if (modify_all)
+	if (load_eval)
 		dbs_check_cpu(dbs_data, cdbs->shared->policy->cpu);
 
 	return delay_for_sampling_rate(cs_tuners->sampling_rate);
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 750626d8fb03..a890450711bb 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -167,7 +167,7 @@ static inline void __gov_queue_work(int cpu, struct dbs_data *dbs_data,
 }
 
 void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		    unsigned int delay, bool all_cpus)
+		    unsigned int delay, const struct cpumask *cpus)
 {
 	int i;
 
@@ -175,19 +175,8 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 	if (!policy->governor_enabled)
 		goto out_unlock;
 
-	if (!all_cpus) {
-		/*
-		 * Use raw_smp_processor_id() to avoid preemptible warnings.
-		 * We know that this is only called with all_cpus == false from
-		 * works that have been queued with *_work_on() functions and
-		 * those works are canceled during CPU_DOWN_PREPARE so they
-		 * can't possibly run on any other CPU.
-		 */
-		__gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
-	} else {
-		for_each_cpu(i, policy->cpus)
-			__gov_queue_work(i, dbs_data, delay);
-	}
+	for_each_cpu(i, cpus)
+		__gov_queue_work(i, dbs_data, delay);
 
 out_unlock:
 	mutex_unlock(&cpufreq_governor_lock);
@@ -232,7 +221,8 @@ static void dbs_timer(struct work_struct *work)
 	struct cpufreq_policy *policy = shared->policy;
 	struct dbs_data *dbs_data = policy->governor_data;
 	unsigned int sampling_rate, delay;
-	bool modify_all = true;
+	const struct cpumask *cpus;
+	bool load_eval;
 
 	mutex_lock(&shared->timer_mutex);
 
@@ -246,11 +236,11 @@ static void dbs_timer(struct work_struct *work)
 		sampling_rate = od_tuners->sampling_rate;
 	}
 
-	if (!need_load_eval(cdbs->shared, sampling_rate))
-		modify_all = false;
+	load_eval = need_load_eval(cdbs->shared, sampling_rate);
+	cpus = load_eval ? policy->cpus : cpumask_of(raw_smp_processor_id());
 
-	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
-	gov_queue_work(dbs_data, policy, delay, modify_all);
+	delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, load_eval);
+	gov_queue_work(dbs_data, policy, delay, cpus);
 
 	mutex_unlock(&shared->timer_mutex);
 }
@@ -474,7 +464,7 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy,
 	}
 
 	gov_queue_work(dbs_data, policy, delay_for_sampling_rate(sampling_rate),
-		       true);
+		       policy->cpus);
 
 	return 0;
 }
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 5621bb03e874..52665a0624b2 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -211,7 +211,7 @@ struct common_dbs_data {
 	void *(*get_cpu_dbs_info_s)(int cpu);
 	unsigned int (*gov_dbs_timer)(struct cpu_dbs_info *cdbs,
 				      struct dbs_data *dbs_data,
-				      bool modify_all);
+				      bool load_eval);
 	void (*gov_check_cpu)(int cpu, unsigned int load);
 	int (*init)(struct dbs_data *dbs_data, bool notify);
 	void (*exit)(struct dbs_data *dbs_data, bool notify);
@@ -273,7 +273,7 @@ void dbs_check_cpu(struct dbs_data *dbs_data, int cpu);
 int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 		struct common_dbs_data *cdata, unsigned int event);
 void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-		unsigned int delay, bool all_cpus);
+		unsigned int delay, const struct cpumask *cpus);
 void od_register_powersave_bias_handler(unsigned int (*f)
 		(struct cpufreq_policy *, unsigned int, unsigned int),
 		unsigned int powersave_bias);
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index 1fa9088c84a8..2474c9c34022 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -192,7 +192,7 @@ static void od_check_cpu(int cpu, unsigned int load)
 }
 
 static unsigned int od_dbs_timer(struct cpu_dbs_info *cdbs,
-				 struct dbs_data *dbs_data, bool modify_all)
+				 struct dbs_data *dbs_data, bool load_eval)
 {
 	struct cpufreq_policy *policy = cdbs->shared->policy;
 	unsigned int cpu = policy->cpu;
@@ -201,7 +201,7 @@ static unsigned int od_dbs_timer(struct cpu_dbs_info *cdbs,
 	struct od_dbs_tuners *od_tuners = dbs_data->tuners;
 	int delay = 0, sample_type = dbs_info->sample_type;
 
-	if (!modify_all)
+	if (!load_eval)
 		goto max_delay;
 
 	/* Common NORMAL_SAMPLE setup */
@@ -284,7 +284,8 @@ static void update_sampling_rate(struct dbs_data *dbs_data,
 			mutex_lock(&dbs_info->cdbs.shared->timer_mutex);
 
 			gov_queue_work(dbs_data, policy,
-				       usecs_to_jiffies(new_rate), true);
+				       usecs_to_jiffies(new_rate),
+				       cpumask_of(cpu));
 
 		}
 		mutex_unlock(&dbs_info->cdbs.shared->timer_mutex);
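
Not part of the patch, but for illustration: below is a minimal, self-contained
userspace sketch of the calling convention this change moves to. All names here
are hypothetical, and a plain bitmask stands in for the kernel's struct cpumask;
the real code uses for_each_cpu(), cpumask_of() and policy->cpus as in the diff
above.

/* Illustrative sketch only -- hypothetical names, plain C99, no kernel APIs. */
#include <stdio.h>

#define MAX_CPUS 8

/* Pretend to queue delayed work on a single CPU. */
static void queue_work_on_cpu(int cpu, unsigned int delay)
{
	printf("queueing work on CPU %d with delay %u\n", cpu, delay);
}

/*
 * After the change, the caller passes an explicit set of CPUs instead of a
 * bool 'all_cpus', so it can queue work on exactly the CPUs it wants.
 */
static void gov_queue_work_sketch(unsigned long cpus, unsigned int delay)
{
	for (int cpu = 0; cpu < MAX_CPUS; cpu++)
		if (cpus & (1UL << cpu))
			queue_work_on_cpu(cpu, delay);
}

int main(void)
{
	unsigned long policy_cpus = 0x0f;	/* CPUs 0-3 belong to the policy */
	int this_cpu = 2;
	int load_eval = 0;			/* pretend no load evaluation is due */

	/*
	 * Mirrors dbs_timer(): re-queue on all policy CPUs only when load was
	 * evaluated; otherwise re-queue just on the local CPU.
	 */
	gov_queue_work_sketch(load_eval ? policy_cpus : (1UL << this_cpu), 10);

	/*
	 * Mirrors update_sampling_rate(): after canceling the work for one
	 * CPU, queue it back only for that CPU.
	 */
	gov_queue_work_sketch(1UL << this_cpu, 10);

	return 0;
}

The point of the change is visible in the two calls: the periodic timer path
picks either the whole policy mask or just the local CPU depending on whether
load was evaluated, while update_sampling_rate() re-queues only the single CPU
whose work it just canceled.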