From patchwork Tue Jun 13 04:24:12 2023
X-Patchwork-Submitter: Ricardo Neri
X-Patchwork-Id: 692927
From: Ricardo Neri
To: "Peter Zijlstra (Intel)", Juri Lelli, Vincent Guittot
Cc: Ricardo Neri, "Ravi V. Shankar", Ben Segall, Daniel Bristot de Oliveira, Dietmar Eggemann, Len Brown, Mel Gorman, "Rafael J. Wysocki", Srinivas Pandruvada, Steven Rostedt, Tim Chen, Valentin Schneider, Lukasz Luba, Ionela Voinescu, Zhao Liu, "Yuan, Perry", x86@kernel.org, "Joel Fernandes (Google)", linux-kernel@vger.kernel.org, linux-pm@vger.kernel.org, Ricardo Neri, "Tim C. Chen", Zhao Liu
Subject: [PATCH v4 14/24] thermal: intel: hfi: Store per-CPU IPCC scores
Date: Mon, 12 Jun 2023 21:24:12 -0700
Message-Id: <20230613042422.5344-15-ricardo.neri-calderon@linux.intel.com>
In-Reply-To: <20230613042422.5344-1-ricardo.neri-calderon@linux.intel.com>
References: <20230613042422.5344-1-ricardo.neri-calderon@linux.intel.com>
X-Mailer: git-send-email 2.17.1
X-Mailing-List: linux-pm@vger.kernel.org

The scheduler reads the IPCC scores when balancing load. These reads can
occur frequently and originate from many CPUs. Hardware may also
occasionally update the HFI table. Controlling access with locks would
cause contention.

Cache the IPCC scores in separate per-CPU variables that the scheduler
can use. Use a seqcount to synchronize memory accesses to these cached
values. This eliminates the need for locks, as the sequence counter
provides the memory ordering required to prevent the use of stale data.

The HFI delayed workqueue guarantees that only one CPU writes the cached
IPCC scores. The frequency of updates is low (every CONFIG_HZ jiffies or
less often), and the number of writes per update is on the order of tens.
Writes should not starve reads.

Only cache the IPCC scores in this changeset. A subsequent changeset will
use these scores.
Cc: Ben Segall
Cc: Daniel Bristot de Oliveira
Cc: Dietmar Eggemann
Cc: Ionela Voinescu
Cc: Joel Fernandes (Google)
Cc: Len Brown
Cc: Lukasz Luba
Cc: Mel Gorman
Cc: Perry Yuan
Cc: Rafael J. Wysocki
Cc: Srinivas Pandruvada
Cc: Steven Rostedt
Cc: Tim C. Chen
Cc: Valentin Schneider
Cc: Zhao Liu
Cc: x86@kernel.org
Cc: linux-pm@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Suggested-by: Peter Zijlstra (Intel)
Signed-off-by: Ricardo Neri
---
Changes since v3:
 * As Rafael requested, I reworked the memory ordering of the cached IPCC
   scores. I selected a seqcount, as it is less expensive than a memory
   barrier, which is not necessary anyway.
 * Made alloc_hfi_ipcc_scores() return -ENOMEM on allocation failure.
   (Rafael)
 * Added a comment to describe hfi_ipcc_scores. (Rafael)

Changes since v2:
 * Only create these per-CPU variables when Intel Thread Director is
   supported.

Changes since v1:
 * Added this patch.
---
 drivers/thermal/intel/intel_hfi.c | 66 +++++++++++++++++++++++++++++++
 1 file changed, 66 insertions(+)

diff --git a/drivers/thermal/intel/intel_hfi.c b/drivers/thermal/intel/intel_hfi.c
index 20ee4264dcd4..d822ed0bb5c1 100644
--- a/drivers/thermal/intel/intel_hfi.c
+++ b/drivers/thermal/intel/intel_hfi.c
@@ -29,9 +29,11 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -180,6 +182,62 @@ static struct workqueue_struct *hfi_updates_wq;
 #define HFI_UPDATE_INTERVAL		HZ
 #define HFI_MAX_THERM_NOTIFY_COUNT	16
 
+/* A cache of the HFI perf capabilities for lockless access. */
+static int __percpu *hfi_ipcc_scores;
+/* Sequence counter for hfi_ipcc_scores */
+static seqcount_t hfi_ipcc_seqcount = SEQCNT_ZERO(hfi_ipcc_seqcount);
+
+static int alloc_hfi_ipcc_scores(void)
+{
+	if (!cpu_feature_enabled(X86_FEATURE_ITD))
+		return 0;
+
+	hfi_ipcc_scores = __alloc_percpu(sizeof(*hfi_ipcc_scores) *
+					 hfi_features.nr_classes,
+					 sizeof(*hfi_ipcc_scores));
+
+	return hfi_ipcc_scores ? 0 : -ENOMEM;
+}
+
+static void set_hfi_ipcc_scores(struct hfi_instance *hfi_instance)
+{
+	int cpu;
+
+	if (!cpu_feature_enabled(X86_FEATURE_ITD))
+		return;
+
+	/*
+	 * Serialize with writes to the HFI table. It also protects the write
+	 * loop against seqcount readers running in interrupt context.
+	 */
+	raw_spin_lock_irq(&hfi_instance->table_lock);
+	/*
+	 * The seqcount implies store-release semantics to order stores with
+	 * lockless loads from the seqcount read side. It also implies a
+	 * compiler barrier.
+	 */
+	write_seqcount_begin(&hfi_ipcc_seqcount);
+	for_each_cpu(cpu, hfi_instance->cpus) {
+		int c, *scores;
+		s16 index;
+
+		index = per_cpu(hfi_cpu_info, cpu).index;
+		scores = per_cpu_ptr(hfi_ipcc_scores, cpu);
+
+		for (c = 0; c < hfi_features.nr_classes; c++) {
+			struct hfi_cpu_data *caps;
+
+			caps = hfi_instance->data +
+			       index * hfi_features.cpu_stride +
+			       c * hfi_features.class_stride;
+			scores[c] = caps->perf_cap;
+		}
+	}
+
+	write_seqcount_end(&hfi_ipcc_seqcount);
+	raw_spin_unlock_irq(&hfi_instance->table_lock);
+}
+
 /**
  * intel_hfi_read_classid() - Read the currrent classid
  * @classid: Variable to which the classid will be written.
@@ -275,6 +333,8 @@ static void update_capabilities(struct hfi_instance *hfi_instance)
 		thermal_genl_cpu_capability_event(cpu_count, &cpu_caps[i]);
 
 	kfree(cpu_caps);
+
+	set_hfi_ipcc_scores(hfi_instance);
 out:
 	mutex_unlock(&hfi_instance_lock);
 }
@@ -618,8 +678,14 @@ void __init intel_hfi_init(void)
 	if (!hfi_updates_wq)
 		goto err_nomem;
 
+	if (alloc_hfi_ipcc_scores())
+		goto err_ipcc;
+
 	return;
 
+err_ipcc:
+	destroy_workqueue(hfi_updates_wq);
+
 err_nomem:
 	for (j = 0; j < i; ++j) {
 		hfi_instance = &hfi_instances[j];