From patchwork Mon Aug 8 09:18:15 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Leizhen (ThunderTown)"
X-Patchwork-Id: 73418
From: Zhen Lei <thunder.leizhen@huawei.com>
To: Catalin Marinas, Will Deacon, linux-arm-kernel, linux-kernel,
	Rob Herring, Frank Rowand, devicetree
CC: Zefan Li, Xinwei Hu, Tianhong Ding, Hanjun Guo, Zhen Lei
Subject: [PATCH v5 10/14] arm64/numa: define numa_distance as array to
	simplify code
Date: Mon, 8 Aug 2016 17:18:15 +0800
Message-ID: <1470647899-6324-11-git-send-email-thunder.leizhen@huawei.com>
In-Reply-To: <1470647899-6324-1-git-send-email-thunder.leizhen@huawei.com>
References: <1470647899-6324-1-git-send-email-thunder.leizhen@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

1. MAX_NUMNODES is based on CONFIG_NODES_SHIFT, and the default value of
   the latter is currently very small.
2. Even if the default value of MAX_NUMNODES were enlarged to 64, the size
   of numa_distance would only be 64 * 64 * sizeof(u8) = 4K, which is still
   acceptable when the same Image is run on other processors.
3. It makes __node_distance() quicker than before.
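As an illustration only (not part of the patch), below is a minimal
user-space sketch of the data structure and lookup this change ends up
with. MAX_NUMNODES, LOCAL_DISTANCE and REMOTE_DISTANCE are stand-in
defines here, assuming MAX_NUMNODES == 64 as in point 2 above:

#include <stdint.h>
#include <stdio.h>

/* Stand-ins for the kernel defines (values assumed for illustration). */
#define MAX_NUMNODES	64
#define LOCAL_DISTANCE	10
#define REMOTE_DISTANCE	20

/* Statically sized table, as in the patch: no memblock allocation needed. */
static uint8_t numa_distance[MAX_NUMNODES][MAX_NUMNODES];

static void set_distance(int from, int to, int distance)
{
	if (from < 0 || to < 0 || from >= MAX_NUMNODES || to >= MAX_NUMNODES)
		return;		/* out-of-range node ids are ignored */
	numa_distance[from][to] = (uint8_t)distance;
}

static int node_distance(int from, int to)
{
	if (from >= MAX_NUMNODES || to >= MAX_NUMNODES)
		return from == to ? LOCAL_DISTANCE : REMOTE_DISTANCE;
	/* direct 2-D indexing: no multiplication by a runtime-sized stride */
	return numa_distance[from][to];
}

int main(void)
{
	set_distance(0, 1, REMOTE_DISTANCE);
	printf("table size = %zu bytes\n", sizeof(numa_distance));	/* 4096 */
	printf("distance(0, 1) = %d\n", node_distance(0, 1));		/* 20 */
	return 0;
}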
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 arch/arm64/include/asm/numa.h |  1 -
 arch/arm64/mm/numa.c          | 74 +++----------------------------------------
 2 files changed, 5 insertions(+), 70 deletions(-)

--
2.5.0

diff --git a/arch/arm64/include/asm/numa.h b/arch/arm64/include/asm/numa.h
index 600887e..9b6cc38 100644
--- a/arch/arm64/include/asm/numa.h
+++ b/arch/arm64/include/asm/numa.h
@@ -32,7 +32,6 @@ static inline const struct cpumask *cpumask_of_node(int node)
 void __init arm64_numa_init(void);
 int __init numa_add_memblk(int nodeid, u64 start, u64 end);
 void __init numa_set_distance(int from, int to, int distance);
-void __init numa_free_distance(void);
 void __init early_map_cpu_to_node(unsigned int cpu, int nid);
 void numa_store_cpu_info(unsigned int cpu);
 
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
index 99401aa..df5c842 100644
--- a/arch/arm64/mm/numa.c
+++ b/arch/arm64/mm/numa.c
@@ -32,8 +32,7 @@ EXPORT_SYMBOL(node_data);
 nodemask_t numa_nodes_parsed __initdata;
 static int cpu_to_node_map[NR_CPUS] = { [0 ... NR_CPUS-1] = NUMA_NO_NODE };
 
-static int numa_distance_cnt;
-static u8 *numa_distance;
+static u8 numa_distance[MAX_NUMNODES][MAX_NUMNODES];
 static bool numa_off;
 
 static __init int numa_parse_early_param(char *opt)
@@ -247,59 +246,6 @@ static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
 }
 
 /**
- * numa_free_distance
- *
- * The current table is freed.
- */
-void __init numa_free_distance(void)
-{
-	size_t size;
-
-	if (!numa_distance)
-		return;
-
-	size = numa_distance_cnt * numa_distance_cnt *
-		sizeof(numa_distance[0]);
-
-	memblock_free(__pa(numa_distance), size);
-	numa_distance_cnt = 0;
-	numa_distance = NULL;
-}
-
-/**
- *
- * Create a new NUMA distance table.
- *
- */
-static int __init numa_alloc_distance(void)
-{
-	size_t size;
-	u64 phys;
-	int i, j;
-
-	size = nr_node_ids * nr_node_ids * sizeof(numa_distance[0]);
-	phys = memblock_find_in_range(0, PFN_PHYS(max_pfn),
-				      size, PAGE_SIZE);
-	if (WARN_ON(!phys))
-		return -ENOMEM;
-
-	memblock_reserve(phys, size);
-
-	numa_distance = __va(phys);
-	numa_distance_cnt = nr_node_ids;
-
-	/* fill with the default distances */
-	for (i = 0; i < numa_distance_cnt; i++)
-		for (j = 0; j < numa_distance_cnt; j++)
-			numa_distance[i * numa_distance_cnt + j] = i == j ?
-				LOCAL_DISTANCE : REMOTE_DISTANCE;
-
-	pr_debug("Initialized distance table, cnt=%d\n", numa_distance_cnt);
-
-	return 0;
-}
-
-/**
  * numa_set_distance - Set inter node NUMA distance from node to node.
  * @from: the 'from' node to set distance
  * @to: the 'to' node to set distance
@@ -314,12 +260,7 @@ static int __init numa_alloc_distance(void)
  */
 void __init numa_set_distance(int from, int to, int distance)
 {
-	if (!numa_distance) {
-		pr_warn_once("Warning: distance table not allocated yet\n");
-		return;
-	}
-
-	if (from >= numa_distance_cnt || to >= numa_distance_cnt ||
+	if (from >= MAX_NUMNODES || to >= MAX_NUMNODES ||
 			from < 0 || to < 0) {
 		pr_warn_once("Warning: node ids are out of bound, from=%d to=%d distance=%d\n",
 			     from, to, distance);
@@ -333,7 +274,7 @@ void __init numa_set_distance(int from, int to, int distance)
 		return;
 	}
 
-	numa_distance[from * numa_distance_cnt + to] = distance;
+	numa_distance[from][to] = distance;
 }
 
 /**
@@ -341,9 +282,9 @@ void __init numa_set_distance(int from, int to, int distance)
  */
 int __node_distance(int from, int to)
 {
-	if (from >= numa_distance_cnt || to >= numa_distance_cnt)
+	if (from >= MAX_NUMNODES || to >= MAX_NUMNODES)
 		return from == to ? LOCAL_DISTANCE : REMOTE_DISTANCE;
-	return numa_distance[from * numa_distance_cnt + to];
+	return numa_distance[from][to];
 }
 EXPORT_SYMBOL(__node_distance);
 
@@ -383,11 +324,6 @@ static int __init numa_init(int (*init_func)(void))
 	nodes_clear(numa_nodes_parsed);
 	nodes_clear(node_possible_map);
 	nodes_clear(node_online_map);
-	numa_free_distance();
-
-	ret = numa_alloc_distance();
-	if (ret < 0)
-		return ret;
 
 	ret = init_func();
 	if (ret < 0)