From patchwork Fri Dec 13 09:54:04 2024
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 851169
From: Gabriele Monaco
To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
    linux-kselftest@vger.kernel.org, Gabriele Monaco
Subject: [PATCH v2 1/4] sched: Move task_mm_cid_work to mm delayed work
Date: Fri, 13 Dec 2024 10:54:04 +0100
Message-ID: <20241213095407.271357-2-gmonaco@redhat.com>
In-Reply-To: <20241213095407.271357-1-gmonaco@redhat.com>
References: <20241213095407.271357-1-gmonaco@redhat.com>

Currently, the task_mm_cid_work function is called from a task work
triggered by a scheduler tick. This can delay the execution of the task
for the entire duration of the function, negatively affecting the
response time of real-time tasks.

This patch runs task_mm_cid_work in a new delayed work attached to the
mm_struct rather than in the task context before returning to
userspace. The delayed work is initialised when the mm is allocated and
disabled before it is freed; its execution is no longer triggered by
scheduler ticks but runs periodically based on the defined
MM_CID_SCAN_DELAY.

The main advantage of this change is that the function can be offloaded
to a different CPU and even preempted by RT tasks. Moreover, the new
behaviour could be more predictable in some situations, since the
delayed work is always scheduled with the same periodicity for each mm.

Signed-off-by: Gabriele Monaco
---
 include/linux/mm_types.h | 11 +++++++++
 include/linux/sched.h    |  1 -
 kernel/sched/core.c      | 51 ++++++----------------------------------
 kernel/sched/sched.h     |  7 ------
 4 files changed, 18 insertions(+), 52 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 7361a8f3ab68..92acb827fee4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -856,6 +856,7 @@ struct mm_struct {
		 * mm nr_cpus_allowed updates.
		 */
		raw_spinlock_t cpus_allowed_lock;
+		struct delayed_work mm_cid_work;
 #endif
 #ifdef CONFIG_MMU
		atomic_long_t pgtables_bytes;	/* size of all page tables */
@@ -1144,11 +1145,16 @@ static inline void vma_iter_init(struct vma_iterator *vmi,
 
 #ifdef CONFIG_SCHED_MM_CID
 
+#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
+#define MM_CID_SCAN_DELAY	100			/* 100ms */
+
 enum mm_cid_state {
	MM_CID_UNSET = -1U,		/* Unset state has lazy_put flag set. */
	MM_CID_LAZY_PUT = (1U << 31),
 };
 
+extern void task_mm_cid_work(struct work_struct *work);
+
 static inline bool mm_cid_is_unset(int cid)
 {
	return cid == MM_CID_UNSET;
@@ -1221,12 +1227,17 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
	if (!mm->pcpu_cid)
		return -ENOMEM;
	mm_init_cid(mm, p);
+	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
+	mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
+	schedule_delayed_work(&mm->mm_cid_work,
+			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
	return 0;
 }
 #define mm_alloc_cid(...)	alloc_hooks(mm_alloc_cid_noprof(__VA_ARGS__))
 
 static inline void mm_destroy_cid(struct mm_struct *mm)
 {
+	disable_delayed_work_sync(&mm->mm_cid_work);
	free_percpu(mm->pcpu_cid);
	mm->pcpu_cid = NULL;
 }
diff --git a/include/linux/sched.h b/include/linux/sched.h
index d380bffee2ef..5d141c310917 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1374,7 +1374,6 @@ struct task_struct {
	int			last_mm_cid;	/* Most recent cid in mm */
	int			migrate_from_cpu;
	int			mm_cid_active;	/* Whether cid bitmap is active */
-	struct callback_head	cid_work;
 #endif
 
	struct tlbflush_unmap_batch	tlb_ubc;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index c6d8232ad9ee..e3b27b73301c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4516,7 +4516,6 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
	p->wake_entry.u_flags = CSD_TYPE_TTWU;
	p->migration_pending = NULL;
 #endif
-	init_sched_mm_cid(p);
 }
 
 DEFINE_STATIC_KEY_FALSE(sched_numa_balancing);
@@ -5654,7 +5653,6 @@ void sched_tick(void)
		resched_latency = cpu_resched_latency(rq);
	calc_global_load_tick(rq);
	sched_core_tick(rq);
-	task_tick_mm_cid(rq, donor);
	scx_tick(rq);
 
	rq_unlock(rq, &rf);
@@ -10520,22 +10518,14 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
	sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
 }
 
-static void task_mm_cid_work(struct callback_head *work)
+void task_mm_cid_work(struct work_struct *work)
 {
	unsigned long now = jiffies, old_scan, next_scan;
-	struct task_struct *t = current;
	struct cpumask *cidmask;
-	struct mm_struct *mm;
+	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
+	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
	int weight, cpu;
 
-	SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-
-	work->next = work;	/* Prevent double-add */
-	if (t->flags & PF_EXITING)
-		return;
-	mm = t->mm;
-	if (!mm)
-		return;
	old_scan = READ_ONCE(mm->mm_cid_next_scan);
	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
	if (!old_scan) {
@@ -10548,9 +10538,9 @@ static void task_mm_cid_work(struct callback_head *work)
		old_scan = next_scan;
	}
	if (time_before(now, old_scan))
-		return;
+		goto out;
	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		return;
+		goto out;
	cidmask = mm_cidmask(mm);
	/* Clear cids that were not recently used. */
	for_each_possible_cpu(cpu)
@@ -10562,35 +10552,8 @@ static void task_mm_cid_work(struct callback_head *work)
	 */
	for_each_possible_cpu(cpu)
		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-}
-
-void init_sched_mm_cid(struct task_struct *t)
-{
-	struct mm_struct *mm = t->mm;
-	int mm_users = 0;
-
-	if (mm) {
-		mm_users = atomic_read(&mm->mm_users);
-		if (mm_users == 1)
-			mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	}
-	t->cid_work.next = &t->cid_work;	/* Protect against double add */
-	init_task_work(&t->cid_work, task_mm_cid_work);
-}
-
-void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-{
-	struct callback_head *work = &curr->cid_work;
-	unsigned long now = jiffies;
-
-	if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-	    work->next != work)
-		return;
-	if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-		return;
-
-	/* No page allocation under rq lock */
-	task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
+out:
+	schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
 }
 
 void sched_mm_cid_exit_signals(struct task_struct *t)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 76f5f53a645f..21be461ff913 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3581,16 +3581,11 @@ extern void sched_dynamic_update(int mode);
 
 #ifdef CONFIG_SCHED_MM_CID
 
-#define SCHED_MM_CID_PERIOD_NS	(100ULL * 1000000)	/* 100ms */
-#define MM_CID_SCAN_DELAY	100			/* 100ms */
-
 extern raw_spinlock_t cid_lock;
 extern int use_cid_lock;
 
 extern void sched_mm_cid_migrate_from(struct task_struct *t);
 extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-extern void init_sched_mm_cid(struct task_struct *t);
 
 static inline void __mm_cid_put(struct mm_struct *mm, int cid)
 {
@@ -3839,8 +3834,6 @@ static inline void switch_mm_cid(struct rq *rq,
 static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
 static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
 static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-static inline void init_sched_mm_cid(struct task_struct *t) { }
 
 #endif /* !CONFIG_SCHED_MM_CID */
 
 extern u64 avg_vruntime(struct cfs_rq *cfs_rq);
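[Note on the mechanism used by the patch above: it relies on the standard
self-rearming delayed_work pattern. The stand-alone sketch below is
illustrative only and is not part of the patch; struct foo, foo_scan() and
FOO_SCAN_DELAY_MS are made-up names. The work item is embedded in the owning
object, the handler recovers the object with container_of() and re-queues
itself, and teardown uses disable_delayed_work_sync() so the handler cannot
run after the object is freed.]

#include <linux/workqueue.h>
#include <linux/slab.h>
#include <linux/jiffies.h>

#define FOO_SCAN_DELAY_MS	100	/* assumed period, mirrors MM_CID_SCAN_DELAY */

struct foo {
	struct delayed_work scan_work;
	/* ... payload ... */
};

static void foo_scan(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);
	struct foo *foo = container_of(dwork, struct foo, scan_work);

	/* periodic housekeeping on 'foo' goes here, off the task's hot path */

	/* re-arm: same periodicity regardless of what the tasks are doing */
	schedule_delayed_work(dwork, msecs_to_jiffies(FOO_SCAN_DELAY_MS));
}

static struct foo *foo_alloc(void)
{
	struct foo *foo = kzalloc(sizeof(*foo), GFP_KERNEL);

	if (!foo)
		return NULL;
	INIT_DELAYED_WORK(&foo->scan_work, foo_scan);
	schedule_delayed_work(&foo->scan_work,
			      msecs_to_jiffies(FOO_SCAN_DELAY_MS));
	return foo;
}

static void foo_free(struct foo *foo)
{
	/* stop and wait for the handler before the memory goes away */
	disable_delayed_work_sync(&foo->scan_work);
	kfree(foo);
}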
From patchwork Fri Dec 13 09:54:05 2024
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 850615
From: Gabriele Monaco
To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
    linux-kselftest@vger.kernel.org, Gabriele Monaco
Subject: [PATCH v2 2/4] sched: Remove mm_cid_next_scan as obsolete
Date: Fri, 13 Dec 2024 10:54:05 +0100
Message-ID: <20241213095407.271357-3-gmonaco@redhat.com>
In-Reply-To: <20241213095407.271357-1-gmonaco@redhat.com>
References: <20241213095407.271357-1-gmonaco@redhat.com>

The checks on the scan time in task_mm_cid_work are now superfluous:
the function runs in a delayed_work, so the minimum periodicity is
already implied. This patch removes those checks and the
mm_cid_next_scan field from the mm_struct.

Additionally, include a simple check to quickly terminate the function
if there is no work to be done (i.e. no mm_cid is allocated). This is
helpful for tasks that sleep for a long time, but also for terminated
tasks:
we no longer track the process state, so the function keeps
running after a process terminates, up until its mm is freed.

Signed-off-by: Gabriele Monaco
---
 include/linux/mm_types.h |  7 -------
 kernel/sched/core.c      | 19 +++----------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 92acb827fee4..8a76a1c09234 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -829,12 +829,6 @@ struct mm_struct {
		 * runqueue locks.
		 */
		struct mm_cid __percpu *pcpu_cid;
-		/*
-		 * @mm_cid_next_scan: Next mm_cid scan (in jiffies).
-		 *
-		 * When the next mm_cid scan is due (in jiffies).
-		 */
-		unsigned long mm_cid_next_scan;
		/**
		 * @nr_cpus_allowed: Number of CPUs allowed for mm.
		 *
@@ -1228,7 +1222,6 @@ static inline int mm_alloc_cid_noprof(struct mm_struct *mm, struct task_struct *
		return -ENOMEM;
	mm_init_cid(mm, p);
	INIT_DELAYED_WORK(&mm->mm_cid_work, task_mm_cid_work);
-	mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
	schedule_delayed_work(&mm->mm_cid_work,
			      msecs_to_jiffies(MM_CID_SCAN_DELAY));
	return 0;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index e3b27b73301c..30d78fe14eff 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -10520,28 +10520,15 @@ static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
 
 void task_mm_cid_work(struct work_struct *work)
 {
-	unsigned long now = jiffies, old_scan, next_scan;
	struct cpumask *cidmask;
	struct delayed_work *delayed_work = container_of(work, struct delayed_work, work);
	struct mm_struct *mm = container_of(delayed_work, struct mm_struct, mm_cid_work);
	int weight, cpu;
 
-	old_scan = READ_ONCE(mm->mm_cid_next_scan);
-	next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-	if (!old_scan) {
-		unsigned long res;
-
-		res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-		if (res != old_scan)
-			old_scan = res;
-		else
-			old_scan = next_scan;
-	}
-	if (time_before(now, old_scan))
-		goto out;
-	if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-		goto out;
	cidmask = mm_cidmask(mm);
+	/* Nothing to clear for now */
+	if (cpumask_empty(cidmask))
+		goto out;
	/* Clear cids that were not recently used. */
	for_each_possible_cpu(cpu)
		sched_mm_cid_remote_clear_old(mm, cpu);
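[Putting patches 1 and 2 together, the worker ends up roughly as below. This
is a paraphrase assembled from the two diffs, not a verbatim copy of the
resulting function; in particular the cpumask_weight() call is an assumption
about context lines the hunks do not show.]

void task_mm_cid_work(struct work_struct *work)
{
	struct delayed_work *delayed_work = to_delayed_work(work);
	struct mm_struct *mm = container_of(delayed_work, struct mm_struct,
					    mm_cid_work);
	struct cpumask *cidmask = mm_cidmask(mm);
	int weight, cpu;

	/* Nothing to clear for now: no cid is allocated for this mm. */
	if (cpumask_empty(cidmask))
		goto out;
	/* Drop cids that were not used since the previous scan. */
	for_each_possible_cpu(cpu)
		sched_mm_cid_remote_clear_old(mm, cpu);
	/* Assumed from surrounding context: clear cids above the weight. */
	weight = cpumask_weight(cidmask);
	for_each_possible_cpu(cpu)
		sched_mm_cid_remote_clear_weight(mm, cpu, weight);
out:
	/* Re-arm with the same period, independently of task activity. */
	schedule_delayed_work(delayed_work, msecs_to_jiffies(MM_CID_SCAN_DELAY));
}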
From patchwork Fri Dec 13 09:54:06 2024
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 851168
From: Gabriele Monaco
To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
    linux-kselftest@vger.kernel.org, Marco Elver, Ingo Molnar,
    Gabriele Monaco
Subject: [PATCH v2 3/4] sched: Compact RSEQ concurrency IDs with reduced threads and affinity
Date: Fri, 13 Dec 2024 10:54:06 +0100
Message-ID: <20241213095407.271357-4-gmonaco@redhat.com>
In-Reply-To: <20241213095407.271357-1-gmonaco@redhat.com>
References: <20241213095407.271357-1-gmonaco@redhat.com>

From: Mathieu Desnoyers

When a process reduces its number of threads or clears bits in its CPU
affinity mask, the mm_cid allocation should eventually converge towards
smaller values.

However, the change introduced by:

  commit 7e019dcc470f ("sched: Improve cache locality of RSEQ
  concurrency IDs for intermittent workloads")

adds a per-mm/CPU recent_cid which is never unset unless a thread
migrates.

This is a tradeoff between:

A) Preserving cache locality after a transition from many threads to
   few threads, or after reducing the Hamming weight of the allowed
   CPU mask.

B) Making the mm_cid upper bounds wrt nr threads and allowed CPU mask
   easy to document and understand.

C) Allowing applications to eventually react to mm_cid compaction after
   reduction of the nr threads or allowed CPU mask, making the tracking
   of mm_cid compaction easier by shrinking it back towards 0 or not.

D) Making sure applications that periodically reduce and then increase
   again the nr threads or allowed CPU mask still benefit from good
   cache locality with mm_cid.

Introduce the following changes:

* After shrinking the number of threads or reducing the number of
  allowed CPUs, reduce the value of max_nr_cid so expansion of CID
  allocation will preserve cache locality if the number of threads or
  allowed CPUs increase again.

* Only re-use a recent_cid if it is within the max_nr_cid upper bound,
  else find the first available CID.

Fixes: 7e019dcc470f ("sched: Improve cache locality of RSEQ concurrency IDs for intermittent workloads")
Cc: Peter Zijlstra (Intel)
Cc: Marco Elver
Cc: Ingo Molnar
Tested-by: Gabriele Monaco
Signed-off-by: Mathieu Desnoyers
Signed-off-by: Gabriele Monaco
---
 include/linux/mm_types.h |  7 ++++---
 kernel/sched/sched.h     | 25 ++++++++++++++++++++++---
 2 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8a76a1c09234..16076e70a6b9 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -837,10 +837,11 @@ struct mm_struct {
		 */
		unsigned int nr_cpus_allowed;
		/**
-		 * @max_nr_cid: Maximum number of concurrency IDs allocated.
+		 * @max_nr_cid: Maximum number of allowed concurrency
+		 * IDs allocated.
		 *
-		 * Track the highest number of concurrency IDs allocated for the
-		 * mm.
+		 * Track the highest number of allowed concurrency IDs
+		 * allocated for the mm.
		 */
		atomic_t max_nr_cid;
		/**
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 21be461ff913..f3b0d1d86622 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -3652,10 +3652,28 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
 {
	struct cpumask *cidmask = mm_cidmask(mm);
	struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-	int cid = __this_cpu_read(pcpu_cid->recent_cid);
+	int cid, max_nr_cid, allowed_max_nr_cid;
 
+	/*
+	 * After shrinking the number of threads or reducing the number
+	 * of allowed cpus, reduce the value of max_nr_cid so expansion
+	 * of cid allocation will preserve cache locality if the number
+	 * of threads or allowed cpus increase again.
+	 */
+	max_nr_cid = atomic_read(&mm->max_nr_cid);
+	while ((allowed_max_nr_cid = min_t(int, READ_ONCE(mm->nr_cpus_allowed),
+					   atomic_read(&mm->mm_users))),
+	       max_nr_cid > allowed_max_nr_cid) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into max_nr_cid. */
+		if (atomic_try_cmpxchg(&mm->max_nr_cid, &max_nr_cid, allowed_max_nr_cid)) {
+			max_nr_cid = allowed_max_nr_cid;
+			break;
+		}
+	}
	/* Try to re-use recent cid. This improves cache locality. */
-	if (!mm_cid_is_unset(cid) && !cpumask_test_and_set_cpu(cid, cidmask))
+	cid = __this_cpu_read(pcpu_cid->recent_cid);
+	if (!mm_cid_is_unset(cid) && cid < max_nr_cid &&
+	    !cpumask_test_and_set_cpu(cid, cidmask))
		return cid;
	/*
	 * Expand cid allocation if the maximum number of concurrency
@@ -3663,8 +3681,9 @@ static inline int __mm_cid_try_get(struct task_struct *t, struct mm_struct *mm)
	 * and number of threads. Expanding cid allocation as much as
	 * possible improves cache locality.
	 */
-	cid = atomic_read(&mm->max_nr_cid);
+	cid = max_nr_cid;
	while (cid < READ_ONCE(mm->nr_cpus_allowed) && cid < atomic_read(&mm->mm_users)) {
+		/* atomic_try_cmpxchg loads previous mm->max_nr_cid into cid. */
		if (!atomic_try_cmpxchg(&mm->max_nr_cid, &cid, cid + 1))
			continue;
		if (!cpumask_test_and_set_cpu(cid, cidmask))
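[The core of the change above is a lock-free "clamp an atomic to a moving
upper bound" loop. Below is a stand-alone sketch of that pattern;
clamp_atomic_max() is an illustrative helper, not a kernel API.]

#include <linux/atomic.h>

/*
 * Clamp *v down to 'bound' without taking a lock.  atomic_try_cmpxchg()
 * refreshes 'cur' with the latest value on failure, so the loop either
 * observes a value already within the bound or installs the clamped one.
 */
static int clamp_atomic_max(atomic_t *v, int bound)
{
	int cur = atomic_read(v);

	while (cur > bound) {
		if (atomic_try_cmpxchg(v, &cur, bound)) {
			cur = bound;
			break;
		}
	}
	return cur;
}

[In __mm_cid_try_get() the bound, min(nr_cpus_allowed, mm_users), is
re-evaluated on every iteration, which is why the patch open-codes the loop
instead of using a helper like the one sketched here.]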
From patchwork Fri Dec 13 09:54:07 2024
X-Patchwork-Submitter: Gabriele Monaco
X-Patchwork-Id: 850614
From: Gabriele Monaco
To: Mathieu Desnoyers, Ingo Molnar, Peter Zijlstra, Andrew Morton,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Juri Lelli, Vincent Guittot, Mel Gorman, Shuah Khan,
    linux-kselftest@vger.kernel.org, Gabriele Monaco
Subject: [PATCH v2 4/4] rseq/selftests: Add test for mm_cid compaction
Date: Fri, 13 Dec 2024 10:54:07 +0100
Message-ID: <20241213095407.271357-5-gmonaco@redhat.com>
In-Reply-To: <20241213095407.271357-1-gmonaco@redhat.com>
References: <20241213095407.271357-1-gmonaco@redhat.com>

The kernel periodically runs task_mm_cid_work to compact the mm_cids of
each process; this test tries to validate that it runs correctly and in
a timely manner.

The test spawns one thread pinned to each CPU, then each thread,
including the main one, runs in short bursts for some time. During this
period, the mm_cids should span all values between 0 and nproc.

At the end of this phase, a thread with a high enough mm_cid
(> nproc/2) is selected as the new leader and all other threads
terminate. After some time, the only running thread should see 0 as its
mm_cid; if that doesn't happen, the compaction mechanism didn't work
and the test fails.

The test never fails if only one core is available, in which case we
cannot test anything, as the only available mm_cid is 0.

Signed-off-by: Gabriele Monaco
---
 tools/testing/selftests/rseq/.gitignore       |   1 +
 tools/testing/selftests/rseq/Makefile         |   2 +-
 .../selftests/rseq/mm_cid_compaction_test.c   | 157 ++++++++++++++++++
 3 files changed, 159 insertions(+), 1 deletion(-)
 create mode 100644 tools/testing/selftests/rseq/mm_cid_compaction_test.c

diff --git a/tools/testing/selftests/rseq/.gitignore b/tools/testing/selftests/rseq/.gitignore
index 16496de5f6ce..2c89f97e4f73 100644
--- a/tools/testing/selftests/rseq/.gitignore
+++ b/tools/testing/selftests/rseq/.gitignore
@@ -3,6 +3,7 @@ basic_percpu_ops_test
 basic_percpu_ops_mm_cid_test
 basic_test
 basic_rseq_op_test
+mm_cid_compaction_test
 param_test
 param_test_benchmark
 param_test_compare_twice
diff --git a/tools/testing/selftests/rseq/Makefile b/tools/testing/selftests/rseq/Makefile
index 5a3432fceb58..ce1b38f46a35 100644
--- a/tools/testing/selftests/rseq/Makefile
+++ b/tools/testing/selftests/rseq/Makefile
@@ -16,7 +16,7 @@ OVERRIDE_TARGETS = 1
 
 TEST_GEN_PROGS = basic_test basic_percpu_ops_test basic_percpu_ops_mm_cid_test param_test \
		param_test_benchmark param_test_compare_twice param_test_mm_cid \
-		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice
+		param_test_mm_cid_benchmark param_test_mm_cid_compare_twice mm_cid_compaction_test
 
 TEST_GEN_PROGS_EXTENDED = librseq.so
diff --git a/tools/testing/selftests/rseq/mm_cid_compaction_test.c b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
new file mode 100644
index 000000000000..9bc7310c3cb5
--- /dev/null
+++ b/tools/testing/selftests/rseq/mm_cid_compaction_test.c
@@ -0,0 +1,157 @@
+// SPDX-License-Identifier: LGPL-2.1
+#define _GNU_SOURCE
+#include <assert.h>
+#include <errno.h>
+#include <pthread.h>
+#include <sched.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <unistd.h>
+
+#include "../kselftest.h"
+#include "rseq.h"
+
+#define VERBOSE 0
+#define printf_verbose(fmt, ...)			\
+	do {						\
+		if (VERBOSE)				\
+			printf(fmt, ##__VA_ARGS__);	\
+	} while (0)
+
+/* 0.5 s */
+#define RUNNER_PERIOD 500000
+/* Number of runs before we terminate or get the token */
+#define THREAD_RUNS 5
+
+/*
+ * Number of times we check that the mm_cid were compacted.
+ * Checks are repeated every RUNNER_PERIOD
+ */
+#define MM_CID_CLEANUP_TIMEOUT 10
+
+struct thread_args {
+	int num_cpus;
+	pthread_mutex_t token;
+	pthread_t *tinfo;
+};
+
+static void *thread_runner(void *arg)
+{
+	struct thread_args *args = arg;
+	int i, ret, curr_mm_cid;
+
+	for (i = 0; i < THREAD_RUNS; i++)
+		usleep(RUNNER_PERIOD);
+	curr_mm_cid = rseq_current_mm_cid();
+	/*
+	 * We select one thread with high enough mm_cid to be the new leader
+	 * all other threads (including the main thread) will terminate
+	 * After some time, the mm_cid of the only remaining thread should
+	 * converge to 0, if not, the test fails
+	 */
+	if (curr_mm_cid > args->num_cpus / 2 &&
+	    !pthread_mutex_trylock(&args->token)) {
+		printf_verbose("cpu%d has %d and will be the new leader\n",
+			       sched_getcpu(), curr_mm_cid);
+		for (i = 0; i < args->num_cpus; i++) {
+			if (args->tinfo[i] == pthread_self())
+				continue;
+			ret = pthread_join(args->tinfo[i], NULL);
+			if (ret) {
+				fprintf(stderr,
+					"Error: failed to join thread %d (%d): %s\n",
+					i, ret, strerror(ret));
+				assert(ret == 0);
+			}
+		}
+		free(args->tinfo);
+
+		for (i = 0; i < MM_CID_CLEANUP_TIMEOUT; i++) {
+			curr_mm_cid = rseq_current_mm_cid();
+			printf_verbose("run %d: mm_cid %d on cpu%d\n", i,
+				       curr_mm_cid, sched_getcpu());
+			if (curr_mm_cid == 0) {
+				printf_verbose(
+					"mm_cids successfully compacted, exiting\n");
+				pthread_exit(NULL);
+			}
+			usleep(RUNNER_PERIOD);
+		}
+		assert(false);
+	}
+	printf_verbose("cpu%d has %d and is going to terminate\n",
+		       sched_getcpu(), curr_mm_cid);
+	pthread_exit(NULL);
+}
+
+void test_mm_cid_compaction(void)
+{
+	cpu_set_t affinity, test_affinity;
+	int i, j, ret, num_threads;
+	pthread_t *tinfo;
+	struct thread_args args = { .token = PTHREAD_MUTEX_INITIALIZER };
+
+	sched_getaffinity(0, sizeof(affinity), &affinity);
+	CPU_ZERO(&test_affinity);
+	num_threads = CPU_COUNT(&affinity);
+	tinfo = calloc(num_threads, sizeof(*tinfo));
+	if (!tinfo) {
+		fprintf(stderr, "Error: failed to allocate tinfo(%d): %s\n",
+			errno, strerror(errno));
+		assert(ret == 0);
+	}
+	args.num_cpus = num_threads;
+	args.tinfo = tinfo;
+	if (num_threads == 1) {
+		printf_verbose(
+			"Running on a single cpu, cannot test anything\n");
+		return;
+	}
+	for (i = 0, j = 0; i < CPU_SETSIZE && j < num_threads; i++) {
+		if (CPU_ISSET(i, &affinity)) {
+			ret = pthread_create(&tinfo[j], NULL, thread_runner,
+					     &args);
+			if (ret) {
+				fprintf(stderr,
+					"Error: failed to create thread(%d): %s\n",
+					ret, strerror(ret));
+				assert(ret == 0);
+			}
+			CPU_SET(i, &test_affinity);
+			pthread_setaffinity_np(tinfo[j], sizeof(test_affinity),
+					       &test_affinity);
+			CPU_CLR(i, &test_affinity);
+			++j;
+		}
+	}
+	printf_verbose("Started %d threads\n", num_threads);
+
+	/* Also main thread will terminate if it is not selected as leader */
+	thread_runner(&args);
+}
+
+int main(int argc, char **argv)
+{
+	if (rseq_register_current_thread()) {
+		fprintf(stderr,
+			"Error: rseq_register_current_thread(...) failed(%d): %s\n",
+			errno, strerror(errno));
+		goto error;
+	}
+	if (!rseq_mm_cid_available()) {
+		fprintf(stderr, "Error: rseq_mm_cid unavailable\n");
+		goto error;
+	}
+	test_mm_cid_compaction();
+	if (rseq_unregister_current_thread()) {
+		fprintf(stderr,
+			"Error: rseq_unregister_current_thread(...) failed(%d): %s\n",
+			errno, strerror(errno));
+		goto error;
+	}
+	return 0;
+
+error:
+	return -1;
+}
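[Usage note, not part of the series: the test relies on the mm_cid value that
the kernel publishes in each registered thread's struct rseq area;
rseq_current_mm_cid() from the selftests' rseq.h boils down to a read of that
field, roughly as sketched below (field name per the uapi <linux/rseq.h>
definition; rseq registration and error handling are omitted). The test
itself can be built and run through the usual kselftest entry points, e.g.
"make -C tools/testing/selftests TARGETS=rseq run_tests", on a kernel with
CONFIG_RSEQ and mm_cid support.]

#include <linux/rseq.h>	/* uapi struct rseq, which carries the mm_cid field */

/*
 * Minimal illustration: once a thread has registered its struct rseq
 * with the rseq() syscall, the kernel keeps rs->mm_cid up to date on
 * every context switch, so compaction can be observed by polling it.
 */
static inline int read_mm_cid(const volatile struct rseq *rs)
{
	return rs->mm_cid;
}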