From patchwork Tue Jun 17 13:43:23 2025
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 897491
From: Daniel Wagner
Date: Tue, 17 Jun 2025 15:43:23 +0200
Subject: [PATCH 1/5] lib/group_cpus: Let group_cpus_evenly() return the
 number of initialized masks
Message-Id: <20250617-isolcpus-queue-counters-v1-1-13923686b54b@kernel.org>
References: <20250617-isolcpus-queue-counters-v1-0-13923686b54b@kernel.org>
In-Reply-To: <20250617-isolcpus-queue-counters-v1-0-13923686b54b@kernel.org>
To: Jens Axboe, Christoph Hellwig
Cc: Keith Busch, Sagi Grimberg, "Michael S. Tsirkin", "Martin K. Petersen",
 Thomas Gleixner, Costa Shulyupin, Juri Lelli, Valentin Schneider,
 Waiman Long, Ming Lei, Frederic Weisbecker, Hannes Reinecke,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
 linux-nvme@lists.infradead.org, megaraidlinux.pdl@broadcom.com,
 linux-scsi@vger.kernel.org, storagedev@microchip.com,
 virtualization@lists.linux.dev, GR-QLogic-Storage-Upstream@marvell.com,
 Daniel Wagner

group_cpus_evenly() might allocate fewer groups than requested:

group_cpus_evenly()
	__group_cpus_evenly()
		alloc_nodes_groups()
			# allocated total groups may be less than numgrps when
			# the active total CPU number is less than numgrps

In this case, the caller will perform an out-of-bounds access, because
it assumes the returned array contains numgrps masks. Return the number
of groups actually created so the caller can limit its accesses
accordingly.

Acked-by: Thomas Gleixner
Reviewed-by: Hannes Reinecke
Reviewed-by: Ming Lei
Signed-off-by: Daniel Wagner
---
 block/blk-mq-cpumap.c        |  6 +++---
 drivers/virtio/virtio_vdpa.c |  9 +++++----
 fs/fuse/virtio_fs.c          |  6 +++---
 include/linux/group_cpus.h   |  2 +-
 kernel/irq/affinity.c        | 11 +++++------
 lib/group_cpus.c             | 16 ++++++++--------
 6 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
index 444798c5374f48088b661b519f2638bda8556cf2..269161252add756897fce1b65cae5b2e6aebd647 100644
--- a/block/blk-mq-cpumap.c
+++ b/block/blk-mq-cpumap.c
@@ -19,9 +19,9 @@
 void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 {
 	const struct cpumask *masks;
-	unsigned int queue, cpu;
+	unsigned int queue, cpu, nr_masks;
 
-	masks = group_cpus_evenly(qmap->nr_queues);
+	masks = group_cpus_evenly(qmap->nr_queues, &nr_masks);
 	if (!masks) {
 		for_each_possible_cpu(cpu)
 			qmap->mq_map[cpu] = qmap->queue_offset;
@@ -29,7 +29,7 @@ void blk_mq_map_queues(struct blk_mq_queue_map *qmap)
 	}
 
 	for (queue = 0; queue < qmap->nr_queues; queue++) {
-		for_each_cpu(cpu, &masks[queue])
+		for_each_cpu(cpu, &masks[queue % nr_masks])
 			qmap->mq_map[cpu] = qmap->queue_offset + queue;
 	}
 	kfree(masks);
diff --git a/drivers/virtio/virtio_vdpa.c b/drivers/virtio/virtio_vdpa.c
index 1f60c9d5cb1810a6f208c24bb2ac640d537391a0..a7b297dae4890c9d6002744b90fc133bbedb7b44 100644
--- a/drivers/virtio/virtio_vdpa.c
+++ b/drivers/virtio/virtio_vdpa.c
@@ -329,20 +329,21 @@ create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
 	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
 		unsigned int this_vecs = affd->set_size[i];
+		unsigned int nr_masks;
 		int j;
-		struct cpumask *result = group_cpus_evenly(this_vecs);
+		struct cpumask *result = group_cpus_evenly(this_vecs, &nr_masks);
 
 		if (!result) {
 			kfree(masks);
 			return NULL;
 		}
 
-		for (j = 0; j < this_vecs; j++)
+		for (j = 0; j < nr_masks; j++)
 			cpumask_copy(&masks[curvec + j], &result[j]);
 		kfree(result);
 
-		curvec += this_vecs;
-		usedvecs += this_vecs;
+		curvec += nr_masks;
+		usedvecs += nr_masks;
 	}
 
 	/* Fill out vectors at the end that don't need affinity */

diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
index 53c2626e90e723ad88f1aee69d7507b4f197ab13..3fbfb1a2942b753643015a45fa0c5d89ff72aa2f 100644
--- a/fs/fuse/virtio_fs.c
+++ b/fs/fuse/virtio_fs.c
@@ -862,7 +862,7 @@ static void virtio_fs_requests_done_work(struct work_struct *work)
 static void virtio_fs_map_queues(struct virtio_device *vdev, struct virtio_fs *fs)
 {
 	const struct cpumask *mask, *masks;
-	unsigned int q, cpu;
+	unsigned int q, cpu, nr_masks;
 
 	/* First attempt to map using existing transport layer affinities
 	 * e.g. PCIe MSI-X
@@ -882,7 +882,7 @@ static void virtio_fs_map_queues(struct virtio_device *vdev, struct virtio_fs *fs)
 		return;
 fallback:
 	/* Attempt to map evenly in groups over the CPUs */
-	masks = group_cpus_evenly(fs->num_request_queues);
+	masks = group_cpus_evenly(fs->num_request_queues, &nr_masks);
 	/* If even this fails we default to all CPUs use first request queue */
 	if (!masks) {
 		for_each_possible_cpu(cpu)
@@ -891,7 +891,7 @@ static void virtio_fs_map_queues(struct virtio_device *vdev, struct virtio_fs *fs)
 	}
 
 	for (q = 0; q < fs->num_request_queues; q++) {
-		for_each_cpu(cpu, &masks[q])
+		for_each_cpu(cpu, &masks[q % nr_masks])
 			fs->mq_map[cpu] = q + VQ_REQUEST;
 	}
 	kfree(masks);

diff --git a/include/linux/group_cpus.h b/include/linux/group_cpus.h
index e42807ec61f6e8cf3787af7daa0d8686edfef0a3..9d4e5ab6c314b31c09fda82c3f6ac18f77e9de36 100644
--- a/include/linux/group_cpus.h
+++ b/include/linux/group_cpus.h
@@ -9,6 +9,6 @@
 #include <linux/kernel.h>
 #include <linux/cpu.h>
 
-struct cpumask *group_cpus_evenly(unsigned int numgrps);
+struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *nummasks);
 
 #endif

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 44a4eba80315cc098ecfa366ca1d88483641b12a..4013e6ad2b2f1cb91de12bb428b3281105f7d23b 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -69,21 +69,20 @@ irq_create_affinity_masks(unsigned int nvecs, struct irq_affinity *affd)
 	 * have multiple sets, build each sets affinity mask separately.
 	 */
 	for (i = 0, usedvecs = 0; i < affd->nr_sets; i++) {
-		unsigned int this_vecs = affd->set_size[i];
-		int j;
-		struct cpumask *result = group_cpus_evenly(this_vecs);
+		unsigned int nr_masks, this_vecs = affd->set_size[i];
+		struct cpumask *result = group_cpus_evenly(this_vecs, &nr_masks);
 
 		if (!result) {
 			kfree(masks);
 			return NULL;
 		}
 
-		for (j = 0; j < this_vecs; j++)
+		for (int j = 0; j < nr_masks; j++)
 			cpumask_copy(&masks[curvec + j].mask, &result[j]);
 		kfree(result);
 
-		curvec += this_vecs;
-		usedvecs += this_vecs;
+		curvec += nr_masks;
+		usedvecs += nr_masks;
 	}
 
 	/* Fill out vectors at the end that don't need affinity */

diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index ee272c4cefcc13907ce9f211f479615d2e3c9154..a075959ccb212ece84334e4859c884f4217d30b6 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -332,9 +332,11 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
 /**
  * group_cpus_evenly - Group all CPUs evenly per NUMA/CPU locality
  * @numgrps: number of groups
+ * @nummasks: number of initialized cpumasks
  *
  * Return: cpumask array if successful, NULL otherwise. And each element
- * includes CPUs assigned to this group
+ * includes CPUs assigned to this group. nummasks contains the number
+ * of initialized masks which can be less than numgrps.
  *
  * Try to put close CPUs from viewpoint of CPU and NUMA locality into
  * same group, and run two-stage grouping:
@@ -344,7 +346,7 @@ static int __group_cpus_evenly(unsigned int startgrp, unsigned int numgrps,
  * We guarantee in the resulted grouping that all CPUs are covered, and
  * no same CPU is assigned to multiple groups
  */
-struct cpumask *group_cpus_evenly(unsigned int numgrps)
+struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *nummasks)
 {
 	unsigned int curgrp = 0, nr_present = 0, nr_others = 0;
 	cpumask_var_t *node_to_cpumask;
@@ -386,7 +388,7 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 	ret = __group_cpus_evenly(curgrp, numgrps, node_to_cpumask,
 				  npresmsk, nmsk, masks);
 	if (ret < 0)
-		goto fail_build_affinity;
+		goto fail_node_to_cpumask;
 	nr_present = ret;
 
 	/*
@@ -405,10 +407,6 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 	if (ret >= 0)
 		nr_others = ret;
 
- fail_build_affinity:
-	if (ret >= 0)
-		WARN_ON(nr_present + nr_others < numgrps);
-
  fail_node_to_cpumask:
 	free_node_to_cpumask(node_to_cpumask);
 
@@ -421,10 +419,11 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 		kfree(masks);
 		return NULL;
 	}
+	*nummasks = min(nr_present + nr_others, numgrps);
 	return masks;
 }
 #else /* CONFIG_SMP */
-struct cpumask *group_cpus_evenly(unsigned int numgrps)
+struct cpumask *group_cpus_evenly(unsigned int numgrps, unsigned int *nummasks)
 {
 	struct cpumask *masks = kcalloc(numgrps, sizeof(*masks), GFP_KERNEL);
 
@@ -433,6 +432,7 @@ struct cpumask *group_cpus_evenly(unsigned int numgrps)
 
 	/* assign all CPUs(cpu 0) to the 1st group only */
 	cpumask_copy(&masks[0], cpu_possible_mask);
+	*nummasks = 1;
 	return masks;
 }
 #endif /* CONFIG_SMP */
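The resulting contract is simple: the returned array still holds
numgrps allocated entries, but only the first *nummasks of them are
initialized. A caller sketch against the new signature (kernel context,
not a standalone program; use_group() is a hypothetical consumer, not
part of the series):

/*
 * Sketch only. group_cpus_evenly() and kfree() are the real APIs used
 * by the patch above; use_group() stands in for whatever the caller
 * does with each mask.
 */
static int map_groups(unsigned int numgrps)
{
	unsigned int nr_masks, i;
	struct cpumask *masks = group_cpus_evenly(numgrps, &nr_masks);

	if (!masks)
		return -ENOMEM;
	/* Only the first nr_masks entries are valid (nr_masks <= numgrps) */
	for (i = 0; i < nr_masks; i++)
		use_group(&masks[i]);
	kfree(masks);
	return 0;
}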
From patchwork Tue Jun 17 13:43:25 2025
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 897490
From: Daniel Wagner
Date: Tue, 17 Jun 2025 15:43:25 +0200
Subject: [PATCH 3/5] nvme-pci: use block layer helpers to calculate num of queues
Message-Id: <20250617-isolcpus-queue-counters-v1-3-13923686b54b@kernel.org>
References: <20250617-isolcpus-queue-counters-v1-0-13923686b54b@kernel.org>
In-Reply-To: <20250617-isolcpus-queue-counters-v1-0-13923686b54b@kernel.org>
To: Jens Axboe, Christoph Hellwig

The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered. To account for this, the
block layer provides a helper function to retrieve the maximum number
of queues. Use it to set an appropriate upper queue number limit.

Reviewed-by: Christoph Hellwig
Reviewed-by: Hannes Reinecke
Signed-off-by: Daniel Wagner
---
 drivers/nvme/host/pci.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 8ff12e415cb5d1529d760b33f3e0cf3b8d1555f1..f134bf4f41b2581e4809e618250de7985b5c9701 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -97,7 +97,7 @@ static int io_queue_count_set(const char *val, const struct kernel_param *kp)
 	int ret;
 
 	ret = kstrtouint(val, 10, &n);
-	if (ret != 0 || n > num_possible_cpus())
+	if (ret != 0 || n > blk_mq_num_possible_queues(0))
 		return -EINVAL;
 	return param_set_uint(val, kp);
 }
@@ -2520,7 +2520,8 @@ static unsigned int nvme_max_io_queues(struct nvme_dev *dev)
 	 */
 	if (dev->ctrl.quirks & NVME_QUIRK_SHARED_TAGS)
 		return 1;
-	return num_possible_cpus() + dev->nr_write_queues + dev->nr_poll_queues;
+	return blk_mq_num_possible_queues(0) + dev->nr_write_queues +
+		dev->nr_poll_queues;
 }
 
 static int nvme_setup_io_queues(struct nvme_dev *dev)
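blk_mq_num_possible_queues() is introduced by patch 2/5 of the series,
which is not part of this excerpt. A plausible sketch of its semantics,
assuming it counts the possible CPUs left after isolcpus housekeeping
filtering and treats a zero argument as "no caller-imposed cap" (the
housekeeping calls below are the real kernel API; the body itself is an
assumption, not the series' verbatim implementation):

/*
 * Sketch only: how such a helper could look. A max_queues of 0 means
 * the caller imposes no cap of its own, hence min_not_zero().
 */
unsigned int blk_mq_num_possible_queues(unsigned int max_queues)
{
	const struct cpumask *mask = cpu_possible_mask;

	/* Drop CPUs isolated via isolcpus=managed_irq,... */
	if (housekeeping_enabled(HK_TYPE_MANAGED_IRQ))
		mask = housekeeping_cpumask(HK_TYPE_MANAGED_IRQ);

	return min_not_zero(cpumask_weight(mask), max_queues);
}

This reading matches the call sites: nvme above passes 0 because it
applies no cap of its own, while the virtio drivers in patch 5/5 pass
their device-derived limits.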
From patchwork Tue Jun 17 13:43:27 2025
X-Patchwork-Submitter: Daniel Wagner
X-Patchwork-Id: 897489
From: Daniel Wagner
Date: Tue, 17 Jun 2025 15:43:27 +0200
Subject: [PATCH 5/5] virtio: blk/scsi: use block layer helpers to calculate num of queues
Message-Id: <20250617-isolcpus-queue-counters-v1-5-13923686b54b@kernel.org>
References: <20250617-isolcpus-queue-counters-v1-0-13923686b54b@kernel.org>
In-Reply-To: <20250617-isolcpus-queue-counters-v1-0-13923686b54b@kernel.org>
To: Jens Axboe, Christoph Hellwig

The calculation of the upper limit for queues does not depend solely on
the number of possible CPUs; for example, the isolcpus kernel
command-line option must also be considered. To account for this, the
block layer provides a helper function to retrieve the maximum number
of queues. Use it to set an appropriate upper queue number limit.

Reviewed-by: Christoph Hellwig
Acked-by: Michael S. Tsirkin
Reviewed-by: Hannes Reinecke
Reviewed-by: Ming Lei
Signed-off-by: Daniel Wagner
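For illustration, a userspace sketch of the clamping the init_vq() hunk
below introduces (all numbers are made up, and blk_mq_num_possible_queues()
is stubbed here, since patch 2/5 is not part of this excerpt):

#include <stdio.h>

/* Userspace stand-in for the kernel's min_not_zero(): 0 means "unset". */
#define min_not_zero(x, y) \
	((x) == 0 ? (y) : ((y) == 0 ? (x) : ((x) < (y) ? (x) : (y))))

/* Stub: assume 8 possible CPUs of which 2 are isolated, leaving 6. */
static unsigned int blk_mq_num_possible_queues(unsigned int max_queues)
{
	unsigned int usable_cpus = 6;

	return min_not_zero(max_queues, usable_cpus);
}

int main(void)
{
	unsigned int num_request_queues = 0;	/* module parameter unset */
	unsigned int num_vqs = 16;		/* advertised by the device */

	num_vqs = blk_mq_num_possible_queues(
			min_not_zero(num_request_queues, num_vqs));
	printf("num_vqs = %u\n", num_vqs);	/* prints 6 */
	return 0;
}

The cap now comes from the usable-CPU count instead of nr_cpu_ids, so
isolated CPUs no longer inflate the virtqueue count.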
---
 drivers/block/virtio_blk.c | 5 ++---
 drivers/scsi/virtio_scsi.c | 1 +
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 30bca8cb7106040d3bbb11ba9e0b546510534324..e649fa67bac16b4f0c6e8e8f0e6bec111897c355 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -976,9 +976,8 @@ static int init_vq(struct virtio_blk *vblk)
 		return -EINVAL;
 	}
 
-	num_vqs = min_t(unsigned int,
-			min_not_zero(num_request_queues, nr_cpu_ids),
-			num_vqs);
+	num_vqs = blk_mq_num_possible_queues(
+			min_not_zero(num_request_queues, num_vqs));
 
 	num_poll_vqs = min_t(unsigned int, poll_queues, num_vqs - 1);

diff --git a/drivers/scsi/virtio_scsi.c b/drivers/scsi/virtio_scsi.c
index 21ce3e9401929cd273fde08b0944e8b47e1e66cc..96a69edddbe5555574fc8fed1ba7c82a99df4472 100644
--- a/drivers/scsi/virtio_scsi.c
+++ b/drivers/scsi/virtio_scsi.c
@@ -919,6 +919,7 @@ static int virtscsi_probe(struct virtio_device *vdev)
 	/* We need to know how many queues before we allocate. */
 	num_queues = virtscsi_config_get(vdev, num_queues) ? : 1;
 	num_queues = min_t(unsigned int, nr_cpu_ids, num_queues);
+	num_queues = blk_mq_num_possible_queues(num_queues);
 
 	num_targets = virtscsi_config_get(vdev, max_target) + 1;
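One idiom worth noting in the context above: "virtscsi_config_get(vdev,
num_queues) ? : 1" uses GCC's two-operand conditional, which evaluates
to the left operand when it is non-zero and to 1 otherwise. A tiny
userspace trace of the whole virtscsi clamping sequence, with made-up
numbers and the same stubbed helper semantics as before:

#include <stdio.h>

static unsigned int min_u(unsigned int a, unsigned int b)
{
	return a < b ? a : b;
}

int main(void)
{
	unsigned int nr_cpu_ids = 8;	/* made up */
	unsigned int usable_cpus = 6;	/* stub: possible minus isolated */
	unsigned int cfg = 32;		/* device-advertised queue count */
	unsigned int num_queues;

	num_queues = cfg ? : 1;		/* GNU extension: cfg, or 1 if zero */
	num_queues = min_u(nr_cpu_ids, num_queues);	/* -> 8 */
	num_queues = min_u(usable_cpus, num_queues);	/* helper stub -> 6 */
	printf("num_queues = %u\n", num_queues);
	return 0;
}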