Message ID | 20250527-ublk_task_per_io-v7-1-cbdbaf283baa@purestorage.com |
---|---|
State | New |
Series | ublk: decouple server threads from ublk_queues/hctxs |
On Tue, May 27, 2025 at 4:01 PM Uday Shankar <ushankar@purestorage.com> wrote: > > Currently, ublk_drv associates to each hardware queue (hctx) a unique > task (called the queue's ubq_daemon) which is allowed to issue > COMMIT_AND_FETCH commands against the hctx. If any other task attempts > to do so, the command fails immediately with EINVAL. When considered > together with the block layer architecture, the result is that for each > CPU C on the system, there is a unique ublk server thread which is > allowed to handle I/O submitted on CPU C. This can lead to suboptimal > performance under imbalanced load generation. For an extreme example, > suppose all the load is generated on CPUs mapping to a single ublk > server thread. Then that thread may be fully utilized and become the > bottleneck in the system, while other ublk server threads are totally > idle. > > This issue can also be addressed directly in the ublk server without > kernel support by having threads dequeue I/Os and pass them around to > ensure even load. But this solution requires inter-thread communication > at least twice for each I/O (submission and completion), which is > generally a bad pattern for performance. The problem gets even worse > with zero copy, as more inter-thread communication would be required to > have the buffer register/unregister calls to come from the correct > thread. > > Therefore, address this issue in ublk_drv by allowing each I/O to have > its own daemon task. Two I/Os in the same queue are now allowed to be > serviced by different daemon tasks - this was not possible before. > Imbalanced load can then be balanced across all ublk server threads by > having the ublk server threads issue FETCH_REQs in a round-robin manner. > As a small toy example, consider a system with a single ublk device > having 2 queues, each of depth 4. A ublk server having 4 threads could > issue its FETCH_REQs against this device as follows (where each entry is > the qid,tag pair that the FETCH_REQ targets): > > ublk server thread: T0 T1 T2 T3 > 0,0 0,1 0,2 0,3 > 1,3 1,0 1,1 1,2 > > This setup allows for load that is concentrated on one hctx/ublk_queue > to be spread out across all ublk server threads, alleviating the issue > described above. > > Add the new UBLK_F_PER_IO_DAEMON feature to ublk_drv, which ublk servers > can use to essentially test for the presence of this change and tailor > their behavior accordingly. > > Signed-off-by: Uday Shankar <ushankar@purestorage.com> > Reviewed-by: Caleb Sander Mateos <csander@purestorage.com> Still looks good to me. Best, Caleb
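[Editor's note] The round-robin FETCH_REQ layout in the commit message can be seen as a short standalone sketch. The program below reproduces the example table (2 queues, depth 4, 4 threads) with the assignment thread = (qid + tag) % nthreads; the ublk_issue_fetch() helper is a stand-in, not a real driver or libublksrv call.

```c
/* Standalone illustration of the commit message's example layout. */
#include <stdio.h>

#define NR_QUEUES   2
#define QUEUE_DEPTH 4
#define NR_THREADS  4

/* Stand-in for however the server queues a FETCH_REQ SQE for (qid, tag). */
static void ublk_issue_fetch(int thread, int qid, int tag)
{
	printf("T%d handles (%d,%d)\n", thread, qid, tag);
}

int main(void)
{
	/*
	 * Spread (qid, tag) pairs across server threads so that load
	 * concentrated on one queue still fans out to every thread.
	 * This reproduces the table in the commit message:
	 *   T0: (0,0) (1,3)   T1: (0,1) (1,0)
	 *   T2: (0,2) (1,1)   T3: (0,3) (1,2)
	 */
	for (int qid = 0; qid < NR_QUEUES; qid++)
		for (int tag = 0; tag < QUEUE_DEPTH; tag++)
			ublk_issue_fetch((qid + tag) % NR_THREADS, qid, tag);
	return 0;
}
```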
On Tue, May 27, 2025 at 05:01:24PM -0600, Uday Shankar wrote: > Currently, ublk_drv associates to each hardware queue (hctx) a unique > task (called the queue's ubq_daemon) which is allowed to issue > COMMIT_AND_FETCH commands against the hctx. If any other task attempts > to do so, the command fails immediately with EINVAL. When considered > together with the block layer architecture, the result is that for each > CPU C on the system, there is a unique ublk server thread which is > allowed to handle I/O submitted on CPU C. This can lead to suboptimal > performance under imbalanced load generation. For an extreme example, > suppose all the load is generated on CPUs mapping to a single ublk > server thread. Then that thread may be fully utilized and become the > bottleneck in the system, while other ublk server threads are totally > idle. > > This issue can also be addressed directly in the ublk server without > kernel support by having threads dequeue I/Os and pass them around to > ensure even load. But this solution requires inter-thread communication > at least twice for each I/O (submission and completion), which is > generally a bad pattern for performance. The problem gets even worse > with zero copy, as more inter-thread communication would be required to > have the buffer register/unregister calls to come from the correct > thread. > > Therefore, address this issue in ublk_drv by allowing each I/O to have > its own daemon task. Two I/Os in the same queue are now allowed to be > serviced by different daemon tasks - this was not possible before. > Imbalanced load can then be balanced across all ublk server threads by > having the ublk server threads issue FETCH_REQs in a round-robin manner. > As a small toy example, consider a system with a single ublk device > having 2 queues, each of depth 4. A ublk server having 4 threads could > issue its FETCH_REQs against this device as follows (where each entry is > the qid,tag pair that the FETCH_REQ targets): > > ublk server thread: T0 T1 T2 T3 > 0,0 0,1 0,2 0,3 > 1,3 1,0 1,1 1,2 > > This setup allows for load that is concentrated on one hctx/ublk_queue > to be spread out across all ublk server threads, alleviating the issue > described above. > > Add the new UBLK_F_PER_IO_DAEMON feature to ublk_drv, which ublk servers > can use to essentially test for the presence of this change and tailor > their behavior accordingly. > > Signed-off-by: Uday Shankar <ushankar@purestorage.com> > Reviewed-by: Caleb Sander Mateos <csander@purestorage.com> This patch looks close to go, just one panic triggered immediately by the following steps, I think it needs to be addressed first. Maybe we need to add one such stress test for UBLK_F_PER_IO_DAEMON too. 
1) run heavy IO:

[root@ktest-40 ublk]# ./kublk add -t null -q 2 --nthreads 4 --per_io_tasks
dev id 0: nr_hw_queues 2 queue_depth 128 block size 512 dev_capacity 524288000
max rq size 1048576 daemon pid 1283 flags 0x2042 state LIVE
queue 0: affinity(0 )
queue 1: affinity(8 )
[root@ktest-40 ublk]#
[root@ktest-40 ublk]# ~/git/fio/t/io_uring -p 0 -n 8 /dev/ublkb0

Or

`fio -numjobs=8 --ioengine=libaio --iodepth=128 --iodepth_batch_submit=32 \
--iodepth_batch_complete_min=32`

2) panic immediately:

[ 51.297750] BUG: kernel NULL pointer dereference, address: 0000000000000000
[ 51.298719] #PF: supervisor read access in kernel mode
[ 51.299403] #PF: error_code(0x0000) - not-present page
[ 51.300069] PGD 1161c8067 P4D 1161c8067 PUD 11a793067 PMD 0
[ 51.300825] Oops: Oops: 0000 [#1] SMP NOPTI
[ 51.301389] CPU: 0 UID: 0 PID: 1285 Comm: kublk Not tainted 6.15.0+ #288 PREEMPT(full)
[ 51.302375] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-1.fc39 04/01/2014
[ 51.303551] RIP: 0010:io_uring_cmd_done+0xa7/0x1d0
[ 51.304226] Code: 48 89 f1 48 89 f0 48 83 e1 bf 80 cc 01 48 81 c9 00 01 80 00 83 e6 40 48 0f 45 c1 48 89 43 48 44 89 6b 58 c7 43 5c 00 00 00 00 <8b> 07 f6 c4 08 74 12 48 89 93 e8 00 00 0
[ 51.306554] RSP: 0018:ffffd1da436e3a40 EFLAGS: 00010246
[ 51.307253] RAX: 0000000000000100 RBX: ffff8d9cd3737300 RCX: 0000000000000001
[ 51.308178] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[ 51.309333] RBP: 0000000000000001 R08: 0000000000000018 R09: 0000000000190015
[ 51.310744] R10: 0000000000190015 R11: 0000000000000035 R12: ffff8d9cd1c7c000
[ 51.311986] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 51.313386] FS: 00007f2c293916c0(0000) GS:ffff8da179df6000(0000) knlGS:0000000000000000
[ 51.314899] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 51.315926] CR2: 0000000000000000 CR3: 00000001161c9002 CR4: 0000000000772ef0
[ 51.317179] PKRU: 55555554
[ 51.317682] Call Trace:
[ 51.318040] <TASK>
[ 51.318355] ublk_cmd_list_tw_cb+0x30/0x40 [ublk_drv]
[ 51.319061] __io_run_local_work_loop+0x72/0x80
[ 51.319696] __io_run_local_work+0x69/0x1e0
[ 51.320274] io_cqring_wait+0x8f/0x6a0
[ 51.320794] __do_sys_io_uring_enter+0x500/0x770
[ 51.321422] do_syscall_64+0x82/0x170
[ 51.321891] ? __do_sys_io_uring_enter+0x500/0x770

Thanks,
Ming
On Thu, May 29, 2025 at 3:00 AM Ming Lei <ming.lei@redhat.com> wrote: > > On Tue, May 27, 2025 at 05:01:24PM -0600, Uday Shankar wrote: > > Currently, ublk_drv associates to each hardware queue (hctx) a unique > > task (called the queue's ubq_daemon) which is allowed to issue > > COMMIT_AND_FETCH commands against the hctx. If any other task attempts > > to do so, the command fails immediately with EINVAL. When considered > > together with the block layer architecture, the result is that for each > > CPU C on the system, there is a unique ublk server thread which is > > allowed to handle I/O submitted on CPU C. This can lead to suboptimal > > performance under imbalanced load generation. For an extreme example, > > suppose all the load is generated on CPUs mapping to a single ublk > > server thread. Then that thread may be fully utilized and become the > > bottleneck in the system, while other ublk server threads are totally > > idle. > > > > This issue can also be addressed directly in the ublk server without > > kernel support by having threads dequeue I/Os and pass them around to > > ensure even load. But this solution requires inter-thread communication > > at least twice for each I/O (submission and completion), which is > > generally a bad pattern for performance. The problem gets even worse > > with zero copy, as more inter-thread communication would be required to > > have the buffer register/unregister calls to come from the correct > > thread. > > > > Therefore, address this issue in ublk_drv by allowing each I/O to have > > its own daemon task. Two I/Os in the same queue are now allowed to be > > serviced by different daemon tasks - this was not possible before. > > Imbalanced load can then be balanced across all ublk server threads by > > having the ublk server threads issue FETCH_REQs in a round-robin manner. > > As a small toy example, consider a system with a single ublk device > > having 2 queues, each of depth 4. A ublk server having 4 threads could > > issue its FETCH_REQs against this device as follows (where each entry is > > the qid,tag pair that the FETCH_REQ targets): > > > > ublk server thread: T0 T1 T2 T3 > > 0,0 0,1 0,2 0,3 > > 1,3 1,0 1,1 1,2 > > > > This setup allows for load that is concentrated on one hctx/ublk_queue > > to be spread out across all ublk server threads, alleviating the issue > > described above. > > > > Add the new UBLK_F_PER_IO_DAEMON feature to ublk_drv, which ublk servers > > can use to essentially test for the presence of this change and tailor > > their behavior accordingly. > > > > Signed-off-by: Uday Shankar <ushankar@purestorage.com> > > Reviewed-by: Caleb Sander Mateos <csander@purestorage.com> > > This patch looks close to go, just one panic triggered immediately by > the following steps, I think it needs to be addressed first. > > Maybe we need to add one such stress test for UBLK_F_PER_IO_DAEMON too. 
> > > 1) run heavy IO: > > [root@ktest-40 ublk]# ./kublk add -t null -q 2 --nthreads 4 --per_io_tasks > dev id 0: nr_hw_queues 2 queue_depth 128 block size 512 dev_capacity 524288000 > max rq size 1048576 daemon pid 1283 flags 0x2042 state LIVE > queue 0: affinity(0 ) > queue 1: affinity(8 ) > [root@ktest-40 ublk]# > [root@ktest-40 ublk]# ~/git/fio/t/io_uring -p 0 -n 8 /dev/ublkb0 > > Or > > `fio -numjobs=8 --ioengine=libaio --iodepth=128 --iodepth_batch_submit=32 \ > --iodepth_batch_complete_min=32` > > 2) panic immediately: > > [ 51.297750] BUG: kernel NULL pointer dereference, address: 0000000000000000 > [ 51.298719] #PF: supervisor read access in kernel mode > [ 51.299403] #PF: error_code(0x0000) - not-present page > [ 51.300069] PGD 1161c8067 P4D 1161c8067 PUD 11a793067 PMD 0 > [ 51.300825] Oops: Oops: 0000 [#1] SMP NOPTI > [ 51.301389] CPU: 0 UID: 0 PID: 1285 Comm: kublk Not tainted 6.15.0+ #288 PREEMPT(full) > [ 51.302375] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-1.fc39 04/01/2014 > [ 51.303551] RIP: 0010:io_uring_cmd_done+0xa7/0x1d0 > [ 51.304226] Code: 48 89 f1 48 89 f0 48 83 e1 bf 80 cc 01 48 81 c9 00 01 80 00 83 e6 40 48 0f 45 c1 48 89 43 48 44 89 6b 58 c7 43 5c 00 00 00 00 <8b> 07 f6 c4 08 74 12 48 89 93 e8 00 00 0 > [ 51.306554] RSP: 0018:ffffd1da436e3a40 EFLAGS: 00010246 > [ 51.307253] RAX: 0000000000000100 RBX: ffff8d9cd3737300 RCX: 0000000000000001 > [ 51.308178] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000 > [ 51.309333] RBP: 0000000000000001 R08: 0000000000000018 R09: 0000000000190015 > [ 51.310744] R10: 0000000000190015 R11: 0000000000000035 R12: ffff8d9cd1c7c000 > [ 51.311986] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000 > [ 51.313386] FS: 00007f2c293916c0(0000) GS:ffff8da179df6000(0000) knlGS:0000000000000000 > [ 51.314899] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 > [ 51.315926] CR2: 0000000000000000 CR3: 00000001161c9002 CR4: 0000000000772ef0 > [ 51.317179] PKRU: 55555554 > [ 51.317682] Call Trace: > [ 51.318040] <TASK> > [ 51.318355] ublk_cmd_list_tw_cb+0x30/0x40 [ublk_drv] > [ 51.319061] __io_run_local_work_loop+0x72/0x80 > [ 51.319696] __io_run_local_work+0x69/0x1e0 > [ 51.320274] io_cqring_wait+0x8f/0x6a0 > [ 51.320794] __do_sys_io_uring_enter+0x500/0x770 > [ 51.321422] do_syscall_64+0x82/0x170 > [ 51.321891] ? __do_sys_io_uring_enter+0x500/0x770 Maybe we need to keep the ubq != this_q check in ublk_queue_rqs() in addition to io->task != this_io->task? I'm not quite sure how a single plug would end up with requests for multiple hctxs on the same ublk device. But nvme_queue_rqs() checks this too, so presumably it is possible. And ublk_cmd_list_tw_cb() assumes all requests in pdu->req_list belong to the same ubq. Best, Caleb
On Tue, May 27, 2025 at 4:01 PM Uday Shankar <ushankar@purestorage.com> wrote: > > Currently, ublk_drv associates to each hardware queue (hctx) a unique > task (called the queue's ubq_daemon) which is allowed to issue > COMMIT_AND_FETCH commands against the hctx. If any other task attempts > to do so, the command fails immediately with EINVAL. When considered > together with the block layer architecture, the result is that for each > CPU C on the system, there is a unique ublk server thread which is > allowed to handle I/O submitted on CPU C. This can lead to suboptimal > performance under imbalanced load generation. For an extreme example, > suppose all the load is generated on CPUs mapping to a single ublk > server thread. Then that thread may be fully utilized and become the > bottleneck in the system, while other ublk server threads are totally > idle. > > This issue can also be addressed directly in the ublk server without > kernel support by having threads dequeue I/Os and pass them around to > ensure even load. But this solution requires inter-thread communication > at least twice for each I/O (submission and completion), which is > generally a bad pattern for performance. The problem gets even worse > with zero copy, as more inter-thread communication would be required to > have the buffer register/unregister calls to come from the correct > thread. > > Therefore, address this issue in ublk_drv by allowing each I/O to have > its own daemon task. Two I/Os in the same queue are now allowed to be > serviced by different daemon tasks - this was not possible before. > Imbalanced load can then be balanced across all ublk server threads by > having the ublk server threads issue FETCH_REQs in a round-robin manner. > As a small toy example, consider a system with a single ublk device > having 2 queues, each of depth 4. A ublk server having 4 threads could > issue its FETCH_REQs against this device as follows (where each entry is > the qid,tag pair that the FETCH_REQ targets): > > ublk server thread: T0 T1 T2 T3 > 0,0 0,1 0,2 0,3 > 1,3 1,0 1,1 1,2 > > This setup allows for load that is concentrated on one hctx/ublk_queue > to be spread out across all ublk server threads, alleviating the issue > described above. > > Add the new UBLK_F_PER_IO_DAEMON feature to ublk_drv, which ublk servers > can use to essentially test for the presence of this change and tailor > their behavior accordingly. 
> > Signed-off-by: Uday Shankar <ushankar@purestorage.com> > Reviewed-by: Caleb Sander Mateos <csander@purestorage.com> > --- > drivers/block/ublk_drv.c | 108 +++++++++++++++++++++--------------------- > include/uapi/linux/ublk_cmd.h | 9 ++++ > 2 files changed, 64 insertions(+), 53 deletions(-) > > diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c > index 1b47341962d095830a02e07418815f633fa9ed8a..60d4a0251339dd0a56760e9c582d394c8b386661 100644 > --- a/drivers/block/ublk_drv.c > +++ b/drivers/block/ublk_drv.c > @@ -69,7 +69,8 @@ > | UBLK_F_USER_RECOVERY_FAIL_IO \ > | UBLK_F_UPDATE_SIZE \ > | UBLK_F_AUTO_BUF_REG \ > - | UBLK_F_QUIESCE) > + | UBLK_F_QUIESCE \ > + | UBLK_F_PER_IO_DAEMON) > > #define UBLK_F_ALL_RECOVERY_FLAGS (UBLK_F_USER_RECOVERY \ > | UBLK_F_USER_RECOVERY_REISSUE \ > @@ -166,6 +167,8 @@ struct ublk_io { > /* valid if UBLK_IO_FLAG_OWNED_BY_SRV is set */ > struct request *req; > }; > + > + struct task_struct *task; > }; > > struct ublk_queue { > @@ -173,11 +176,9 @@ struct ublk_queue { > int q_depth; > > unsigned long flags; > - struct task_struct *ubq_daemon; > struct ublksrv_io_desc *io_cmd_buf; > > bool force_abort; > - bool timeout; > bool canceling; > bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */ > unsigned short nr_io_ready; /* how many ios setup */ > @@ -1099,11 +1100,6 @@ static inline struct ublk_uring_cmd_pdu *ublk_get_uring_cmd_pdu( > return io_uring_cmd_to_pdu(ioucmd, struct ublk_uring_cmd_pdu); > } > > -static inline bool ubq_daemon_is_dying(struct ublk_queue *ubq) > -{ > - return !ubq->ubq_daemon || ubq->ubq_daemon->flags & PF_EXITING; > -} > - > /* todo: handle partial completion */ > static inline void __ublk_complete_rq(struct request *req) > { > @@ -1275,13 +1271,13 @@ static void ublk_dispatch_req(struct ublk_queue *ubq, > /* > * Task is exiting if either: > * > - * (1) current != ubq_daemon. > + * (1) current != io->task. > * io_uring_cmd_complete_in_task() tries to run task_work > - * in a workqueue if ubq_daemon(cmd's task) is PF_EXITING. > + * in a workqueue if cmd's task is PF_EXITING. > * > * (2) current->flags & PF_EXITING. 
> */ > - if (unlikely(current != ubq->ubq_daemon || current->flags & PF_EXITING)) { > + if (unlikely(current != io->task || current->flags & PF_EXITING)) { > __ublk_abort_rq(ubq, req); > return; > } > @@ -1341,13 +1337,12 @@ static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd, > } while (rq); > } > > -static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l) > +static void ublk_queue_cmd_list(struct ublk_io *io, struct rq_list *l) > { > - struct request *rq = rq_list_peek(l); > - struct io_uring_cmd *cmd = ubq->ios[rq->tag].cmd; > + struct io_uring_cmd *cmd = io->cmd; > struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd); > > - pdu->req_list = rq; > + pdu->req_list = rq_list_peek(l); > rq_list_init(l); > io_uring_cmd_complete_in_task(cmd, ublk_cmd_list_tw_cb); > } > @@ -1355,13 +1350,10 @@ static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l) > static enum blk_eh_timer_return ublk_timeout(struct request *rq) > { > struct ublk_queue *ubq = rq->mq_hctx->driver_data; > + struct ublk_io *io = &ubq->ios[rq->tag]; > > if (ubq->flags & UBLK_F_UNPRIVILEGED_DEV) { > - if (!ubq->timeout) { > - send_sig(SIGKILL, ubq->ubq_daemon, 0); > - ubq->timeout = true; > - } > - > + send_sig(SIGKILL, io->task, 0); > return BLK_EH_DONE; > } > > @@ -1429,24 +1421,25 @@ static void ublk_queue_rqs(struct rq_list *rqlist) > { > struct rq_list requeue_list = { }; > struct rq_list submit_list = { }; > - struct ublk_queue *ubq = NULL; > + struct ublk_io *io = NULL; > struct request *req; > > while ((req = rq_list_pop(rqlist))) { > struct ublk_queue *this_q = req->mq_hctx->driver_data; > + struct ublk_io *this_io = &this_q->ios[req->tag]; > > - if (ubq && ubq != this_q && !rq_list_empty(&submit_list)) > - ublk_queue_cmd_list(ubq, &submit_list); > - ubq = this_q; > + if (io && io->task != this_io->task && !rq_list_empty(&submit_list)) > + ublk_queue_cmd_list(io, &submit_list); > + io = this_io; > > - if (ublk_prep_req(ubq, req, true) == BLK_STS_OK) > + if (ublk_prep_req(this_q, req, true) == BLK_STS_OK) > rq_list_add_tail(&submit_list, req); > else > rq_list_add_tail(&requeue_list, req); > } > > - if (ubq && !rq_list_empty(&submit_list)) > - ublk_queue_cmd_list(ubq, &submit_list); > + if (io && !rq_list_empty(&submit_list)) Since you're already touching this, I think you can drop the ubq/io check entirely. If submit_list is nonempty, the loop must have executed at least once, so io/ubq couldn't be NULL. Best, Caleb > + ublk_queue_cmd_list(io, &submit_list); > *rqlist = requeue_list; > } > > @@ -1474,17 +1467,6 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq) > /* All old ioucmds have to be completed */ > ubq->nr_io_ready = 0; > > - /* > - * old daemon is PF_EXITING, put it now > - * > - * It could be NULL in case of closing one quisced device. > - */ > - if (ubq->ubq_daemon) > - put_task_struct(ubq->ubq_daemon); > - /* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */ > - ubq->ubq_daemon = NULL; > - ubq->timeout = false; > - > for (i = 0; i < ubq->q_depth; i++) { > struct ublk_io *io = &ubq->ios[i]; > > @@ -1495,6 +1477,17 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq) > io->flags &= UBLK_IO_FLAG_CANCELED; > io->cmd = NULL; > io->addr = 0; > + > + /* > + * old task is PF_EXITING, put it now > + * > + * It could be NULL in case of closing one quiesced > + * device. 
> + */ > + if (io->task) { > + put_task_struct(io->task); > + io->task = NULL; > + } > } > } > > @@ -1516,7 +1509,7 @@ static void ublk_reset_ch_dev(struct ublk_device *ub) > for (i = 0; i < ub->dev_info.nr_hw_queues; i++) > ublk_queue_reinit(ub, ublk_get_queue(ub, i)); > > - /* set to NULL, otherwise new ubq_daemon cannot mmap the io_cmd_buf */ > + /* set to NULL, otherwise new tasks cannot mmap io_cmd_buf */ > ub->mm = NULL; > ub->nr_queues_ready = 0; > ub->nr_privileged_daemon = 0; > @@ -1783,6 +1776,7 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd, > struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd); > struct ublk_queue *ubq = pdu->ubq; > struct task_struct *task; > + struct ublk_io *io; > > if (WARN_ON_ONCE(!ubq)) > return; > @@ -1791,13 +1785,14 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd, > return; > > task = io_uring_cmd_get_task(cmd); > - if (WARN_ON_ONCE(task && task != ubq->ubq_daemon)) > + io = &ubq->ios[pdu->tag]; > + if (WARN_ON_ONCE(task && task != io->task)) > return; > > if (!ubq->canceling) > ublk_start_cancel(ubq); > > - WARN_ON_ONCE(ubq->ios[pdu->tag].cmd != cmd); > + WARN_ON_ONCE(io->cmd != cmd); > ublk_cancel_cmd(ubq, pdu->tag, issue_flags); > } > > @@ -1930,8 +1925,6 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq) > { > ubq->nr_io_ready++; > if (ublk_queue_ready(ubq)) { > - ubq->ubq_daemon = current; > - get_task_struct(ubq->ubq_daemon); > ub->nr_queues_ready++; > > if (capable(CAP_SYS_ADMIN)) > @@ -2084,6 +2077,7 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq, > } > > ublk_fill_io_cmd(io, cmd, buf_addr); > + WRITE_ONCE(io->task, get_task_struct(current)); > ublk_mark_io_ready(ub, ubq); > out: > mutex_unlock(&ub->mutex); > @@ -2179,6 +2173,7 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd, > const struct ublksrv_io_cmd *ub_cmd) > { > struct ublk_device *ub = cmd->file->private_data; > + struct task_struct *task; > struct ublk_queue *ubq; > struct ublk_io *io; > u32 cmd_op = cmd->cmd_op; > @@ -2193,13 +2188,14 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd, > goto out; > > ubq = ublk_get_queue(ub, ub_cmd->q_id); > - if (ubq->ubq_daemon && ubq->ubq_daemon != current) > - goto out; > > if (tag >= ubq->q_depth) > goto out; > > io = &ubq->ios[tag]; > + task = READ_ONCE(io->task); > + if (task && task != current) > + goto out; > > /* there is pending io cmd, something must be wrong */ > if (io->flags & UBLK_IO_FLAG_ACTIVE) { > @@ -2449,9 +2445,14 @@ static void ublk_deinit_queue(struct ublk_device *ub, int q_id) > { > int size = ublk_queue_cmd_buf_size(ub, q_id); > struct ublk_queue *ubq = ublk_get_queue(ub, q_id); > + int i; > + > + for (i = 0; i < ubq->q_depth; i++) { > + struct ublk_io *io = &ubq->ios[i]; > + if (io->task) > + put_task_struct(io->task); > + } > > - if (ubq->ubq_daemon) > - put_task_struct(ubq->ubq_daemon); > if (ubq->io_cmd_buf) > free_pages((unsigned long)ubq->io_cmd_buf, get_order(size)); > } > @@ -2923,7 +2924,8 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header) > ub->dev_info.flags &= UBLK_F_ALL; > > ub->dev_info.flags |= UBLK_F_CMD_IOCTL_ENCODE | > - UBLK_F_URING_CMD_COMP_IN_TASK; > + UBLK_F_URING_CMD_COMP_IN_TASK | > + UBLK_F_PER_IO_DAEMON; > > /* GET_DATA isn't needed any more with USER_COPY or ZERO COPY */ > if (ub->dev_info.flags & (UBLK_F_USER_COPY | UBLK_F_SUPPORT_ZERO_COPY | > @@ -3188,14 +3190,14 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub, > int ublksrv_pid = 
(int)header->data[0]; > int ret = -EINVAL; > > - pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n", > - __func__, ub->dev_info.nr_hw_queues, header->dev_id); > - /* wait until new ubq_daemon sending all FETCH_REQ */ > + pr_devel("%s: Waiting for all FETCH_REQs, dev id %d...\n", __func__, > + header->dev_id); > + > if (wait_for_completion_interruptible(&ub->completion)) > return -EINTR; > > - pr_devel("%s: All new ubq_daemons(nr: %d) are ready, dev id %d\n", > - __func__, ub->dev_info.nr_hw_queues, header->dev_id); > + pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__, > + header->dev_id); > > mutex_lock(&ub->mutex); > if (ublk_nosrv_should_stop_dev(ub)) > diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h > index 56c7e3fc666fc578a545d15a9767e58f3a4a2f3b..77d9d6af46da878cf30df6e3e31758a6f8f61afa 100644 > --- a/include/uapi/linux/ublk_cmd.h > +++ b/include/uapi/linux/ublk_cmd.h > @@ -272,6 +272,15 @@ > */ > #define UBLK_F_QUIESCE (1ULL << 12) > > +/* > + * If this feature is set, ublk_drv supports each (qid,tag) pair having > + * its own independent daemon task that is responsible for handling it. > + * If it is not set, daemons are per-queue instead, so for two pairs > + * (qid1,tag1) and (qid2,tag2), if qid1 == qid2, then the same task must > + * be responsible for handling (qid1,tag1) and (qid2,tag2). > + */ > +#define UBLK_F_PER_IO_DAEMON (1ULL << 13) > + > /* device state */ > #define UBLK_S_DEV_DEAD 0 > #define UBLK_S_DEV_LIVE 1 > > -- > 2.34.1 >
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c index 1b47341962d095830a02e07418815f633fa9ed8a..60d4a0251339dd0a56760e9c582d394c8b386661 100644 --- a/drivers/block/ublk_drv.c +++ b/drivers/block/ublk_drv.c @@ -69,7 +69,8 @@ | UBLK_F_USER_RECOVERY_FAIL_IO \ | UBLK_F_UPDATE_SIZE \ | UBLK_F_AUTO_BUF_REG \ - | UBLK_F_QUIESCE) + | UBLK_F_QUIESCE \ + | UBLK_F_PER_IO_DAEMON) #define UBLK_F_ALL_RECOVERY_FLAGS (UBLK_F_USER_RECOVERY \ | UBLK_F_USER_RECOVERY_REISSUE \ @@ -166,6 +167,8 @@ struct ublk_io { /* valid if UBLK_IO_FLAG_OWNED_BY_SRV is set */ struct request *req; }; + + struct task_struct *task; }; struct ublk_queue { @@ -173,11 +176,9 @@ struct ublk_queue { int q_depth; unsigned long flags; - struct task_struct *ubq_daemon; struct ublksrv_io_desc *io_cmd_buf; bool force_abort; - bool timeout; bool canceling; bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */ unsigned short nr_io_ready; /* how many ios setup */ @@ -1099,11 +1100,6 @@ static inline struct ublk_uring_cmd_pdu *ublk_get_uring_cmd_pdu( return io_uring_cmd_to_pdu(ioucmd, struct ublk_uring_cmd_pdu); } -static inline bool ubq_daemon_is_dying(struct ublk_queue *ubq) -{ - return !ubq->ubq_daemon || ubq->ubq_daemon->flags & PF_EXITING; -} - /* todo: handle partial completion */ static inline void __ublk_complete_rq(struct request *req) { @@ -1275,13 +1271,13 @@ static void ublk_dispatch_req(struct ublk_queue *ubq, /* * Task is exiting if either: * - * (1) current != ubq_daemon. + * (1) current != io->task. * io_uring_cmd_complete_in_task() tries to run task_work - * in a workqueue if ubq_daemon(cmd's task) is PF_EXITING. + * in a workqueue if cmd's task is PF_EXITING. * * (2) current->flags & PF_EXITING. */ - if (unlikely(current != ubq->ubq_daemon || current->flags & PF_EXITING)) { + if (unlikely(current != io->task || current->flags & PF_EXITING)) { __ublk_abort_rq(ubq, req); return; } @@ -1341,13 +1337,12 @@ static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd, } while (rq); } -static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l) +static void ublk_queue_cmd_list(struct ublk_io *io, struct rq_list *l) { - struct request *rq = rq_list_peek(l); - struct io_uring_cmd *cmd = ubq->ios[rq->tag].cmd; + struct io_uring_cmd *cmd = io->cmd; struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd); - pdu->req_list = rq; + pdu->req_list = rq_list_peek(l); rq_list_init(l); io_uring_cmd_complete_in_task(cmd, ublk_cmd_list_tw_cb); } @@ -1355,13 +1350,10 @@ static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l) static enum blk_eh_timer_return ublk_timeout(struct request *rq) { struct ublk_queue *ubq = rq->mq_hctx->driver_data; + struct ublk_io *io = &ubq->ios[rq->tag]; if (ubq->flags & UBLK_F_UNPRIVILEGED_DEV) { - if (!ubq->timeout) { - send_sig(SIGKILL, ubq->ubq_daemon, 0); - ubq->timeout = true; - } - + send_sig(SIGKILL, io->task, 0); return BLK_EH_DONE; } @@ -1429,24 +1421,25 @@ static void ublk_queue_rqs(struct rq_list *rqlist) { struct rq_list requeue_list = { }; struct rq_list submit_list = { }; - struct ublk_queue *ubq = NULL; + struct ublk_io *io = NULL; struct request *req; while ((req = rq_list_pop(rqlist))) { struct ublk_queue *this_q = req->mq_hctx->driver_data; + struct ublk_io *this_io = &this_q->ios[req->tag]; - if (ubq && ubq != this_q && !rq_list_empty(&submit_list)) - ublk_queue_cmd_list(ubq, &submit_list); - ubq = this_q; + if (io && io->task != this_io->task && !rq_list_empty(&submit_list)) + ublk_queue_cmd_list(io, &submit_list); + io = this_io; - if 
(ublk_prep_req(ubq, req, true) == BLK_STS_OK) + if (ublk_prep_req(this_q, req, true) == BLK_STS_OK) rq_list_add_tail(&submit_list, req); else rq_list_add_tail(&requeue_list, req); } - if (ubq && !rq_list_empty(&submit_list)) - ublk_queue_cmd_list(ubq, &submit_list); + if (io && !rq_list_empty(&submit_list)) + ublk_queue_cmd_list(io, &submit_list); *rqlist = requeue_list; } @@ -1474,17 +1467,6 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq) /* All old ioucmds have to be completed */ ubq->nr_io_ready = 0; - /* - * old daemon is PF_EXITING, put it now - * - * It could be NULL in case of closing one quisced device. - */ - if (ubq->ubq_daemon) - put_task_struct(ubq->ubq_daemon); - /* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */ - ubq->ubq_daemon = NULL; - ubq->timeout = false; - for (i = 0; i < ubq->q_depth; i++) { struct ublk_io *io = &ubq->ios[i]; @@ -1495,6 +1477,17 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq) io->flags &= UBLK_IO_FLAG_CANCELED; io->cmd = NULL; io->addr = 0; + + /* + * old task is PF_EXITING, put it now + * + * It could be NULL in case of closing one quiesced + * device. + */ + if (io->task) { + put_task_struct(io->task); + io->task = NULL; + } } } @@ -1516,7 +1509,7 @@ static void ublk_reset_ch_dev(struct ublk_device *ub) for (i = 0; i < ub->dev_info.nr_hw_queues; i++) ublk_queue_reinit(ub, ublk_get_queue(ub, i)); - /* set to NULL, otherwise new ubq_daemon cannot mmap the io_cmd_buf */ + /* set to NULL, otherwise new tasks cannot mmap io_cmd_buf */ ub->mm = NULL; ub->nr_queues_ready = 0; ub->nr_privileged_daemon = 0; @@ -1783,6 +1776,7 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd, struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd); struct ublk_queue *ubq = pdu->ubq; struct task_struct *task; + struct ublk_io *io; if (WARN_ON_ONCE(!ubq)) return; @@ -1791,13 +1785,14 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd, return; task = io_uring_cmd_get_task(cmd); - if (WARN_ON_ONCE(task && task != ubq->ubq_daemon)) + io = &ubq->ios[pdu->tag]; + if (WARN_ON_ONCE(task && task != io->task)) return; if (!ubq->canceling) ublk_start_cancel(ubq); - WARN_ON_ONCE(ubq->ios[pdu->tag].cmd != cmd); + WARN_ON_ONCE(io->cmd != cmd); ublk_cancel_cmd(ubq, pdu->tag, issue_flags); } @@ -1930,8 +1925,6 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq) { ubq->nr_io_ready++; if (ublk_queue_ready(ubq)) { - ubq->ubq_daemon = current; - get_task_struct(ubq->ubq_daemon); ub->nr_queues_ready++; if (capable(CAP_SYS_ADMIN)) @@ -2084,6 +2077,7 @@ static int ublk_fetch(struct io_uring_cmd *cmd, struct ublk_queue *ubq, } ublk_fill_io_cmd(io, cmd, buf_addr); + WRITE_ONCE(io->task, get_task_struct(current)); ublk_mark_io_ready(ub, ubq); out: mutex_unlock(&ub->mutex); @@ -2179,6 +2173,7 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd, const struct ublksrv_io_cmd *ub_cmd) { struct ublk_device *ub = cmd->file->private_data; + struct task_struct *task; struct ublk_queue *ubq; struct ublk_io *io; u32 cmd_op = cmd->cmd_op; @@ -2193,13 +2188,14 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd, goto out; ubq = ublk_get_queue(ub, ub_cmd->q_id); - if (ubq->ubq_daemon && ubq->ubq_daemon != current) - goto out; if (tag >= ubq->q_depth) goto out; io = &ubq->ios[tag]; + task = READ_ONCE(io->task); + if (task && task != current) + goto out; /* there is pending io cmd, something must be wrong */ if (io->flags & UBLK_IO_FLAG_ACTIVE) { @@ 
-2449,9 +2445,14 @@ static void ublk_deinit_queue(struct ublk_device *ub, int q_id) { int size = ublk_queue_cmd_buf_size(ub, q_id); struct ublk_queue *ubq = ublk_get_queue(ub, q_id); + int i; + + for (i = 0; i < ubq->q_depth; i++) { + struct ublk_io *io = &ubq->ios[i]; + if (io->task) + put_task_struct(io->task); + } - if (ubq->ubq_daemon) - put_task_struct(ubq->ubq_daemon); if (ubq->io_cmd_buf) free_pages((unsigned long)ubq->io_cmd_buf, get_order(size)); } @@ -2923,7 +2924,8 @@ static int ublk_ctrl_add_dev(const struct ublksrv_ctrl_cmd *header) ub->dev_info.flags &= UBLK_F_ALL; ub->dev_info.flags |= UBLK_F_CMD_IOCTL_ENCODE | - UBLK_F_URING_CMD_COMP_IN_TASK; + UBLK_F_URING_CMD_COMP_IN_TASK | + UBLK_F_PER_IO_DAEMON; /* GET_DATA isn't needed any more with USER_COPY or ZERO COPY */ if (ub->dev_info.flags & (UBLK_F_USER_COPY | UBLK_F_SUPPORT_ZERO_COPY | @@ -3188,14 +3190,14 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub, int ublksrv_pid = (int)header->data[0]; int ret = -EINVAL; - pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n", - __func__, ub->dev_info.nr_hw_queues, header->dev_id); - /* wait until new ubq_daemon sending all FETCH_REQ */ + pr_devel("%s: Waiting for all FETCH_REQs, dev id %d...\n", __func__, + header->dev_id); + if (wait_for_completion_interruptible(&ub->completion)) return -EINTR; - pr_devel("%s: All new ubq_daemons(nr: %d) are ready, dev id %d\n", - __func__, ub->dev_info.nr_hw_queues, header->dev_id); + pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__, + header->dev_id); mutex_lock(&ub->mutex); if (ublk_nosrv_should_stop_dev(ub)) diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h index 56c7e3fc666fc578a545d15a9767e58f3a4a2f3b..77d9d6af46da878cf30df6e3e31758a6f8f61afa 100644 --- a/include/uapi/linux/ublk_cmd.h +++ b/include/uapi/linux/ublk_cmd.h @@ -272,6 +272,15 @@ */ #define UBLK_F_QUIESCE (1ULL << 12) +/* + * If this feature is set, ublk_drv supports each (qid,tag) pair having + * its own independent daemon task that is responsible for handling it. + * If it is not set, daemons are per-queue instead, so for two pairs + * (qid1,tag1) and (qid2,tag2), if qid1 == qid2, then the same task must + * be responsible for handling (qid1,tag1) and (qid2,tag2). + */ +#define UBLK_F_PER_IO_DAEMON (1ULL << 13) + /* device state */ #define UBLK_S_DEV_DEAD 0 #define UBLK_S_DEV_LIVE 1
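[Editor's note] As a usage illustration of the new uapi flag: a ublk server built against headers from a kernel carrying this patch can simply test the bit in the device flags it gets back from the control path. The pick_thread() helper below is hypothetical and only shows the shape of the decision; the round-robin formula is one possible assignment, not something mandated by the driver.

```c
#include <linux/ublk_cmd.h>	/* assumes uapi headers from a kernel with this patch */

/*
 * Hypothetical helper: choose which server thread owns (qid, tag).
 * If UBLK_F_PER_IO_DAEMON is advertised in the device flags, any
 * assignment works; otherwise every tag of a queue must stay on the
 * same thread.
 */
static int pick_thread(const struct ublksrv_ctrl_dev_info *info,
		       int qid, int tag, int nthreads)
{
	if (info->flags & UBLK_F_PER_IO_DAEMON)
		return (qid + tag) % nthreads;	/* per-I/O daemons allowed */
	return qid % nthreads;			/* per-queue daemon required */
}
```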