Message ID | 20250110061639.1280907-4-chenridong@huaweicloud.com
---|---
State | New
Series | padata: fix UAF issues
On Fri, Jan 10, 2025 at 06:16:39AM +0000, Chen Ridong wrote:
...
> Fixes: bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by using per-instance padata queues")
> Signed-off-by: Chen Ridong <chenridong@huawei.com>

Series looks good, thanks for the persistence.

Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com>

> diff --git a/kernel/padata.c b/kernel/padata.c
...
> static void invoke_padata_reorder(struct work_struct *work)
> @@ -364,6 +370,8 @@ static void invoke_padata_reorder(struct work_struct *work)
> 	pd = container_of(work, struct parallel_data, reorder_work);
> 	padata_reorder(pd);
> 	local_bh_enable();
> +	/* Pairs with putting the reorder_work in the serial_wq */

s/putting/getting/
On 2025/1/14 1:00, Daniel Jordan wrote:
> On Fri, Jan 10, 2025 at 06:16:39AM +0000, Chen Ridong wrote:
> ...
>> Fixes: bbefa1dd6a6d ("crypto: pcrypt - Avoid deadlock by using per-instance padata queues")
>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>
> Series looks good, thanks for the persistence.
>

Thank you for your patience.

Best regards,
Ridong

> Acked-by: Daniel Jordan <daniel.m.jordan@oracle.com>
>
>> diff --git a/kernel/padata.c b/kernel/padata.c
> ...
>> static void invoke_padata_reorder(struct work_struct *work)
>> @@ -364,6 +370,8 @@ static void invoke_padata_reorder(struct work_struct *work)
>> 	pd = container_of(work, struct parallel_data, reorder_work);
>> 	padata_reorder(pd);
>> 	local_bh_enable();
>> +	/* Pairs with putting the reorder_work in the serial_wq */
>
> s/putting/getting/
>
On Fri, Jan 10, 2025 at 06:16:39AM +0000, Chen Ridong wrote:
> From: Chen Ridong <chenridong@huawei.com>
>
> Although the previous patch can avoid ps and ps UAF for _do_serial, it
> can not avoid potential UAF issue for reorder_work. This issue can
> happen just as below:
>
> crypto_request            crypto_request            crypto_del_alg
> padata_do_serial
>   ...
>   padata_reorder
>   // processes all remaining
>   // requests then breaks
>   while (1) {
>     if (!padata)
>       break;
>     ...
>   }
>
>                           padata_do_serial
>                           // new request added
>                           list_add
>   // sees the new request
>   queue_work(reorder_work)
>                           padata_reorder
>                             queue_work_on(squeue->work)
> ...
>
> <kworker context>
> padata_serial_worker
> // completes new request,
> // no more outstanding
> // requests
>                                                     crypto_del_alg
>                                                     // free pd
>
> <kworker context>
> invoke_padata_reorder
> // UAF of pd

Looking back this explanation is actually broken.  The call
crypto_del_alg does not free anything immediately.  It can only
start freeing things once the final tfm user goes away.  Any crypto
request of that tfm must have completed before that happens.

If not there is a serious bug in the Crypto API.

So if crypto_del_alg is leading to a freeing of the pd while there
are still outstanding users of that tfm, then this points to a bug
in the Crypto API and not padata.

Can you still reproduce this bug easily if you revert the patches
in this series?  If so we should be able to track down the real bug.

To recap, every tfm holds a ref count on the underlying crypto_alg.
All crypto requests must complete before a tfm can be freed, which
then leads to a drop of the refcount on crypto_alg.

A crypto_alg can only be freed when its ref count hits zero.  Only
then will the associated pd be freed.

So what's missing in the above picture is the entity that is freeing
the tfm, thus leading to the actual freeing of the alg and pd.

Thanks,
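[Editor's note: the following is an illustrative, self-contained C model of the ownership chain described above -- a tfm pins its alg, and the alg's private pd is only freed when the last reference is dropped. All names here (alg_get, alg_put, tfm_alloc, tfm_free) are hypothetical and are not Crypto API calls.]

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct pd  { int dummy; };                     /* stands in for parallel_data */
    struct alg { atomic_int ref; struct pd *pd; }; /* stands in for crypto_alg    */
    struct tfm { struct alg *alg; };

    static struct alg *alg_get(struct alg *a)
    {
            atomic_fetch_add(&a->ref, 1);
            return a;
    }

    static void alg_put(struct alg *a)
    {
            if (atomic_fetch_sub(&a->ref, 1) == 1) { /* last reference dropped */
                    free(a->pd);                     /* pd dies with the alg   */
                    free(a);
                    printf("alg and pd freed\n");
            }
    }

    static struct tfm *tfm_alloc(struct alg *a)
    {
            struct tfm *t = malloc(sizeof(*t));
            t->alg = alg_get(a);                     /* tfm pins the alg */
            return t;
    }

    static void tfm_free(struct tfm *t)              /* requests completed first */
    {
            alg_put(t->alg);
            free(t);
    }

    int main(void)
    {
            struct alg *a = malloc(sizeof(*a));
            atomic_init(&a->ref, 1);                 /* registration reference */
            a->pd = malloc(sizeof(*a->pd));

            struct tfm *t = tfm_alloc(a);
            alg_put(a);  /* "crypto_del_alg": drops only the registration ref */
            /* pd is still alive here because the tfm holds a reference */
            tfm_free(t); /* last ref gone -> alg and pd actually freed */
            return 0;
    }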
On 2025/5/23 11:59, Herbert Xu wrote:
> On Fri, Jan 10, 2025 at 06:16:39AM +0000, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@huawei.com>
>>
>> Although the previous patch can avoid ps and ps UAF for _do_serial, it
>> can not avoid potential UAF issue for reorder_work. This issue can
>> happen just as below:
>>
>> crypto_request            crypto_request            crypto_del_alg
>> padata_do_serial
>>   ...
>>   padata_reorder
>>   // processes all remaining
>>   // requests then breaks
>>   while (1) {
>>     if (!padata)
>>       break;
>>     ...
>>   }
>>
>>                           padata_do_serial
>>                           // new request added
>>                           list_add
>>   // sees the new request
>>   queue_work(reorder_work)
>>                           padata_reorder
>>                             queue_work_on(squeue->work)
>> ...
>>
>> <kworker context>
>> padata_serial_worker
>> // completes new request,
>> // no more outstanding
>> // requests
>>                                                     crypto_del_alg
>>                                                     // free pd
>>
>> <kworker context>
>> invoke_padata_reorder
>> // UAF of pd
>
> Looking back this explanation is actually broken.  The call
> crypto_del_alg does not free anything immediately.  It can only
> start freeing things once the final tfm user goes away.  Any crypto
> request of that tfm must have completed before that happens.
>
> If not there is a serious bug in the Crypto API.
>
> So if crypto_del_alg is leading to a freeing of the pd while there
> are still outstanding users of that tfm, then this points to a bug
> in the Crypto API and not padata.
>
> Can you still reproduce this bug easily if you revert the patches
> in this series?  If so we should be able to track down the real bug.
>

Unfortunately, I did not reproduce this bug. It was mentioned in:
https://lore.kernel.org/all/20221019083708.27138-6-nstange@suse.de/

We thought that the kworker is asynchronous and this scenario may happen.

Thanks,
Ridong

> To recap, every tfm holds a ref count on the underlying crypto_alg.
> All crypto requests must complete before a tfm can be freed, which
> then leads to a drop of the refcount on crypto_alg.
>
> A crypto_alg can only be freed when its ref count hits zero.  Only
> then will the associated pd be freed.
>
> So what's missing in the above picture is the entity that is freeing
> the tfm, thus leading to the actual freeing of the alg and pd.
>
> Thanks,
On Mon, May 26, 2025 at 09:15:37AM +0800, Chen Ridong wrote:
>
> Unfortunately, I did not reproduce this bug. It was mentioned in:
> https://lore.kernel.org/all/20221019083708.27138-6-nstange@suse.de/
>
> We thought that the kworker is asynchronous and this scenario may happen.

Right.  I think the only way this can happen is through padata_replace.
That is indeed completely asynchronous and must be guarded against
using the reference count.

Cheers,
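[Editor's note: a minimal, single-threaded sketch of the guard referred to above. The pd_get/pd_put helpers here are hypothetical, not padata code; the point is that a queued reorder work needs its own reference because the instance's reference on the old pd can be dropped asynchronously by the replace path.]

    #include <stdio.h>
    #include <stdlib.h>

    struct pd { int ref; };

    static void pd_get(struct pd *p) { p->ref++; }

    static void pd_put(struct pd *p)
    {
            if (--p->ref == 0) {
                    free(p);
                    printf("pd freed\n");
            }
    }

    int main(void)
    {
            struct pd *old = malloc(sizeof(*old));
            old->ref = 1;    /* reference held by the padata instance */

            pd_get(old);     /* reorder path: take a ref before queuing the work */

            pd_put(old);     /* replace path (async): instance swaps in a new pd
                              * and drops its ref; old pd survives on the work's ref */

            /* ...later, the queued reorder work runs and uses old safely... */
            pd_put(old);     /* work finishes: now the old pd is actually freed */
            return 0;
    }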
diff --git a/kernel/padata.c b/kernel/padata.c
index de2c02a81469..418987056340 100644
--- a/kernel/padata.c
+++ b/kernel/padata.c
@@ -352,8 +352,14 @@ static void padata_reorder(struct parallel_data *pd)
 	smp_mb();
 
 	reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
-	if (!list_empty(&reorder->list) && padata_find_next(pd, false))
+	if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
+		/*
+		 * Other context(eg. the padata_serial_worker) can finish the request.
+		 * To avoid UAF issue, add pd ref here, and put pd ref after reorder_work finish.
+		 */
+		padata_get_pd(pd);
 		queue_work(pinst->serial_wq, &pd->reorder_work);
+	}
 }
 
 static void invoke_padata_reorder(struct work_struct *work)
@@ -364,6 +370,8 @@ static void invoke_padata_reorder(struct work_struct *work)
 	pd = container_of(work, struct parallel_data, reorder_work);
 	padata_reorder(pd);
 	local_bh_enable();
+	/* Pairs with putting the reorder_work in the serial_wq */
+	padata_put_pd(pd);
 }
 
 static void padata_serial_worker(struct work_struct *serial_work)
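[Editor's note: padata_get_pd() and padata_put_pd() are not part of this hunk; they come from earlier patches in the series. A rough sketch of what the diff assumes they look like (the exact code in the series may differ), built on the refcnt field of struct parallel_data:]

    /* Sketch only: take an extra reference on a parallel_data instance. */
    static void padata_get_pd(struct parallel_data *pd)
    {
    	refcount_inc(&pd->refcnt);
    }

    /* Sketch only: drop a reference; free the pd when the last one goes away. */
    static void padata_put_pd(struct parallel_data *pd)
    {
    	if (refcount_dec_and_test(&pd->refcnt))
    		padata_free_pd(pd);
    }

With that pairing, the reference taken just before queue_work() in padata_reorder() keeps the pd alive until invoke_padata_reorder() has finished, even if padata_replace() drops the instance's reference in the meantime.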