Message ID | 20200514073718.17690-1-t-kristo@ti.com |
---|---|
State | New |
Series | [1/1] soc: ti: omap-prm: use atomic iopoll instead of sleeping one |
```diff
diff --git a/drivers/soc/ti/omap_prm.c b/drivers/soc/ti/omap_prm.c
index 96c6f777519c..c9b3f9ebf0bb 100644
--- a/drivers/soc/ti/omap_prm.c
+++ b/drivers/soc/ti/omap_prm.c
@@ -256,10 +256,10 @@ static int omap_reset_deassert(struct reset_controller_dev *rcdev,
 		goto exit;
 
 	/* wait for the status to be set */
-	ret = readl_relaxed_poll_timeout(reset->prm->base +
-					 reset->prm->data->rstst,
-					 v, v & BIT(st_bit), 1,
-					 OMAP_RESET_MAX_WAIT);
+	ret = readl_relaxed_poll_timeout_atomic(reset->prm->base +
+						reset->prm->data->rstst,
+						v, v & BIT(st_bit), 1,
+						OMAP_RESET_MAX_WAIT);
 	if (ret)
 		pr_err("%s: timedout waiting for %s:%lu\n", __func__,
 		       reset->prm->data->name, id);
```
The reset handling APIs for omap-prm can be invoked from PM runtime, which runs in atomic context. For this to work properly, switch to the atomic iopoll version instead of the current one, which can sleep. Otherwise, this throws a "BUG: scheduling while atomic" warning. The issue is seen rather easily when CONFIG_PREEMPT is enabled.

Signed-off-by: Tero Kristo <t-kristo@ti.com>
---
 drivers/soc/ti/omap_prm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)
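For context, the practical difference between the two <linux/iopoll.h> helpers is how they wait between register reads: the sleeping variant waits with usleep_range(), which may schedule and therefore must not be used in atomic context, while the *_atomic variant busy-waits with udelay(). Below is a minimal sketch of the two polling patterns using hypothetical helpers (poll_bit_sleeping/poll_bit_atomic are illustrative only, not the actual kernel macros):

```c
#include <linux/ktime.h>
#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/io.h>

/* Hypothetical helper mirroring the sleeping poll loop. */
static int poll_bit_sleeping(void __iomem *reg, u32 mask, u64 timeout_us)
{
	ktime_t timeout = ktime_add_us(ktime_get(), timeout_us);

	for (;;) {
		if (readl_relaxed(reg) & mask)
			return 0;
		if (ktime_compare(ktime_get(), timeout) > 0)
			return -ETIMEDOUT;
		usleep_range(1, 2);	/* may sleep: illegal in atomic context */
	}
}

/* Hypothetical helper mirroring the atomic poll loop. */
static int poll_bit_atomic(void __iomem *reg, u32 mask, u64 timeout_us)
{
	ktime_t timeout = ktime_add_us(ktime_get(), timeout_us);

	for (;;) {
		if (readl_relaxed(reg) & mask)
			return 0;
		if (ktime_compare(ktime_get(), timeout) > 0)
			return -ETIMEDOUT;
		udelay(1);		/* busy-wait: safe in atomic context */
	}
}
```

Since the deassert path can be entered from PM runtime callbacks holding spinlocks, only the busy-waiting pattern is safe there, which is what the switch to readl_relaxed_poll_timeout_atomic() provides.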