Message ID: 1617814235-25634-1-git-send-email-loic.poulain@linaro.org
State: Superseded
Series: [RESEND] bus: mhi: Add inbound buffers allocation flag
Hi Mani, Hemant,

On Wed, 7 Apr 2021 at 18:41, Loic Poulain <loic.poulain@linaro.org> wrote:
>
> Currently, the MHI controller driver defines which channels should
> have their inbound buffers allocated and queued. But ideally, this is
> something that should be decided by the MHI device driver instead,
> which actually deals with those buffers.
>
> Add a flag parameter to mhi_prepare_for_transfer allowing the caller to
> specify whether buffers have to be allocated and queued by the MHI stack.
>
> Keep the auto_queue flag for now; it should be removed at some point.
>
> Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
> ---

Would you consider this one for 5.13? Without it, MHI modems with an IPCR
channel are not usable, for lack of RX buffer allocation.

Thanks,
Loic
On 2021-05-03 01:12 AM, Loic Poulain wrote:
> Hi Mani, Hemant,
>
> On Wed, 7 Apr 2021 at 18:41, Loic Poulain <loic.poulain@linaro.org> wrote:
>>
>> Currently, the MHI controller driver defines which channels should
>> have their inbound buffers allocated and queued. But ideally, this is
>> something that should be decided by the MHI device driver instead,
>> which actually deals with those buffers.
>>
>> Add a flag parameter to mhi_prepare_for_transfer allowing the caller to
>> specify whether buffers have to be allocated and queued by the MHI stack.
>>
>> Keep the auto_queue flag for now; it should be removed at some point.
>>
>> Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
>> ---

Tested on an x86 Ubuntu 18.04 + SDX65 setup.

Tested-by: Bhaumik Bhatt <bbhatt@codeaurora.org>
Reviewed-by: Bhaumik Bhatt <bbhatt@codeaurora.org>

> Would you consider this one for 5.13? Without it, MHI modems with an IPCR
> channel are not usable, for lack of RX buffer allocation.
>
> Thanks,
> Loic

Thanks,
Bhaumik

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
On 4/7/21 9:50 AM, Loic Poulain wrote:
> Currently, the MHI controller driver defines which channels should
> have their inbound buffers allocated and queued. But ideally, this is
> something that should be decided by the MHI device driver instead,
> which actually deals with those buffers.
>
> Add a flag parameter to mhi_prepare_for_transfer allowing the caller to
> specify whether buffers have to be allocated and queued by the MHI stack.
>
> Keep the auto_queue flag for now; it should be removed at some point.
>
> Signed-off-by: Loic Poulain <loic.poulain@linaro.org>

Reviewed-by: Hemant Kumar <hemantk@codeaurora.org>

--
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
a Linux Foundation Collaborative Project
On Wed, Apr 07, 2021 at 06:50:35PM +0200, Loic Poulain wrote:
> Currently, the MHI controller driver defines which channels should
> have their inbound buffers allocated and queued. But ideally, this is
> something that should be decided by the MHI device driver instead,
> which actually deals with those buffers.
>
> Add a flag parameter to mhi_prepare_for_transfer allowing the caller to
> specify whether buffers have to be allocated and queued by the MHI stack.
>
> Keep the auto_queue flag for now; it should be removed at some point.
>
> Signed-off-by: Loic Poulain <loic.poulain@linaro.org>

You need to modify the API in the WWAN driver as well. With that,

Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>

Thanks,
Mani

> ---
>  drivers/bus/mhi/core/internal.h | 2 +-
>  drivers/bus/mhi/core/main.c     | 11 ++++++++---
>  drivers/net/mhi/net.c           | 2 +-
>  include/linux/mhi.h             | 12 +++++++++++-
>  net/qrtr/mhi.c                  | 2 +-
>  5 files changed, 22 insertions(+), 7 deletions(-)
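The WWAN driver Mani refers to is simply another caller of mhi_prepare_for_transfer() that would need the same update. A minimal sketch of that call-site change follows; the surrounding structure (struct mhi_wwan_dev, the _start() callback name) is assumed for illustration and is not part of this patch.

/*
 * Illustrative only: shows the call-site change being asked for -- every
 * existing caller of mhi_prepare_for_transfer() now passes a flags argument.
 */
static int mhi_wwan_ctrl_start(struct wwan_port *port)
{
	struct mhi_wwan_dev *mhiwwan = wwan_port_get_drvdata(port);

	/* Control channels have no client-side RX refill: let the core do it */
	return mhi_prepare_for_transfer(mhiwwan->mhi_dev,
					MHI_CH_INBOUND_ALLOC_BUFS);
}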
diff --git a/drivers/bus/mhi/core/internal.h b/drivers/bus/mhi/core/internal.h
index 5b9ea66..672052f 100644
--- a/drivers/bus/mhi/core/internal.h
+++ b/drivers/bus/mhi/core/internal.h
@@ -682,7 +682,7 @@ void mhi_rddm_prepare(struct mhi_controller *mhi_cntrl,
 			struct image_info *img_info);
 void mhi_fw_load_handler(struct mhi_controller *mhi_cntrl);
 int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
-			struct mhi_chan *mhi_chan);
+			struct mhi_chan *mhi_chan, enum mhi_chan_flags flags);
 int mhi_init_chan_ctxt(struct mhi_controller *mhi_cntrl,
 		       struct mhi_chan *mhi_chan);
 void mhi_deinit_chan_ctxt(struct mhi_controller *mhi_cntrl,
diff --git a/drivers/bus/mhi/core/main.c b/drivers/bus/mhi/core/main.c
index 0f1febf..432b53b 100644
--- a/drivers/bus/mhi/core/main.c
+++ b/drivers/bus/mhi/core/main.c
@@ -1384,7 +1384,8 @@ static void mhi_unprepare_channel(struct mhi_controller *mhi_cntrl,
 }
 
 int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
-			struct mhi_chan *mhi_chan)
+			struct mhi_chan *mhi_chan,
+			enum mhi_chan_flags flags)
 {
 	int ret = 0;
 	struct device *dev = &mhi_chan->mhi_dev->dev;
@@ -1409,6 +1410,9 @@ int mhi_prepare_channel(struct mhi_controller *mhi_cntrl,
 	if (ret)
 		goto error_pm_state;
 
+	if (mhi_chan->dir == DMA_FROM_DEVICE)
+		mhi_chan->pre_alloc = !!(flags & MHI_CH_INBOUND_ALLOC_BUFS);
+
 	/* Pre-allocate buffer for xfer ring */
 	if (mhi_chan->pre_alloc) {
 		int nr_el = get_nr_avail_ring_elements(mhi_cntrl,
@@ -1555,7 +1559,8 @@ void mhi_reset_chan(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan)
 }
 
 /* Move channel to start state */
-int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev,
+			     enum mhi_chan_flags flags)
 {
 	int ret, dir;
 	struct mhi_controller *mhi_cntrl = mhi_dev->mhi_cntrl;
@@ -1566,7 +1571,7 @@ int mhi_prepare_for_transfer(struct mhi_device *mhi_dev)
 		if (!mhi_chan)
 			continue;
 
-		ret = mhi_prepare_channel(mhi_cntrl, mhi_chan);
+		ret = mhi_prepare_channel(mhi_cntrl, mhi_chan, flags);
 		if (ret)
 			goto error_open_chan;
 	}
diff --git a/drivers/net/mhi/net.c b/drivers/net/mhi/net.c
index 5ec7a29..06e1455 100644
--- a/drivers/net/mhi/net.c
+++ b/drivers/net/mhi/net.c
@@ -327,7 +327,7 @@ static int mhi_net_probe(struct mhi_device *mhi_dev,
 	u64_stats_init(&mhi_netdev->stats.tx_syncp);
 
 	/* Start MHI channels */
-	err = mhi_prepare_for_transfer(mhi_dev);
+	err = mhi_prepare_for_transfer(mhi_dev, 0);
 	if (err)
 		goto out_err;
 
diff --git a/include/linux/mhi.h b/include/linux/mhi.h
index d095fba..9372acf 100644
--- a/include/linux/mhi.h
+++ b/include/linux/mhi.h
@@ -60,6 +60,14 @@ enum mhi_flags {
 };
 
 /**
+ * enum mhi_chan_flags - MHI channel flags
+ * @MHI_CH_INBOUND_ALLOC_BUFS: Automatically allocate and queue inbound buffers
+ */
+enum mhi_chan_flags {
+	MHI_CH_INBOUND_ALLOC_BUFS = BIT(0),
+};
+
+/**
  * enum mhi_device_type - Device types
  * @MHI_DEVICE_XFER: Handles data transfer
  * @MHI_DEVICE_CONTROLLER: Control device
@@ -719,8 +727,10 @@ void mhi_device_put(struct mhi_device *mhi_dev);
  *			      host and device execution environments match and
  *			      channels are in a DISABLED state.
  * @mhi_dev: Device associated with the channels
+ * @flags: MHI channel flags
  */
-int mhi_prepare_for_transfer(struct mhi_device *mhi_dev);
+int mhi_prepare_for_transfer(struct mhi_device *mhi_dev,
+			     enum mhi_chan_flags flags);
 
 /**
  * mhi_unprepare_from_transfer - Reset UL and DL channels for data transfer.
diff --git a/net/qrtr/mhi.c b/net/qrtr/mhi.c
index 2bf2b19..47afded 100644
--- a/net/qrtr/mhi.c
+++ b/net/qrtr/mhi.c
@@ -77,7 +77,7 @@ static int qcom_mhi_qrtr_probe(struct mhi_device *mhi_dev,
 	int rc;
 
 	/* start channels */
-	rc = mhi_prepare_for_transfer(mhi_dev);
+	rc = mhi_prepare_for_transfer(mhi_dev, MHI_CH_INBOUND_ALLOC_BUFS);
 	if (rc)
 		return rc;
 
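Taken together, the two client call-sites touched above show the intended usage model: a client with no RX path of its own asks the MHI core to allocate and queue inbound buffers, while a client that manages its own buffers passes 0. A minimal probe sketch follows; the "example_client" name is a placeholder and the code is illustrative, not part of the patch.

#include <linux/mhi.h>

/* Usage sketch only: contrasts the two behaviours now selectable per client */
static int example_client_probe(struct mhi_device *mhi_dev,
				const struct mhi_device_id *id)
{
	int ret;

	/*
	 * QRTR/IPCR-style client: no RX refill of its own, so have the MHI
	 * core pre-allocate and queue the inbound buffers.
	 */
	ret = mhi_prepare_for_transfer(mhi_dev, MHI_CH_INBOUND_ALLOC_BUFS);
	if (ret)
		return ret;

	/*
	 * An mhi_net-style client would instead pass 0 here and keep queuing
	 * its own skbs (e.g. via mhi_queue_skb()) from its RX refill path.
	 */
	return 0;
}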
Currently, the MHI controller driver defines which channels should have
their inbound buffers allocated and queued. But ideally, this is something
that should be decided by the MHI device driver instead, which actually
deals with those buffers.

Add a flag parameter to mhi_prepare_for_transfer allowing the caller to
specify whether buffers have to be allocated and queued by the MHI stack.

Keep the auto_queue flag for now; it should be removed at some point.

Signed-off-by: Loic Poulain <loic.poulain@linaro.org>
---
 drivers/bus/mhi/core/internal.h | 2 +-
 drivers/bus/mhi/core/main.c     | 11 ++++++++---
 drivers/net/mhi/net.c           | 2 +-
 include/linux/mhi.h             | 12 +++++++++++-
 net/qrtr/mhi.c                  | 2 +-
 5 files changed, 22 insertions(+), 7 deletions(-)

--
2.7.4
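For reference, the auto_queue flag kept "for now" is the per-channel knob a controller driver sets in its channel configuration today. The rough sketch below is illustrative only: the field names follow struct mhi_channel_config in include/linux/mhi.h, but the channel number, ring size, and other values are made up. It shows the controller-side setting that the new per-client flag would eventually supersede.

/* Illustrative controller-side downlink channel entry (values assumed) */
static const struct mhi_channel_config example_ipcr_dl_chan = {
	.name = "IPCR",
	.num = 21,
	.num_elements = 8,
	.event_ring = 0,
	.dir = DMA_FROM_DEVICE,
	.ee_mask = BIT(MHI_EE_AMSS),
	.auto_queue = true,	/* today: the controller decides to pre-allocate RX buffers */
};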