Message ID: 20190530182039.4945-1-ivan.khoronzhuk@linaro.org
Series: net: ethernet: ti: cpsw: Add XDP support
Hi Ivan,

From the code snippets below, it looks like you allocate only one
page_pool and share it with several RX-queues. As I don't have the full
context and don't know this driver, I might be wrong?

To be clear, a page_pool object is needed per RX-queue, as it is
accessing a small RX page cache (which is protected by NAPI/softirq).

On Thu, 30 May 2019 21:20:39 +0300
Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:

> @@ -1404,6 +1711,14 @@ static int cpsw_ndo_open(struct net_device *ndev)
>  		enable_irq(cpsw->irqs_table[0]);
>  	}
>
> +	pool_size = cpdma_get_num_rx_descs(cpsw->dma);
> +	cpsw->page_pool = cpsw_create_page_pool(cpsw, pool_size);
> +	if (IS_ERR(cpsw->page_pool)) {
> +		ret = PTR_ERR(cpsw->page_pool);
> +		cpsw->page_pool = NULL;
> +		goto err_cleanup;
> +	}

> @@ -675,10 +742,33 @@ int cpsw_set_ringparam(struct net_device *ndev,
>  	if (cpsw->usage_count)
>  		cpdma_chan_split_pool(cpsw->dma);
>
> +	for (i = 0; i < cpsw->data.slaves; i++) {
> +		struct net_device *ndev = cpsw->slaves[i].ndev;
> +
> +		if (!(ndev && netif_running(ndev)))
> +			continue;
> +
> +		cpsw_xdp_unreg_rxqs(netdev_priv(ndev));
> +	}
> +
> +	page_pool_destroy(cpsw->page_pool);
> +	cpsw->page_pool = pool;
> +

> +void cpsw_xdp_unreg_rxqs(struct cpsw_priv *priv)
> +{
> +	struct cpsw_common *cpsw = priv->cpsw;
> +	int i;
> +
> +	for (i = 0; i < cpsw->rx_ch_num; i++)
> +		xdp_rxq_info_unreg(&priv->xdp_rxq[i]);
> +}

> +int cpsw_xdp_reg_rxq(struct cpsw_priv *priv, int ch)
> +{
> +	struct xdp_rxq_info *xdp_rxq = &priv->xdp_rxq[ch];
> +	struct cpsw_common *cpsw = priv->cpsw;
> +	int ret;
> +
> +	ret = xdp_rxq_info_reg(xdp_rxq, priv->ndev, ch);
> +	if (ret)
> +		goto err_cleanup;
> +
> +	ret = xdp_rxq_info_reg_mem_model(xdp_rxq, MEM_TYPE_PAGE_POOL,
> +					 cpsw->page_pool);
> +	if (ret)
> +		goto err_cleanup;
> +
> +	return 0;

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
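The register-then-unwind pattern in the quoted cpsw_xdp_reg_rxq() and cpsw_ndo_open() hunks (each step either succeeds or `goto`-unwinds everything done so far) can be shown outside the kernel. Below is a minimal user-space sketch: the struct and the `fake_*` functions are invented stand-ins for `xdp_rxq_info_reg()` / `xdp_rxq_info_reg_mem_model()` / `xdp_rxq_info_unreg()`, not the real APIs.

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_CH 4

/* Invented stand-in for struct xdp_rxq_info. */
struct fake_rxq {
	bool registered;     /* the rxq-register step succeeded   */
	bool mem_model_set;  /* the mem-model step succeeded too  */
};

static struct fake_rxq rxqs[NUM_CH];

/* Test hook: make registration fail for one channel (-1 = never). */
static int fail_at = -1;

static int fake_rxq_reg(struct fake_rxq *q, int ch)
{
	if (ch == fail_at)
		return -1;
	q->registered = true;
	return 0;
}

static int fake_reg_mem_model(struct fake_rxq *q)
{
	q->mem_model_set = true;
	return 0;
}

static void fake_rxq_unreg(struct fake_rxq *q)
{
	q->registered = false;
	q->mem_model_set = false;
}

/* Mirrors cpsw_xdp_reg_rxq(): register the queue, then its memory
 * model, unwinding the first step if the second fails. */
static int reg_one_rxq(int ch)
{
	struct fake_rxq *q = &rxqs[ch];
	int ret;

	ret = fake_rxq_reg(q, ch);
	if (ret)
		return ret;

	ret = fake_reg_mem_model(q);
	if (ret)
		goto err_unreg;

	return 0;

err_unreg:
	fake_rxq_unreg(q);
	return ret;
}

/* Register all channels; on failure, unregister the ones done so
 * far, the way an err_cleanup path in ndo_open() would. */
int reg_all_rxqs(void)
{
	int ch, ret;

	for (ch = 0; ch < NUM_CH; ch++) {
		ret = reg_one_rxq(ch);
		if (ret)
			goto err_cleanup;
	}
	return 0;

err_cleanup:
	while (--ch >= 0)
		fake_rxq_unreg(&rxqs[ch]);
	return ret;
}
```

The point of the pattern is that a caller never sees a half-registered queue: either every step took effect, or none did.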
On Fri, May 31, 2019 at 05:46:43PM +0200, Jesper Dangaard Brouer wrote:

Hi Jesper,

> Hi Ivan,
>
> From the code snippets below, it looks like you allocate only one
> page_pool and share it with several RX-queues. As I don't have the
> full context and don't know this driver, I might be wrong?
>
> To be clear, a page_pool object is needed per RX-queue, as it is
> accessing a small RX page cache (which is protected by NAPI/softirq).

There is one RX interrupt and one RX NAPI for all rx channels.

> [...]

-- 
Regards,
Ivan Khoronzhuk
On Fri, 31 May 2019 20:03:33 +0300
Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:

> Probably it's not a good example for others of how it should be used;
> it's not a big problem to move it to separate pools... I don't even
> remember why I decided to use a shared pool, there were some more
> reasons... need to search the history.

Using a shared pool makes it a lot harder to solve the issue I'm
currently working on: handling/waiting for in-flight frames to complete
before removing the mem ID from the (r)hashtable lookup.

I have working code that basically removes page_pool_destroy() from the
public API and instead lets xdp_rxq_info_unreg() call it when the
in-flight count reaches zero (and delays fully removing the mem ID).

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
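The deferred-destroy scheme Jesper describes (the pool is only really torn down once the in-flight count reaches zero) boils down to simple reference counting. The following is an illustration of the idea only; the struct and function names are invented, and the real kernel implementation lives in the xdp_rxq_info / mem-ID layer rather than in the driver.

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-in for a page_pool with in-flight accounting. */
struct fake_pool {
	int  inflight;        /* pages handed out, not yet returned   */
	bool destroy_pending; /* unreg happened while pages in flight */
	bool destroyed;       /* resources actually released          */
};

static void pool_release(struct fake_pool *p)
{
	/* Real code would free pages and remove the mem ID here. */
	p->destroyed = true;
}

/* Driver takes a page for an RX descriptor: bump the count. */
void pool_page_get(struct fake_pool *p)
{
	p->inflight++;
}

/* A page comes back (possibly long after unreg): drop the count
 * and, if a destroy is pending and this was the last page, finish
 * the teardown now. */
void pool_page_put(struct fake_pool *p)
{
	if (--p->inflight == 0 && p->destroy_pending)
		pool_release(p);
}

/* What the unregister path would trigger: destroy immediately if
 * nothing is in flight, otherwise mark the destroy as pending. */
void pool_unreg(struct fake_pool *p)
{
	if (p->inflight == 0)
		pool_release(p);
	else
		p->destroy_pending = true;
}
```

With one pool per RX-queue, each queue's unregister only has to wait for its own in-flight pages; a shared pool forces the teardown to wait for every queue at once, which is the complication Jesper is pointing at.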
On Fri, May 31, 2019 at 10:08:03PM +0000, Saeed Mahameed wrote:
> On Fri, 2019-05-31 at 20:03 +0300, Ivan Khoronzhuk wrote:
>> On Fri, May 31, 2019 at 06:32:41PM +0200, Jesper Dangaard Brouer wrote:
>>> On Fri, 31 May 2019 19:25:24 +0300
>>> Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>>>
>>>> On Fri, May 31, 2019 at 05:46:43PM +0200, Jesper Dangaard Brouer wrote:
>>>>> From the code snippets below, it looks like you allocate only one
>>>>> page_pool and share it with several RX-queues. As I don't have
>>>>> the full context and don't know this driver, I might be wrong?
>>>>>
>>>>> To be clear, a page_pool object is needed per RX-queue, as it is
>>>>> accessing a small RX page cache (which is protected by
>>>>> NAPI/softirq).
>>>>
>>>> There is one RX interrupt and one RX NAPI for all rx channels.
>>>
>>> So, what are you saying?
>>>
>>> You _are_ sharing the page_pool between several RX-channels, but it
>>> is safe because this hardware only has one RX interrupt + NAPI
>>> instance??
>>
>> I can miss something, but in the case of cpsw it technically means:
>> 1) RX interrupts are disabled while NAPI is scheduled, not for a
>>    particular CPU or channel, but for the whole cpsw module.
>> 2) RX channels are handled one by one by priority.
>
> Hi Ivan, I got a silly question..
>
> What is the reason behind having multiple RX rings and one CPU/NAPI
> handling all of them? Priority? How do you prioritize?

Several reasons. First, from what I know, it was meant to be able to
serve several CPUs/NAPIs, but because of an erratum on some SoCs (or on
all of them) that was discarded; still, the idea was that it can.
Second, it uses the same davinci_cpdma API as the tx channels, which
can be rate limited; that API is used not only by cpsw but also by
another driver, so it can't be modified easily, and there is no reason
to. And third, the hardware has the ability to steer some filtered
traffic to rx queues and can potentially be configured with ethtool
ntuples or so, but that's not implemented... yet.

>> 3) After all of them are handled and there is no more budget,
>>    interrupts are enabled.
>> 4) If a page is returned to the pool within NAPI, there are no races,
>>    as it's returned protected by softirq. If it's returned outside
>>    softirq, it's protected by the producer lock of the ring.
>>
>> Probably it's not a good example for others of how it should be used;
>> it's not a big problem to move it to separate pools... I don't even
>> remember why I decided to use a shared pool, there were some more
>> reasons... need to search the history.

-- 
Regards,
Ivan Khoronzhuk
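Ivan's points 1)-4) describe one NAPI poll draining several rings in priority order under a single shared budget, with the one RX interrupt re-armed only when the budget was not exhausted. A user-space sketch of that control flow (the ring model and names are invented, not the cpsw code):

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_CH 3

/* Invented model: each channel is just a count of pending packets,
 * with index 0 being the highest-priority ring. */
static int pending[NUM_CH];
static bool irq_enabled;

/* Channels are drained one by one in priority order, all sharing a
 * single budget.  The RX interrupt is re-enabled only if the budget
 * was not exhausted, i.e. every ring is empty and NAPI is done. */
int napi_poll(int budget)
{
	int done = 0;

	for (int ch = 0; ch < NUM_CH && done < budget; ch++) {
		while (pending[ch] > 0 && done < budget) {
			pending[ch]--;   /* "process" one packet */
			done++;
		}
	}

	if (done < budget)
		irq_enabled = true;  /* all rings empty: re-arm the IRQ */

	return done;
}
```

Because everything above runs in the single NAPI/softirq context, pages returned to a pool from here need no extra locking, which is the safety argument Ivan makes in point 4).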