Message ID | 20210330231528.546284-2-alobakin@pm.me |
---|---|
State | Superseded |
Series | [bpf-next,1/2] xsk: speed-up generic full-copy xmit |
```diff
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index a71ed664da0a..41f8f21b3348 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -517,14 +517,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
 			return ERR_PTR(err);
 
 		skb_reserve(skb, hr);
-		skb_put(skb, len);
 
 		buffer = xsk_buff_raw_get_data(xs->pool, desc->addr);
-		err = skb_store_bits(skb, 0, buffer, len);
-		if (unlikely(err)) {
-			kfree_skb(skb);
-			return ERR_PTR(err);
-		}
+		memcpy(__skb_put(skb, len), buffer, ALIGN(len, sizeof(long)));
 	}
 
 	skb->dev = dev;
```
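For reference, __skb_put() is the no-checks sibling of skb_put(). The sketch below paraphrases the helper from include/linux/skbuff.h (simplified; the real one also carries a linear-skb assertion) to show why it pairs naturally with a plain memcpy() on this path:

```c
/* Simplified paraphrase of __skb_put() from include/linux/skbuff.h.
 * It extends the skb's data area by @len and returns a pointer to the
 * start of the newly added room, i.e. a ready-made memcpy() destination.
 * skb_put() would additionally check for linear-area overflow, and
 * skb_store_bits() is an out-of-line routine that can also walk paged
 * fragments; both are redundant for a freshly allocated, fully linear
 * skb whose linear space was sized to fit 'len'.
 */
static inline void *__skb_put(struct sk_buff *skb, unsigned int len)
{
	void *tmp = skb_tail_pointer(skb);

	skb->tail += len;
	skb->len  += len;
	return tmp;
}
```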
A few facts are known for sure at the moment of copying:
 - the allocated skb is fully linear;
 - its linear space is long enough to hold the full buffer data.

So, the out-of-line skb_put() and skb_store_bits(), along with the check
of the latter's return code, can be replaced with a plain
memcpy(__skb_put()) with no loss of functionality. Also align memcpy()'s
len to sizeof(long) to improve its performance.

Signed-off-by: Alexander Lobakin <alobakin@pm.me>
---
 net/xdp/xsk.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

-- 
2.31.1
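On the ALIGN(len, sizeof(long)) part: rounding the copy length up to a multiple of sizeof(long) lets memcpy() work in whole machine words, at the price of copying up to sizeof(long) - 1 bytes past len. The patch relies on both sides having room past len for this to be safe (the skb is allocated with extra head- and tailroom and alignment padding, and the source is a umem chunk). A minimal user-space sketch of the rounding itself, with ALIGN() reimplemented here following the kernel's power-of-two semantics and assuming sizeof(long) == 8:

```c
/* Minimal user-space demonstration of the ALIGN() rounding used in the
 * patch; the macro mirrors the kernel's power-of-two ALIGN() and the
 * output assumes a 64-bit system where sizeof(long) == 8.
 */
#include <stdio.h>

#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long lens[] = { 57, 60, 64, 1500 };

	for (unsigned int i = 0; i < sizeof(lens) / sizeof(lens[0]); i++)
		printf("len %4lu -> copy %4lu bytes\n",
		       lens[i], ALIGN(lens[i], sizeof(long)));

	/* Prints:
	 * len   57 -> copy   64 bytes
	 * len   60 -> copy   64 bytes
	 * len   64 -> copy   64 bytes
	 * len 1500 -> copy 1504 bytes
	 */
	return 0;
}
```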