This series is based on the exceptional generic zerocopy xmit logic
initially introduced by Xuan Zhuo. It extends that work to cover all
the sane drivers, not only the ones that are capable of xmitting skbs
with no linear space.
The first patch is a random while-we-are-here improvement over the
full-copy path, and the second is the main course. See the individual
commit messages for the details.
The original (full-zerocopy) path is still here and still generally
faster, but for now it seems like virtio_net will remain the only
user of it, at least for a considerable period of time.
From v1 [0]:
- don't add a whole SMP_CACHE_BYTES because of only two bytes
(NET_IP_ALIGN);
- switch to zerocopy if the frame is 129 bytes or longer, not 128.
128-byte frames still fit into kmalloc-512, while a zerocopy skb
always takes kmalloc-1024 -> it can potentially be slower at this
frame size.
[0] https://lore.kernel.org/netdev/[email protected]
Alexander Lobakin (2):
xsk: speed-up generic full-copy xmit
xsk: introduce generic almost-zerocopy xmit
net/xdp/xsk.c | 32 ++++++++++++++++++++++----------
1 file changed, 22 insertions(+), 10 deletions(-)
--
Well, this is untested. I currently don't have access to my setup
and am tied up with moving to another country. Since I don't know
for sure when I'll next get back to working on the kernel, I found
it worthwhile to publish this now -- if any further changes are
required while I'm out of sight, maybe someone could carry on and
make another revision (I'm still here for any questions, comments,
reviews and improvements till the end of this week).
But this *should* work with all the sane drivers. If a particular
one can't handle this, it's likely broken. Any tests are highly
appreciated. Thanks!
--
2.31.1
There are a few facts known for sure at the time of copying:
- the allocated skb is fully linear;
- its linear space is long enough to hold the full buffer data.
So, the out-of-line skb_put(), skb_store_bits() and the check for
its return code can be replaced with a plain memcpy(__skb_put())
with no loss.
Also align memcpy()'s len to sizeof(long) to improve its performance.
Signed-off-by: Alexander Lobakin <[email protected]>
---
net/xdp/xsk.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index a71ed664da0a..41f8f21b3348 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -517,14 +517,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
return ERR_PTR(err);
skb_reserve(skb, hr);
- skb_put(skb, len);
buffer = xsk_buff_raw_get_data(xs->pool, desc->addr);
- err = skb_store_bits(skb, 0, buffer, len);
- if (unlikely(err)) {
- kfree_skb(skb);
- return ERR_PTR(err);
- }
+ memcpy(__skb_put(skb, len), buffer, ALIGN(len, sizeof(long)));
}
skb->dev = dev;
--
2.31.1
The reasons behind IFF_TX_SKB_NO_LINEAR are:
- most drivers expect skb with the linear space;
- most drivers expect hard header in the linear space;
- many drivers need some headroom to insert custom headers
and/or pull headers from frags (pskb_may_pull() etc.).
With some bits of overhead, we can satisfy all of this without
inducing a full copy of the buffer data.
Now frames bigger than 128 bytes (a threshold chosen to mitigate the
allocation overhead) are also built via the zerocopy path, provided
the device and driver support S/G xmit, which is almost always true.
We allocate 256* additional bytes for the skb linear space and pull
the hard header there (aligning its end to 16 bytes on platforms with
NET_IP_ALIGN). The rest of the buffer data is just pinned as frags.
A room of at least 240 bytes is left for any driver needs.
We could just pass the buffer to eth_get_headlen() to minimize
allocation overhead and be able to copy all the headers into the
linear space, but the flow dissection procedure tends to be more
expensive than the current approach.
The IFF_TX_SKB_NO_LINEAR path remains unchanged and is still
relevant and generally faster.
* The value of 256 bytes is kinda "magic": it can be found in lots
of drivers and places in core code, and it is believed that 256
bytes are enough to store the headers of any frame.
Cc: Xuan Zhuo <[email protected]>
Signed-off-by: Alexander Lobakin <[email protected]>
---
net/xdp/xsk.c | 25 +++++++++++++++++++++----
1 file changed, 21 insertions(+), 4 deletions(-)
diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
index 41f8f21b3348..1d241f87422c 100644
--- a/net/xdp/xsk.c
+++ b/net/xdp/xsk.c
@@ -445,6 +445,9 @@ static void xsk_destruct_skb(struct sk_buff *skb)
sock_wfree(skb);
}
+#define XSK_SKB_HEADLEN 256
+#define XSK_COPY_THRESHOLD (XSK_SKB_HEADLEN / 2)
+
static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
struct xdp_desc *desc)
{
@@ -452,13 +455,21 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
u32 hr, len, ts, offset, copy, copied;
struct sk_buff *skb;
struct page *page;
+ bool need_pull;
void *buffer;
int err, i;
u64 addr;
hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(xs->dev->needed_headroom));
+ len = hr;
+
+ need_pull = !(xs->dev->priv_flags & IFF_TX_SKB_NO_LINEAR);
+ if (need_pull) {
+ len += XSK_SKB_HEADLEN;
+ hr += NET_IP_ALIGN;
+ }
- skb = sock_alloc_send_skb(&xs->sk, hr, 1, &err);
+ skb = sock_alloc_send_skb(&xs->sk, len, 1, &err);
if (unlikely(!skb))
return ERR_PTR(err);
@@ -488,6 +499,11 @@ static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
skb->data_len += len;
skb->truesize += ts;
+ if (need_pull && unlikely(!__pskb_pull_tail(skb, ETH_HLEN))) {
+ kfree_skb(skb);
+ return ERR_PTR(-ENOMEM);
+ }
+
refcount_add(ts, &xs->sk.sk_wmem_alloc);
return skb;
@@ -498,19 +514,20 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
{
struct net_device *dev = xs->dev;
struct sk_buff *skb;
+ u32 len = desc->len;
- if (dev->priv_flags & IFF_TX_SKB_NO_LINEAR) {
+ if ((dev->priv_flags & IFF_TX_SKB_NO_LINEAR) ||
+ (len > XSK_COPY_THRESHOLD && likely(dev->features & NETIF_F_SG))) {
skb = xsk_build_skb_zerocopy(xs, desc);
if (IS_ERR(skb))
return skb;
} else {
- u32 hr, tr, len;
void *buffer;
+ u32 hr, tr;
int err;
hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
tr = dev->needed_tailroom;
- len = desc->len;
skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
if (unlikely(!skb))
--
2.31.1
On Wed, Mar 31, 2021 at 2:27 PM Alexander Lobakin <[email protected]> wrote:
>
> This series is based on the exceptional generic zerocopy xmit logics
> initially introduced by Xuan Zhuo. It extends it the way that it
> could cover all the sane drivers, not only the ones that are capable
> of xmitting skbs with no linear space.
>
> The first patch is a random while-we-are-here improvement over
> full-copy path, and the second is the main course. See the individual
> commit messages for the details.
>
> The original (full-zerocopy) path is still here and still generally
> faster, but for now it seems like virtio_net will remain the only
> user of it, at least for a considerable period of time.
>
> From v1 [0]:
> - don't add a whole SMP_CACHE_BYTES because of only two bytes
> (NET_IP_ALIGN);
> - switch to zerocopy if the frame is 129 bytes or longer, not 128.
> 128 still fit to kmalloc-512, while a zerocopy skb is always
> kmalloc-1024 -> can potentially be slower on this frame size.
>
> [0] https://lore.kernel.org/netdev/[email protected]
>
> Alexander Lobakin (2):
> xsk: speed-up generic full-copy xmit
I took both your patches for a spin on my machine, and for the first
one I do see a small but consistent drop in performance. I thought it
would go the other way, but it does not, so let us put this one on
the shelf for now.
> xsk: introduce generic almost-zerocopy xmit
This one wreaked havoc on my machine ;-). The performance dropped by
75% for packets larger than 128 bytes when the new scheme kicks in.
Checking with perf top, it seems that we spend much more time
executing the sendmsg syscall. Analyzing some more:
$ sudo bpftrace -e 'kprobe:__sys_sendto { @calls = @calls + 1; }
interval:s:1 {printf("calls/sec: %d\n", @calls); @calls = 0;}'
Attaching 2 probes...
calls/sec: 1539509 with your patch compared to
calls/sec: 105796 without your patch
The application spends a lot more time trying to get the kernel to
send new packets, but the kernel replies with "have not completed the
outstanding ones, so come back later" = EAGAIN. It seems like the
transmission takes longer when the skbs have fragments, but I have
not examined this any further. Did you get a speed-up?
> net/xdp/xsk.c | 32 ++++++++++++++++++++++----------
> 1 file changed, 22 insertions(+), 10 deletions(-)
>
> --
> Well, this is untested. I currently don't have an access to my setup
> and is bound by moving to another country, but as I don't know for
> sure at the moment when I'll get back to work on the kernel next time,
> I found it worthy to publish this now -- if any further changes will
> be required when I already will be out-of-sight, maybe someone could
> carry on to make a another revision and so on (I'm still here for any
> questions, comments, reviews and improvements till the end of this
> week).
> But this *should* work with all the sane drivers. If a particular
> one won't handle this, it's likely ill. Any tests are highly
> appreciated. Thanks!
> --
> 2.31.1
>
>
On Tue, Apr 13, 2021 at 3:49 AM Xuan Zhuo <[email protected]> wrote:
>
> On Mon, 12 Apr 2021 16:13:12 +0200, Magnus Karlsson <[email protected]> wrote:
> > On Wed, Mar 31, 2021 at 2:27 PM Alexander Lobakin <[email protected]> wrote:
> > >
> > > This series is based on the exceptional generic zerocopy xmit logics
> > > initially introduced by Xuan Zhuo. It extends it the way that it
> > > could cover all the sane drivers, not only the ones that are capable
> > > of xmitting skbs with no linear space.
> > >
> > > The first patch is a random while-we-are-here improvement over
> > > full-copy path, and the second is the main course. See the individual
> > > commit messages for the details.
> > >
> > > The original (full-zerocopy) path is still here and still generally
> > > faster, but for now it seems like virtio_net will remain the only
> > > user of it, at least for a considerable period of time.
> > >
> > > From v1 [0]:
> > > - don't add a whole SMP_CACHE_BYTES because of only two bytes
> > > (NET_IP_ALIGN);
> > > - switch to zerocopy if the frame is 129 bytes or longer, not 128.
> > > 128 still fit to kmalloc-512, while a zerocopy skb is always
> > > kmalloc-1024 -> can potentially be slower on this frame size.
> > >
> > > [0] https://lore.kernel.org/netdev/[email protected]
> > >
> > > Alexander Lobakin (2):
> > > xsk: speed-up generic full-copy xmit
> >
> > I took both your patches for a spin on my machine and for the first
> > one I do see a small but consistent drop in performance. I thought it
> > would go the other way, but it does not so let us put this one on the
> > shelf for now.
> >
> > > xsk: introduce generic almost-zerocopy xmit
> >
> > This one wreaked havoc on my machine ;-). The performance dropped with
> > 75% for packets larger than 128 bytes when the new scheme kicks in.
> > Checking with perf top, it seems that we spend much more time
> > executing the sendmsg syscall. Analyzing some more:
> >
> > $ sudo bpftrace -e 'kprobe:__sys_sendto { @calls = @calls + 1; }
> > interval:s:1 {printf("calls/sec: %d\n", @calls); @calls = 0;}'
> > Attaching 2 probes...
> > calls/sec: 1539509 with your patch compared to
> >
> > calls/sec: 105796 without your patch
> >
> > The application spends a lot of more time trying to get the kernel to
> > send new packets, but the kernel replies with "have not completed the
> > outstanding ones, so come back later" = EAGAIN. Seems like the
> > transmission takes longer when the skbs have fragments, but I have not
> > examined this any further. Did you get a speed-up?
>
> Regarding this solution, I actually tested it on my mlx5 network card, but the
> performance was severely degraded, so I did not continue with this solution. I
> guess it might have something to do with the physical network card. We could try
> other network cards.
I tried it on a third card and got a 40% degradation, so let us scrap
this idea. It should stay optional, as it is today, so that the
(software) drivers that benefit from it can turn it on explicitly.
> links: https://www.spinics.net/lists/netdev/msg710918.html
>
> Thanks.
>
> >
> > > net/xdp/xsk.c | 32 ++++++++++++++++++++++----------
> > > 1 file changed, 22 insertions(+), 10 deletions(-)
> > >
> > > --
> > > Well, this is untested. I currently don't have an access to my setup
> > > and is bound by moving to another country, but as I don't know for
> > > sure at the moment when I'll get back to work on the kernel next time,
> > > I found it worthy to publish this now -- if any further changes will
> > > be required when I already will be out-of-sight, maybe someone could
> > > carry on to make a another revision and so on (I'm still here for any
> > > questions, comments, reviews and improvements till the end of this
> > > week).
> > > But this *should* work with all the sane drivers. If a particular
> > > one won't handle this, it's likely ill. Any tests are highly
> > > appreciated. Thanks!
> > > --
> > > 2.31.1
> > >
> > >
From: Magnus Karlsson <[email protected]>
Date: Tue, 13 Apr 2021 09:14:02 +0200
Hi!
I've finally got a kinda comfy setup after moving to another country
and can now continue working on patches and stuff.
> On Tue, Apr 13, 2021 at 3:49 AM Xuan Zhuo <[email protected]> wrote:
> >
> > On Mon, 12 Apr 2021 16:13:12 +0200, Magnus Karlsson <[email protected]> wrote:
> > > On Wed, Mar 31, 2021 at 2:27 PM Alexander Lobakin <[email protected]> wrote:
> > > >
> > > > This series is based on the exceptional generic zerocopy xmit logics
> > > > initially introduced by Xuan Zhuo. It extends it the way that it
> > > > could cover all the sane drivers, not only the ones that are capable
> > > > of xmitting skbs with no linear space.
> > > >
> > > > The first patch is a random while-we-are-here improvement over
> > > > full-copy path, and the second is the main course. See the individual
> > > > commit messages for the details.
> > > >
> > > > The original (full-zerocopy) path is still here and still generally
> > > > faster, but for now it seems like virtio_net will remain the only
> > > > user of it, at least for a considerable period of time.
> > > >
> > > > From v1 [0]:
> > > > - don't add a whole SMP_CACHE_BYTES because of only two bytes
> > > > (NET_IP_ALIGN);
> > > > - switch to zerocopy if the frame is 129 bytes or longer, not 128.
> > > > 128 still fit to kmalloc-512, while a zerocopy skb is always
> > > > kmalloc-1024 -> can potentially be slower on this frame size.
> > > >
> > > > [0] https://lore.kernel.org/netdev/[email protected]
> > > >
> > > > Alexander Lobakin (2):
> > > > xsk: speed-up generic full-copy xmit
> > >
> > > I took both your patches for a spin on my machine and for the first
> > > one I do see a small but consistent drop in performance. I thought it
> > > would go the other way, but it does not so let us put this one on the
> > > shelf for now.
This is kinda strange, as the solution is pretty straightforward.
But sure, if the performance dropped with this one, it should not
be taken.
I might have a look at it later.
> > > > xsk: introduce generic almost-zerocopy xmit
> > >
> > > This one wreaked havoc on my machine ;-). The performance dropped with
> > > 75% for packets larger than 128 bytes when the new scheme kicks in.
> > > Checking with perf top, it seems that we spend much more time
> > > executing the sendmsg syscall. Analyzing some more:
> > >
> > > $ sudo bpftrace -e 'kprobe:__sys_sendto { @calls = @calls + 1; }
> > > interval:s:1 {printf("calls/sec: %d\n", @calls); @calls = 0;}'
> > > Attaching 2 probes...
> > > calls/sec: 1539509 with your patch compared to
> > >
> > > calls/sec: 105796 without your patch
> > >
> > > The application spends a lot of more time trying to get the kernel to
> > > send new packets, but the kernel replies with "have not completed the
> > > outstanding ones, so come back later" = EAGAIN. Seems like the
> > > transmission takes longer when the skbs have fragments, but I have not
> > > examined this any further. Did you get a speed-up?
> >
> > Regarding this solution, I actually tested it on my mlx5 network card, but the
> > performance was severely degraded, so I did not continue this solution later. I
> > guess it might have something to do with the physical network card. We can try
> > other network cards.
>
> I tried it on a third card and got a 40% degradation, so let us scrap
> this idea. It should stay optional as it is today as the (software)
> drivers that benefit from this can turn it on explicitly.
Thank you guys a lot for the testing!
I think the main reason is the DMA mapping of the one additional
frag (for just 14 bytes of MAC header, which is excessive). It can
take a lot of CPU cycles, especially when the device is behind an
IOMMU, and it seems like memcpying is faster here.
Moreover, if Xuan already tested this as one of the steps towards
his full-zerocopy approach and found it to be a bad idea, it should
not go further.
So I'm burying this.
> > links: https://www.spinics.net/lists/netdev/msg710918.html
> >
> > Thanks.
> >
> > >
> > > > net/xdp/xsk.c | 32 ++++++++++++++++++++++----------
> > > > 1 file changed, 22 insertions(+), 10 deletions(-)
> > > >
> > > > --
> > > > Well, this is untested. I currently don't have an access to my setup
> > > > and is bound by moving to another country, but as I don't know for
> > > > sure at the moment when I'll get back to work on the kernel next time,
> > > > I found it worthy to publish this now -- if any further changes will
> > > > be required when I already will be out-of-sight, maybe someone could
> > > > carry on to make a another revision and so on (I'm still here for any
> > > > questions, comments, reviews and improvements till the end of this
> > > > week).
> > > > But this *should* work with all the sane drivers. If a particular
> > > > one won't handle this, it's likely ill. Any tests are highly
> > > > appreciated. Thanks!
> > > > --
> > > > 2.31.1
Thanks,
Al