Instead of unconditionally allocating a new skbuff_head and
unconditionally flushing flush_skb_cache, reuse the heads already
queued up for flushing, if there are any.

skbuff_heads stored in flush_skb_cache are already unreferenced
from any pages or extensions and almost ready for use. We perform
zeroing in __napi_alloc_skb() anyway, regardless of where our
skbuff_head came from.

Signed-off-by: Alexander Lobakin <[email protected]>
---
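For illustration, the reuse logic can be sketched as a standalone
userspace program. The names napi_decache() and the struct layout below
are simplified stand-ins, not the kernel's exact definitions, and
malloc() stands in for kmem_cache_alloc():

```c
/* Userspace sketch of the decache idea: pop a head queued for
 * flushing if one is available, otherwise fall back to a fresh
 * allocation. Illustrative only; the real code is in
 * net/core/skbuff.c.
 */
#include <stddef.h>
#include <stdlib.h>

#define NAPI_SKB_CACHE_SIZE 64

struct sk_buff { int dummy; };	/* stand-in for the real sk_buff */

struct napi_alloc_cache {
	struct sk_buff *flush_skb_cache[NAPI_SKB_CACHE_SIZE];
	unsigned int flush_skb_count;
};

static struct sk_buff *napi_decache(struct napi_alloc_cache *nc)
{
	/* Reuse a head queued for flushing: it is already unreferenced
	 * from pages/extensions, and the caller zeroes it anyway. */
	if (nc->flush_skb_count)
		return nc->flush_skb_cache[--nc->flush_skb_count];

	/* Cache empty: allocate a new head. */
	return malloc(sizeof(struct sk_buff));
}
```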
net/core/skbuff.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 3c904c29efbb..0e8c597ff6ce 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -487,6 +487,9 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
 static struct sk_buff *__napi_decache_skb(struct napi_alloc_cache *nc)
 {
+	if (nc->flush_skb_count)
+		return nc->flush_skb_cache[--nc->flush_skb_count];
+
 	return kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);
 }
--
2.30.0