2021-04-17 00:09:35

by Xie He

Subject: [PATCH net] net/core/dev.c: Ensure pfmemalloc skbs are correctly handled when receiving

When an skb is allocated by "__netdev_alloc_skb" in "net/core/skbuff.c",
if "sk_memalloc_socks()" is true and there is not sufficient memory, the
skb is allocated from the emergency memory reserves. Such skbs are
called pfmemalloc skbs.

pfmemalloc skbs must be specially handled in "net/core/dev.c" when
receiving. They must NOT be delivered to the target protocol if
"skb_pfmemalloc_protocol(skb)" is false.

However, if "sk_memalloc_socks()" becomes false after a pfmemalloc skb
is allocated but before the skb reaches "__netif_receive_skb", the skb
will be handled by "__netif_receive_skb" as a normal skb. It is then
delivered to the target protocol even when
"skb_pfmemalloc_protocol(skb)" is false.

This patch fixes the problem by ensuring that "__netif_receive_skb"
handles all pfmemalloc skbs as pfmemalloc skbs.

"__netif_receive_skb_list" has the same problem; this patch fixes it as
well.

Fixes: b4b9e3558508 ("netvm: set PF_MEMALLOC as appropriate during SKB processing")
Cc: Mel Gorman <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Neil Brown <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Jiri Slaby <[email protected]>
Cc: Mike Christie <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Eric Dumazet <[email protected]>
Cc: Sebastian Andrzej Siewior <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Andrew Morton <[email protected]>
Signed-off-by: Xie He <[email protected]>
---
net/core/dev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/dev.c b/net/core/dev.c
index 1f79b9aa9a3f..3e6b7879daef 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5479,7 +5479,7 @@ static int __netif_receive_skb(struct sk_buff *skb)
{
int ret;

- if (sk_memalloc_socks() && skb_pfmemalloc(skb)) {
+ if (skb_pfmemalloc(skb)) {
unsigned int noreclaim_flag;

/*
@@ -5507,7 +5507,7 @@ static void __netif_receive_skb_list(struct list_head *head)
bool pfmemalloc = false; /* Is current sublist PF_MEMALLOC? */

list_for_each_entry_safe(skb, next, head, list) {
- if ((sk_memalloc_socks() && skb_pfmemalloc(skb)) != pfmemalloc) {
+ if (skb_pfmemalloc(skb) != pfmemalloc) {
struct list_head sublist;

/* Handle the previous sublist */
--
2.27.0


2021-04-17 04:53:49

by Eric Dumazet

Subject: Re: [PATCH net] net/core/dev.c: Ensure pfmemalloc skbs are correctly handled when receiving

On Sat, Apr 17, 2021 at 2:08 AM Xie He <[email protected]> wrote:
>
> [...]

The race window is considered small enough that we prefer the code as
it is.

The reason we prefer the current code is that sk_memalloc_socks() is
implemented with a static key.

Trading a minor race condition for extra cycles on every received
packet is a serious concern.

What matters is a persistent condition that would _deplete_ memory, not
for a dozen packets but for thousands. Can you demonstrate such an
issue?