2023-10-19 12:01:32

by Abel Wu

Subject: [PATCH net v3 1/3] sock: Code cleanup on __sk_mem_raise_allocated()

Code cleanup for both better simplicity and readability.
No functional change intended.

Signed-off-by: Abel Wu <[email protected]>
Acked-by: Shakeel Butt <[email protected]>
---
net/core/sock.c | 22 ++++++++++++----------
1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/net/core/sock.c b/net/core/sock.c
index 16584e2dd648..4412c47466a7 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3041,17 +3041,19 @@ EXPORT_SYMBOL(sk_wait_data);
*/
int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
{
- bool memcg_charge = mem_cgroup_sockets_enabled && sk->sk_memcg;
+ struct mem_cgroup *memcg = mem_cgroup_sockets_enabled ? sk->sk_memcg : NULL;
struct proto *prot = sk->sk_prot;
- bool charged = true;
+ bool charged = false;
long allocated;

sk_memory_allocated_add(sk, amt);
allocated = sk_memory_allocated(sk);
- if (memcg_charge &&
- !(charged = mem_cgroup_charge_skmem(sk->sk_memcg, amt,
- gfp_memcg_charge())))
- goto suppress_allocation;
+
+ if (memcg) {
+ if (!mem_cgroup_charge_skmem(memcg, amt, gfp_memcg_charge()))
+ goto suppress_allocation;
+ charged = true;
+ }

/* Under limit. */
if (allocated <= sk_prot_mem_limits(sk, 0)) {
@@ -3106,8 +3108,8 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
*/
if (sk->sk_wmem_queued + size >= sk->sk_sndbuf) {
/* Force charge with __GFP_NOFAIL */
- if (memcg_charge && !charged) {
- mem_cgroup_charge_skmem(sk->sk_memcg, amt,
+ if (memcg && !charged) {
+ mem_cgroup_charge_skmem(memcg, amt,
gfp_memcg_charge() | __GFP_NOFAIL);
}
return 1;
@@ -3119,8 +3121,8 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)

sk_memory_allocated_sub(sk, amt);

- if (memcg_charge && charged)
- mem_cgroup_uncharge_skmem(sk->sk_memcg, amt);
+ if (charged)
+ mem_cgroup_uncharge_skmem(memcg, amt);

return 0;
}
--
2.37.3
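
For readers without the kernel tree at hand, the shape of the charging flow after this
cleanup can be sketched as a small standalone C program. The struct and helpers below
(try_charge_memcg(), uncharge_memcg(), raise_allocated()) are made-up stand-ins for
mem_cgroup_charge_skmem(), mem_cgroup_uncharge_skmem() and __sk_mem_raise_allocated();
only the memcg-pointer/charged control flow mirrors the patch, nothing else is kernel
code.

/* Standalone sketch of the post-cleanup charging flow (not kernel code). */
#include <stdbool.h>
#include <stdio.h>

struct mem_cgroup { long charged; };

static bool try_charge_memcg(struct mem_cgroup *memcg, int amt)
{
	/* The memcg charge always succeeds in this sketch. */
	memcg->charged += amt;
	return true;
}

static void uncharge_memcg(struct mem_cgroup *memcg, int amt)
{
	memcg->charged -= amt;
}

/* Mirrors the shape of __sk_mem_raise_allocated() after the cleanup:
 * resolve the memcg pointer once, remember whether the charge
 * succeeded, and uncharge only when 'charged' is set.
 */
static int raise_allocated(struct mem_cgroup *memcg, long *allocated,
			   long limit, int amt)
{
	bool charged = false;

	*allocated += amt;

	if (memcg) {
		if (!try_charge_memcg(memcg, amt))
			goto suppress_allocation;
		charged = true;
	}

	if (*allocated <= limit)
		return 1;	/* under limit, keep the charge */

suppress_allocation:
	*allocated -= amt;
	if (charged)
		uncharge_memcg(memcg, amt);
	return 0;
}

int main(void)
{
	struct mem_cgroup memcg = { 0 };
	long allocated = 0;

	printf("raise within limit: %d\n",
	       raise_allocated(&memcg, &allocated, 100, 10));
	printf("raise beyond limit: %d\n",
	       raise_allocated(&memcg, &allocated, 100, 1000));
	printf("allocated=%ld memcg charged=%ld\n", allocated, memcg.charged);
	return 0;
}

The first call stays under the limit and returns 1 with the charge kept; the second
exceeds it, so both the allocation and the memcg charge are rolled back and it returns 0.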


2023-10-19 12:01:36

by Abel Wu

Subject: [PATCH net v3 2/3] sock: Doc behaviors for pressure heuristics

There are now two accounting infrastructures for skmem, while the
heuristics in __sk_mem_raise_allocated() were actually introduced
before memcg was born.

Add some comments to clarify whether they can be applied to both
infrastructures or not.

Suggested-by: Shakeel Butt <[email protected]>
Signed-off-by: Abel Wu <[email protected]>
Acked-by: Shakeel Butt <[email protected]>
---
net/core/sock.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/net/core/sock.c b/net/core/sock.c
index 4412c47466a7..45841a5689b6 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -3069,7 +3069,14 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)
if (allocated > sk_prot_mem_limits(sk, 2))
goto suppress_allocation;

- /* guarantee minimum buffer size under pressure */
+ /* Guarantee minimum buffer size under pressure (either global
+ * or memcg) to make sure features described in RFC 7323 (TCP
+ * Extensions for High Performance) work properly.
+ *
+ * This rule does NOT apply when exceeding the global or memcg
+ * hard limit, or else a DoS attack could be mounted by spawning
+ * lots of sockets whose usage stays below the minimum buffer size.
+ */
if (kind == SK_MEM_RECV) {
if (atomic_read(&sk->sk_rmem_alloc) < sk_get_rmem0(sk, prot))
return 1;
@@ -3090,6 +3097,11 @@ int __sk_mem_raise_allocated(struct sock *sk, int size, int amt, int kind)

if (!sk_under_memory_pressure(sk))
return 1;
+
+ /* Try to be fair among all the sockets under global
+ * pressure by allowing the ones whose usage is below
+ * average to raise.
+ */
alloc = sk_sockets_allocated_read_positive(sk);
if (sk_prot_mem_limits(sk, 2) > alloc *
sk_mem_pages(sk->sk_wmem_queued +
--
2.37.3
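
The fairness heuristic documented in this patch compares the socket's projected usage,
scaled by the number of allocated sockets, against the hard limit: a socket using less
than its per-socket share may still raise under global pressure. A standalone
illustration follows; the page size, limits, socket counts and helper names below are
made-up example values, not kernel ones, and mem_pages() only approximates
sk_mem_pages().

/* Standalone illustration of the "below average usage" check (not kernel code). */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096L

/* Round a byte count up to whole pages, roughly like sk_mem_pages(). */
static long mem_pages(long bytes)
{
	return (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
}

/* Allowed to raise if this socket's projected usage, multiplied by the
 * number of sockets, stays below the hard limit (in pages).
 */
static bool below_average_usage(long hard_limit_pages, long nr_sockets,
				long socket_usage_bytes)
{
	return hard_limit_pages > nr_sockets * mem_pages(socket_usage_bytes);
}

int main(void)
{
	long hard_limit_pages = 4096;	/* stand-in for sk_prot_mem_limits(sk, 2) */
	long nr_sockets = 1000;		/* stand-in for sk_sockets_allocated_read_positive() */

	/* 8 KiB queued: 1000 * 2 pages = 2000 < 4096, may raise. */
	printf("small socket may raise: %d\n",
	       below_average_usage(hard_limit_pages, nr_sockets, 8 * 1024));

	/* 64 KiB queued: 1000 * 16 pages = 16000 >= 4096, denied. */
	printf("large socket may raise: %d\n",
	       below_average_usage(hard_limit_pages, nr_sockets, 64 * 1024));
	return 0;
}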

2023-10-23 07:49:36

by Simon Horman

Subject: Re: [PATCH net v3 1/3] sock: Code cleanup on __sk_mem_raise_allocated()

On Thu, Oct 19, 2023 at 08:00:24PM +0800, Abel Wu wrote:
> Code cleanup for both better simplicity and readability.
> No functional change intended.
>
> Signed-off-by: Abel Wu <[email protected]>
> Acked-by: Shakeel Butt <[email protected]>

Reviewed-by: Simon Horman <[email protected]>

2023-10-23 07:49:53

by Simon Horman

Subject: Re: [PATCH net v3 2/3] sock: Doc behaviors for pressure heuristics

On Thu, Oct 19, 2023 at 08:00:25PM +0800, Abel Wu wrote:
> There are now two accounting infrastructures for skmem, while the
> heuristics in __sk_mem_raise_allocated() were actually introduced
> before memcg was born.
>
> Add some comments to clarify whether they can be applied to both
> infrastructures or not.
>
> Suggested-by: Shakeel Butt <[email protected]>
> Signed-off-by: Abel Wu <[email protected]>
> Acked-by: Shakeel Butt <[email protected]>

Reviewed-by: Simon Horman <[email protected]>

2023-10-24 08:50:31

by patchwork-bot+netdevbpf

Subject: Re: [PATCH net v3 1/3] sock: Code cleanup on __sk_mem_raise_allocated()

Hello:

This series was applied to netdev/net-next.git (main)
by Paolo Abeni <[email protected]>:

On Thu, 19 Oct 2023 20:00:24 +0800 you wrote:
> Code cleanup for both better simplicity and readability.
> No functional change intended.
>
> Signed-off-by: Abel Wu <[email protected]>
> Acked-by: Shakeel Butt <[email protected]>
> ---
> net/core/sock.c | 22 ++++++++++++----------
> 1 file changed, 12 insertions(+), 10 deletions(-)

Here is the summary with links:
- [net,v3,1/3] sock: Code cleanup on __sk_mem_raise_allocated()
https://git.kernel.org/netdev/net-next/c/2def8ff3fdb6
- [net,v3,2/3] sock: Doc behaviors for pressure heuristics
https://git.kernel.org/netdev/net-next/c/2e12072c67b5
- [net,v3,3/3] sock: Ignore memcg pressure heuristics when raising allocated
https://git.kernel.org/netdev/net-next/c/66e6369e312d

You are awesome, thank you!
--
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html