2020-06-25 23:51:10

by Nitesh Narayan Lal

Subject: [Patch v4 3/3] net: Restrict receive packets queuing to housekeeping CPUs

From: Alex Belits <[email protected]>

With the existing implementation of store_rps_map(), packets are queued
in the receive path on the backlog queues of other CPUs irrespective of
whether they are isolated or not. This could add a latency overhead to
any RT workload that is running on the same CPU.

Ensure that store_rps_map() only uses available housekeeping CPUs for
storing the rps_map.

Signed-off-by: Alex Belits <[email protected]>
Signed-off-by: Nitesh Narayan Lal <[email protected]>
---
net/core/net-sysfs.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index e353b822bb15..677868fea316 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -11,6 +11,7 @@
 #include <linux/if_arp.h>
 #include <linux/slab.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/isolation.h>
 #include <linux/nsproxy.h>
 #include <net/sock.h>
 #include <net/net_namespace.h>
@@ -741,7 +742,7 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 {
 	struct rps_map *old_map, *map;
 	cpumask_var_t mask;
-	int err, cpu, i;
+	int err, cpu, i, hk_flags;
 	static DEFINE_MUTEX(rps_map_mutex);
 
 	if (!capable(CAP_NET_ADMIN))
@@ -756,6 +757,13 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 		return err;
 	}
 
+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
+	cpumask_and(mask, mask, housekeeping_cpumask(hk_flags));
+	if (cpumask_empty(mask)) {
+		free_cpumask_var(mask);
+		return -EINVAL;
+	}
+
 	map = kzalloc(max_t(unsigned int,
 			    RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES),
 		      GFP_KERNEL);
--
2.18.4
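
The heart of the patch is the intersection of the user-supplied CPU mask with the kernel's housekeeping mask. A minimal userspace sketch of that filtering, with plain 64-bit words standing in for the kernel's cpumask API and an assumed housekeeping set of CPUs 0-3 (an illustration, not kernel state), behaves as follows:

/*
 * Userspace analogue of the filtering added to store_rps_map() above.
 * Plain 64-bit words stand in for the kernel's cpumask API, and the
 * housekeeping set (CPUs 0-3) is an assumption for illustration only.
 */
#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define HOUSEKEEPING_MASK 0x0fULL	/* hypothetical: CPUs 0-3 are housekeeping */

static int filter_rps_mask(uint64_t requested, uint64_t *effective)
{
	/* Mirrors cpumask_and(): drop every isolated CPU from the request. */
	*effective = requested & HOUSEKEEPING_MASK;

	/* Mirrors the cpumask_empty() check: reject all-isolated requests. */
	if (*effective == 0)
		return -EINVAL;
	return 0;
}

int main(void)
{
	uint64_t eff;

	/* CPUs 2 and 5 requested; isolated CPU 5 is silently dropped. */
	if (filter_rps_mask(0x24ULL, &eff) == 0)
		printf("effective mask: %#llx\n", (unsigned long long)eff);

	/* Only isolated CPUs 4 and 5 requested; the request is refused. */
	if (filter_rps_mask(0x30ULL, &eff) == -EINVAL)
		printf("all-isolated request fails with EINVAL\n");

	return 0;
}

Note that a mask naming a mix of isolated and housekeeping CPUs is quietly narrowed to its housekeeping members; only a mask with no housekeeping CPU at all is rejected with -EINVAL.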


2020-06-26 12:15:39

by Peter Zijlstra

Subject: Re: [Patch v4 3/3] net: Restrict receive packets queuing to housekeeping CPUs

On Thu, Jun 25, 2020 at 06:34:43PM -0400, Nitesh Narayan Lal wrote:
> From: Alex Belits <[email protected]>
>
> With the existing implementation of store_rps_map(), packets are queued
> in the receive path on the backlog queues of other CPUs irrespective of
> whether they are isolated or not. This could add a latency overhead to
> any RT workload that is running on the same CPU.
>
> Ensure that store_rps_map() only uses available housekeeping CPUs for
> storing the rps_map.
>
> Signed-off-by: Alex Belits <[email protected]>
> Signed-off-by: Nitesh Narayan Lal <[email protected]>

Dave, ACK if I route this?

2020-06-26 17:29:23

by David Miller

Subject: Re: [Patch v4 3/3] net: Restrict receive packets queuing to housekeeping CPUs

From: Peter Zijlstra <[email protected]>
Date: Fri, 26 Jun 2020 13:14:01 +0200

> On Thu, Jun 25, 2020 at 06:34:43PM -0400, Nitesh Narayan Lal wrote:
>> From: Alex Belits <[email protected]>
>>
>> With the existing implementation of store_rps_map(), packets are queued
>> in the receive path on the backlog queues of other CPUs irrespective of
>> whether they are isolated or not. This could add a latency overhead to
>> any RT workload that is running on the same CPU.
>>
>> Ensure that store_rps_map() only uses available housekeeping CPUs for
>> storing the rps_map.
>>
>> Signed-off-by: Alex Belits <[email protected]>
>> Signed-off-by: Nitesh Narayan Lal <[email protected]>
>
> Dave, ACK if I route this?

No problem:

Acked-by: David S. Miller <[email protected]>

2020-07-09 08:48:19

by tip-bot2 for Alex Belits

Subject: [tip: sched/core] net: Restrict receive packets queuing to housekeeping CPUs

The following commit has been merged into the sched/core branch of tip:

Commit-ID: 07bbecb3410617816a99e76a2df7576507a0c8ad
Gitweb: https://git.kernel.org/tip/07bbecb3410617816a99e76a2df7576507a0c8ad
Author: Alex Belits <[email protected]>
AuthorDate: Thu, 25 Jun 2020 18:34:43 -04:00
Committer: Peter Zijlstra <[email protected]>
CommitterDate: Wed, 08 Jul 2020 11:39:02 +02:00

net: Restrict receive packets queuing to housekeeping CPUs

With the existing implementation of store_rps_map(), packets are queued
in the receive path on the backlog queues of other CPUs irrespective of
whether they are isolated or not. This could add a latency overhead to
any RT workload that is running on the same CPU.

Ensure that store_rps_map() only uses available housekeeping CPUs for
storing the rps_map.

Signed-off-by: Alex Belits <[email protected]>
Signed-off-by: Nitesh Narayan Lal <[email protected]>
Signed-off-by: Peter Zijlstra (Intel) <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
---
net/core/net-sysfs.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index e353b82..677868f 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -11,6 +11,7 @@
 #include <linux/if_arp.h>
 #include <linux/slab.h>
 #include <linux/sched/signal.h>
+#include <linux/sched/isolation.h>
 #include <linux/nsproxy.h>
 #include <net/sock.h>
 #include <net/net_namespace.h>
@@ -741,7 +742,7 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 {
 	struct rps_map *old_map, *map;
 	cpumask_var_t mask;
-	int err, cpu, i;
+	int err, cpu, i, hk_flags;
 	static DEFINE_MUTEX(rps_map_mutex);
 
 	if (!capable(CAP_NET_ADMIN))
@@ -756,6 +757,13 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 		return err;
 	}
 
+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
+	cpumask_and(mask, mask, housekeeping_cpumask(hk_flags));
+	if (cpumask_empty(mask)) {
+		free_cpumask_var(mask);
+		return -EINVAL;
+	}
+
 	map = kzalloc(max_t(unsigned int,
 			    RPS_MAP_SIZE(cpumask_weight(mask)), L1_CACHE_BYTES),
 		      GFP_KERNEL);
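
With the commit applied, the new behaviour can be observed from userspace through the rps_cpus sysfs attribute. The sketch below is illustrative only: the device eth0, the queue rx-0, and the assumption that CPUs 4 and 5 are isolated are hypothetical, not taken from the thread.

/*
 * Hypothetical probe of the new behaviour. The sysfs path and the
 * choice of "isolated" CPUs are assumptions for illustration.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/sys/class/net/eth0/queues/rx-0/rps_cpus";
	const char *mask = "30";	/* CPUs 4 and 5, assumed isolated */
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* After this patch, a mask holding only isolated CPUs is refused. */
	if (write(fd, mask, strlen(mask)) < 0 && errno == EINVAL)
		printf("all-isolated rps_cpus mask rejected with EINVAL\n");
	close(fd);
	return 0;
}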