2022-07-25 10:20:43

by Geetha sowjanya

Subject: [net-next PATCH] octeontx2-pf: Use only non-isolated cpus in irq affinity

This patch excludes the isolated CPUs from the CPU list
while setting up TX/RX queue interrupt affinity.

Signed-off-by: Geetha sowjanya <[email protected]>
Signed-off-by: Sunil Kovvuri Goutham <[email protected]>
---
 .../ethernet/marvell/octeontx2/nic/otx2_common.c   | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
index fb8db5888d2f..9886a02dd756 100644
--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
@@ -8,6 +8,7 @@
 #include <linux/interrupt.h>
 #include <linux/pci.h>
 #include <net/tso.h>
+#include <linux/sched/isolation.h>
 
 #include "otx2_reg.h"
 #include "otx2_common.h"
@@ -1657,9 +1658,16 @@ void otx2_set_cints_affinity(struct otx2_nic *pfvf)
 {
 	struct otx2_hw *hw = &pfvf->hw;
 	int vec, cpu, irq, cint;
+	cpumask_var_t mask;
+
+	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
+		return;
+
+	cpumask_and(mask, cpu_online_mask,
+		    housekeeping_cpumask(HK_TYPE_DOMAIN));
+	cpu = cpumask_first(mask);
 
 	vec = hw->nix_msixoff + NIX_LF_CINT_VEC_START;
-	cpu = cpumask_first(cpu_online_mask);
 
 	/* CQ interrupts */
 	for (cint = 0; cint < pfvf->hw.cint_cnt; cint++, vec++) {
@@ -1671,10 +1679,11 @@ void otx2_set_cints_affinity(struct otx2_nic *pfvf)
 		irq = pci_irq_vector(pfvf->pdev, vec);
 		irq_set_affinity_hint(irq, hw->affinity_mask[vec]);
 
-		cpu = cpumask_next(cpu, cpu_online_mask);
+		cpu = cpumask_next(cpu, mask);
 		if (unlikely(cpu >= nr_cpu_ids))
-			cpu = 0;
+			cpu = cpumask_first(mask);
 	}
+	free_cpumask_var(mask);
 }
 
 u16 otx2_get_max_mtu(struct otx2_nic *pfvf)
--
2.17.1


2022-07-27 03:28:35

by Jakub Kicinski

Subject: Re: [net-next PATCH] octeontx2-pf: Use only non-isolated cpus in irq affinity

On Mon, 25 Jul 2022 15:14:02 +0530 Geetha sowjanya wrote:
> This patch excludes the isolated CPUs from the CPU list
> while setting up TX/RX queue interrupt affinity.
>
> Signed-off-by: Geetha sowjanya <[email protected]>
> Signed-off-by: Sunil Kovvuri Goutham <[email protected]>

Hm, housekeeping_cpumask() looks barely used by drivers;
do you have any references to discussions indicating that drivers
are expected to pay attention to it? This really seems like something
that the core should take care of.

Tariq, thoughts?

2022-07-27 05:42:46

by Tariq Toukan

Subject: Re: [net-next PATCH] octeontx2-pf: Use only non-isolated cpus in irq affinity



On 7/27/2022 6:08 AM, Jakub Kicinski wrote:
> On Mon, 25 Jul 2022 15:14:02 +0530 Geetha sowjanya wrote:
>> This patch excludes the isolated CPUs from the CPU list
>> while setting up TX/RX queue interrupt affinity.
>>
>> Signed-off-by: Geetha sowjanya <[email protected]>
>> Signed-off-by: Sunil Kovvuri Goutham <[email protected]>
>
> Hm, housekeeping_cpumask() looks barely used by drivers;
> do you have any references to discussions indicating that drivers
> are expected to pay attention to it? This really seems like something
> that the core should take care of.
>
> Tariq, thoughts?

I agree.
IMO this logic best fits inside the new sched API I proposed last week
(pending Ack...), transparently to the driver.

Find here:
https://lore.kernel.org/all/[email protected]/

2022-08-01 06:59:40

by Tariq Toukan

Subject: Re: [net-next PATCH] octeontx2-pf: Use only non-isolated cpus in irq affinity



On 7/27/2022 10:03 AM, Sunil Kovvuri wrote:
>
>
> On Wed, Jul 27, 2022 at 11:01 AM Tariq Toukan <[email protected]> wrote:
>
>
>
> On 7/27/2022 6:08 AM, Jakub Kicinski wrote:
> > On Mon, 25 Jul 2022 15:14:02 +0530 Geetha sowjanya wrote:
> >> This patch excludes the isolated CPUs from the CPU list
> >> while setting up TX/RX queue interrupt affinity.
> >>
> >> Signed-off-by: Geetha sowjanya <[email protected]>
> >> Signed-off-by: Sunil Kovvuri Goutham <[email protected]>
> >
> > Hm, housekeeping_cpumask() looks barely used by drivers;
> > do you have any references to discussions indicating that drivers
> > are expected to pay attention to it? This really seems like something
> > that the core should take care of.
> >
> > Tariq, thoughts?
>
> I agree.
> IMO this logic best fits inside the new sched API I proposed last week
> (pending Ack...), transparently to the driver.
>
> Find here:
> https://lore.kernel.org/all/[email protected]/
>
>
> You mean
>
> +static bool sched_cpus_spread_by_distance(int node, u16 *cpus, int ncpus)
> +{
> ....
> +	cpumask_copy(cpumask, cpu_online_mask);
>
> Change cpu_online_mask here to a mask which gives only the non-isolated cores?
>

Yes, that was the intention.
However, on second thought, I'm not sure this is a good idea.

In some cases, the device driver is isolated out in favor of other
higher-priority tasks, while in other cases the device driver
processing itself is the high-priority task and is pinned to these
CPUs for best performance. As the CPU spread usually affects affinity
hints and NUMA-aware allocations, your patch might cause a degradation
if always applied.
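For context on the two deployment modes in the reply above, here is a hedged sketch (example CPU ranges and the `<irq-number>` placeholder are illustrative, not from the thread): which CPUs end up in `housekeeping_cpumask(HK_TYPE_DOMAIN)` is decided at boot, while per-interrupt placement can still be steered from userspace.

```sh
# Mode 1: boot-time isolation. CPUs 2-5 are removed from the scheduler
# domains, so housekeeping_cpumask(HK_TYPE_DOMAIN) contains only the
# remaining CPUs and a driver honoring it would avoid 2-5.
isolcpus=domain,2-5

# Mode 2: the NIC processing *is* the prioritized workload. Pin a given
# IRQ onto CPUs 2-3 regardless of isolation (bitmask 0xc = CPUs 2 and 3):
echo c > /proc/irq/<irq-number>/smp_affinity
```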