2022-10-04 00:04:28

by John Stultz

Subject: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

From: Connor O'Brien <[email protected]>

In certain audio use cases, scheduling RT threads on cores that
are handling softirqs can lead to glitches. Prevent this
behavior in cases where the softirq is likely to take a long
time. To avoid unnecessary migrations, the old behavior is
preserved for RCU, SCHED and TIMER irqs which are expected to be
relatively quick.

This patch reworks and combines two related changes originally
by John Dias <[email protected]>

Cc: John Dias <[email protected]>
Cc: Connor O'Brien <[email protected]>
Cc: Rick Yiu <[email protected]>
Cc: John Kacur <[email protected]>
Cc: Qais Yousef <[email protected]>
Cc: Chris Redpath <[email protected]>
Cc: Abhijeet Dharmapurikar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: [email protected]
Signed-off-by: John Dias <[email protected]>
[elavila: Port to mainline, amend commit text]
Signed-off-by: J. Avila <[email protected]>
[connoro: Reworked, simplified, and merged two patches together]
Signed-off-by: Connor O'Brien <[email protected]>
[jstultz: Further simplified and fixed issues, reworded commit
message, removed arm64-isms]
Signed-off-by: John Stultz <[email protected]>
---
v2:
* Reformatted Kconfig entry to match coding style
(Reported-by: Randy Dunlap <[email protected]>)
* Made rt_task_fits_capacity_and_may_preempt static to
avoid warnings (Reported-by: kernel test robot <[email protected]>)
* Rework to use preempt_count and drop kconfig dependency on ARM64
v3:
* Use introduced __cpu_softirq_pending() to avoid s390 build
issues (Reported-by: kernel test robot <[email protected]>)
v4:
* Drop TASKLET_SOFTIRQ from LONG_SOFTIRQS (suggested by Qais)
* Depend on !PREEMPT_RT (Suggested by Qais)
* Larger simplification of logic (suggested by Qais)
* Rework LONG_SOFTIRQS to use BIT() macros
* Rename task_may_preempt() to cpu_busy_with_softirqs()
---
include/linux/interrupt.h | 6 ++++
init/Kconfig | 10 +++++++
kernel/sched/rt.c | 61 +++++++++++++++++++++++++++++++++------
kernel/softirq.c | 9 ++++++
4 files changed, 77 insertions(+), 9 deletions(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index a749a8663841..e3a4add67e8c 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -582,6 +582,11 @@ enum
* _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
*/
#define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ))
+/* Softirq's where the handling might be long: */
+#define LONG_SOFTIRQ_MASK (BIT(NET_TX_SOFTIRQ) | \
+ BIT(NET_RX_SOFTIRQ) | \
+ BIT(BLOCK_SOFTIRQ) | \
+ BIT(IRQ_POLL_SOFTIRQ))

/* map softirq index to softirq name. update 'softirq_to_name' in
* kernel/softirq.c when adding a new softirq.
@@ -617,6 +622,7 @@ extern void raise_softirq_irqoff(unsigned int nr);
extern void raise_softirq(unsigned int nr);

DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
+DECLARE_PER_CPU(u32, active_softirqs);

static inline struct task_struct *this_cpu_ksoftirqd(void)
{
diff --git a/init/Kconfig b/init/Kconfig
index 532362fcfe31..3d1de6edcfa1 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1284,6 +1284,16 @@ config SCHED_AUTOGROUP
desktop applications. Task group autogeneration is currently based
upon task session.

+config RT_SOFTIRQ_OPTIMIZATION
+ bool "Improve RT scheduling during long softirq execution"
+ depends on SMP && !PREEMPT_RT
+ default n
+ help
+ Enable an optimization which tries to avoid placing RT tasks on CPUs
+ occupied by nonpreemptible tasks, such as a long softirq or CPUs
+ which may soon block preemptions, such as a CPU running a ksoftirq
+ thread which handles slow softirqs.
+
config SYSFS_DEPRECATED
bool "Enable deprecated sysfs features to support old userspace tools"
depends on SYSFS
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 55f39c8f4203..3c628db807c8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
#ifdef CONFIG_SMP
static int find_lowest_rq(struct task_struct *task);

+#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
+#define __use_softirq_opt 1
+/*
+ * Return whether the given cpu is currently non-preemptible
+ * while handling a potentially long softirq, or if the current
+ * task is likely to block preemptions soon because it is a
+ * ksoftirq thread that is handling slow softirq.
+ */
+static bool cpu_busy_with_softirqs(int cpu)
+{
+ u32 softirqs = per_cpu(active_softirqs, cpu) |
+ __cpu_softirq_pending(cpu);
+ struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
+ struct task_struct *curr;
+ struct rq *rq = cpu_rq(cpu);
+ int ret;
+
+ rcu_read_lock();
+ curr = READ_ONCE(rq->curr); /* unlocked access */
+ ret = (softirqs & LONG_SOFTIRQ_MASK) &&
+ (curr == cpu_ksoftirqd ||
+ preempt_count() & SOFTIRQ_MASK);
+ rcu_read_unlock();
+ return ret;
+}
+#else
+#define __use_softirq_opt 0
+static bool cpu_busy_with_softirqs(int cpu)
+{
+ return false;
+}
+#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
+
+static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
+{
+ return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
+}
+
static int
select_task_rq_rt(struct task_struct *p, int cpu, int flags)
{
@@ -1637,22 +1675,24 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
* This test is optimistic, if we get it wrong the load-balancer
* will have to sort it out.
*
- * We take into account the capacity of the CPU to ensure it fits the
- * requirement of the task - which is only important on heterogeneous
- * systems like big.LITTLE.
+ * We use rt_task_fits_cpu() to evaluate if the CPU is busy with
+ * potentially long-running softirq work, as well as take into
+ * account the capacity of the CPU to ensure it fits the
+ * requirement of the task - which is only important on
+ * heterogeneous systems like big.LITTLE.
*/
test = curr &&
unlikely(rt_task(curr)) &&
(curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);

- if (test || !rt_task_fits_capacity(p, cpu)) {
+ if (test || !rt_task_fits_cpu(p, cpu)) {
int target = find_lowest_rq(p);

/*
* Bail out if we were forcing a migration to find a better
* fitting CPU but our search failed.
*/
- if (!test && target != -1 && !rt_task_fits_capacity(p, target))
+ if (!test && target != -1 && !rt_task_fits_cpu(p, target))
goto out_unlock;

/*
@@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
return -1; /* No other targets possible */

/*
- * If we're on asym system ensure we consider the different capacities
- * of the CPUs when searching for the lowest_mask.
+ * If we're using the softirq optimization or if we are
+ * on asym system, ensure we consider the softirq processing
+ * or different capacities of the CPUs when searching for the
+ * lowest_mask.
*/
- if (static_branch_unlikely(&sched_asym_cpucapacity)) {
+ if (__use_softirq_opt ||
+ static_branch_unlikely(&sched_asym_cpucapacity)) {

ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
task, lowest_mask,
- rt_task_fits_capacity);
+ rt_task_fits_cpu);
} else {

ret = cpupri_find(&task_rq(task)->rd->cpupri,
diff --git a/kernel/softirq.c b/kernel/softirq.c
index c8a6913c067d..35ee79dd8786 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp

DEFINE_PER_CPU(struct task_struct *, ksoftirqd);

+/*
+ * active_softirqs -- per cpu, a mask of softirqs that are being handled,
+ * with the expectation that approximate answers are acceptable and therefore
+ * no synchronization.
+ */
+DEFINE_PER_CPU(u32, active_softirqs);
+
const char * const softirq_to_name[NR_SOFTIRQS] = {
"HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
"TASKLET", "SCHED", "HRTIMER", "RCU"
@@ -551,6 +558,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
restart:
/* Reset the pending bitmask before enabling irqs */
set_softirq_pending(0);
+ __this_cpu_write(active_softirqs, pending);

local_irq_enable();

@@ -580,6 +588,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
pending >>= softirq_bit;
}

+ __this_cpu_write(active_softirqs, 0);
if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
__this_cpu_read(ksoftirqd) == current)
rcu_softirq_qs();
--
2.38.0.rc1.362.ged0d419d3c-goog


2022-10-04 02:45:24

by John Stultz

Subject: Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Mon, Oct 3, 2022 at 6:56 PM Hillf Danton <[email protected]> wrote:
> On 3 Oct 2022 23:20:32 +0000 John Stultz <[email protected]>
> > +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> > +#define __use_softirq_opt 1
> > +/*
> > + * Return whether the given cpu is currently non-preemptible
> > + * while handling a potentially long softirq, or if the current
> > + * task is likely to block preemptions soon because it is a
> > + * ksoftirq thread that is handling slow softirq.
> > + */
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > + u32 softirqs = per_cpu(active_softirqs, cpu) |
> > + __cpu_softirq_pending(cpu);
> > + struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> > + struct task_struct *curr;
> > + struct rq *rq = cpu_rq(cpu);
> > + int ret;
> > +
> > + rcu_read_lock();
> > + curr = READ_ONCE(rq->curr); /* unlocked access */
> > + ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> > + (curr == cpu_ksoftirqd ||
> > + preempt_count() & SOFTIRQ_MASK);
> > + rcu_read_unlock();
> > + return ret;
> > +}
> > +#else
> > +#define __use_softirq_opt 0
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > + return false;
> > +}
> > +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> > +
> > +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
> > +{
> > + return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
> > +}
>
> On one hand, an RT task is not latency sensitive enough if it fails to preempt
> ksoftirqd. On the other, deferring softirq to ksoftirqd barely makes sense
> in 3/3 if it preempts the current RT task.

Apologies, I'm not sure I'm following you here. Why would ksoftirqd
preempt the rt task?

thanks
-john

2022-10-05 02:37:10

by John Stultz

Subject: Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Tue, Oct 4, 2022 at 5:22 PM Hillf Danton <[email protected]> wrote:
> On 3 Oct 2022 19:29:36 -0700 John Stultz <[email protected]>
> >
> > Why would ksoftirqd preempt the rt task?
> >
> For example the kthread becomes sensitive to latency.

Again, my apologies, I'm still not sure I am following. Could you
expand a bit on the case you're concerned about? Is it the case where
the ksoftirqd thread is configured to run at higher rtprio?

thanks
-john

2022-10-10 15:50:06

by Qais Yousef

Subject: Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On 10/05/22 14:01, Hillf Danton wrote:
> On 4 Oct 2022 18:13:52 -0700 John Stultz <[email protected]>
> > On Tue, Oct 4, 2022 at 5:22 PM Hillf Danton <[email protected]> wrote:
> > > On 3 Oct 2022 19:29:36 -0700 John Stultz <[email protected]>
> > > >
> > > > Why would ksoftirqd preempt the rt task?
> > > >
> > > For example the kthread becomes sensitive to latency.
> >
> > Is it the case where
> > the ksoftirqd thread is configured to run at higher rtprio?
> >
> Yes, you are right.

I don't see a problem here. If a sys-admin configures their ksoftirqds to be
a higher priority RT tasks than the audio threads, then they better know what
they're doing :-)

The issue at hand here is that the boundedness of softirqs is hard to control,
and the ensuing scheduling delays are hard for any sys-admin to deal with.

Networking has actually introduced some knobs to help control that - but the
tricky bit is still being able to deliver high-throughput networking while
keeping the softirqs bounded to minimize scheduling delays/latencies. I think
even for PREEMPT_RT, high-performance networking could be impacted to achieve
the required low latency.

See this paper which explores this duality:

https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.702.7571&rep=rep1&type=pdf


With WiFi 6 and 5G mobile networks, phones are actually expected to deliver
multi-gigabit network throughputs.


Cheers

--
Qais Yousef

2022-10-12 14:25:46

by Qais Yousef

Subject: Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On 10/11/22 19:18, Hillf Danton wrote:

[...]

> > The issue at hand here is that the boundedness of softirqs is hard to control,
> > and the ensuing scheduling delays are hard for any sys-admin to deal with.
> >
> Given "The RT scheduler is for tasks that require strick scheduling
> requirements over all else, including performance." [1], I would invite
> Steve to shed some more light on the relation between RT scheduler and
> performance/network throughputs.
>
> Bonus question: why does softirq take no care of RT tasks, given the strict
> scheduling requirements above?
>
> [1] https://lore.kernel.org/lkml/[email protected]/

I think you're mixing the two patches up. That patch is to help describe some
latency requirements for CFS tasks. It has nothing to do with RT. Your
suggestion to use RT scheduler is not valid as these tasks can't be converted
to RT. Which is what Steve was trying to say IIUC.

Generally converting normal application tasks into RT is a recipe for disaster
because:

1. Misbehaving RT tasks (busy looping indefinitely) can hog the system
to a halt.
2. How do you manage priorities of all these pseudo-random RT tasks
each application spawns so you end up with correct resource sharing?

ie: using RT policy is a privileged operation for a reason :-)

The target audience for latency_nice is the average Joe task from any
application that might have some latency requirements to deliver a better user
experience. ie: it's not strict latency requirement. We just want to handle
delays within _CFS_ part of the scheduler.

By the way, your emails don't seem to make it to LKML for some reason; they
show as '[not found]' on lore.

https://lore.kernel.org/lkml/20221010154216.6mw7fszdaoajurvm@wubuntu/#r


Thanks

--
Qais Yousef

2022-10-17 13:04:47

by Alexander Gordeev

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> From: Connor O'Brien <[email protected]>

Hi John, Connor,

I took a cursory look and have couple of hopefully meaningful
comments, but mostly - questions.

[...]

> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 55f39c8f4203..3c628db807c8 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
> #ifdef CONFIG_SMP
> static int find_lowest_rq(struct task_struct *task);
>
> +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> +#define __use_softirq_opt 1
> +/*
> + * Return whether the given cpu is currently non-preemptible
> + * while handling a potentially long softirq, or if the current
> + * task is likely to block preemptions soon because it is a
> + * ksoftirq thread that is handling slow softirq.

What are slow softirqs in this context compared to long ones?

> + */
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> + u32 softirqs = per_cpu(active_softirqs, cpu) |
> + __cpu_softirq_pending(cpu);
> + struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> + struct task_struct *curr;
> + struct rq *rq = cpu_rq(cpu);
> + int ret;
> +
> + rcu_read_lock();
> + curr = READ_ONCE(rq->curr); /* unlocked access */

select_task_rq_rt() takes the lock and reads curr already,
before calling this function. I think there is a way to
decompose it in a better way.

> + ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> + (curr == cpu_ksoftirqd ||

EOL is extra.

> + preempt_count() & SOFTIRQ_MASK);

Could you please clarify this whole check in more detail?

What is the point in checking if a remote CPU is handling
a "long" softirq while the local one is handling any softirq?

> + rcu_read_unlock();

Why ret needs to be calculated under the lock?

> + return ret;
> +}
> +#else
> +#define __use_softirq_opt 0
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> + return false;
> +}
> +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> +
> +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)

To me, the new name is unfortunate, since it strips a notion
of the reason. Instead of "CPU un/fits, because of capacity", it
reads as "CPU un/fits, because of ..." what?

> +{
> + return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);

I guess the order needs to be swapped, as rt_task_fits_capacity()
is rather "quick" while cpu_busy_with_softirqs() is rather "slow".

> +}
> +
> static int
> select_task_rq_rt(struct task_struct *p, int cpu, int flags)
> {
> @@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
> return -1; /* No other targets possible */
>
> /*
> - * If we're on asym system ensure we consider the different capacities
> - * of the CPUs when searching for the lowest_mask.
> + * If we're using the softirq optimization or if we are
> + * on asym system, ensure we consider the softirq processing
> + * or different capacities of the CPUs when searching for the
> + * lowest_mask.
> */
> - if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> + if (__use_softirq_opt ||

Why use __use_softirq_opt and not IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION)?

> + static_branch_unlikely(&sched_asym_cpucapacity)) {
>
> ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
> task, lowest_mask,
> - rt_task_fits_capacity);
> + rt_task_fits_cpu);
> } else {
>
> ret = cpupri_find(&task_rq(task)->rd->cpupri,
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index c8a6913c067d..35ee79dd8786 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp
>
> DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
>
> +/*
> + * active_softirqs -- per cpu, a mask of softirqs that are being handled,
> + * with the expectation that approximate answers are acceptable and therefore
> + * no synchronization.
> + */
> +DEFINE_PER_CPU(u32, active_softirqs);

I guess all active_softirqs uses need to be coupled with
IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION) check.

> const char * const softirq_to_name[NR_SOFTIRQS] = {
> "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
> "TASKLET", "SCHED", "HRTIMER", "RCU"
> @@ -551,6 +558,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
> restart:
> /* Reset the pending bitmask before enabling irqs */
> set_softirq_pending(0);
> + __this_cpu_write(active_softirqs, pending);
>
> local_irq_enable();
>
> @@ -580,6 +588,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
> pending >>= softirq_bit;
> }
>
> + __this_cpu_write(active_softirqs, 0);
> if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
> __this_cpu_read(ksoftirqd) == current)
> rcu_softirq_qs();

Thanks!

2022-10-18 04:01:13

by John Stultz

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Mon, Oct 17, 2022 at 5:40 AM Alexander Gordeev
<[email protected]> wrote:
>
> On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> > From: Connor O'Brien <[email protected]>
> > diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> > index 55f39c8f4203..3c628db807c8 100644
> > --- a/kernel/sched/rt.c
> > +++ b/kernel/sched/rt.c
> > @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
> > #ifdef CONFIG_SMP
> > static int find_lowest_rq(struct task_struct *task);
> >
> > +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> > +#define __use_softirq_opt 1
> > +/*
> > + * Return whether the given cpu is currently non-preemptible
> > + * while handling a potentially long softirq, or if the current
> > + * task is likely to block preemptions soon because it is a
> > + * ksoftirq thread that is handling slow softirq.
>
> What are slow softirqs in this context compared to long ones?

So the long softirqs are any of the softirqs in the LONG_SOFTIRQ_MASK
(net, block, irqpoll).
The "slow softirq" just refers to softirq processing that has already
been pushed out to ksoftirqd on the target cpu.

By default ksoftirqd's priority is likely not high enough to prevent
the rt task from running; however, it does disable preemption while
it's running the softirqs, so it effectively could block the rt task
from running for a while.
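For reference, the reason is roughly this (paraphrasing __do_softirq(), so
treat it as a sketch rather than the exact mainline code):

	/*
	 * __do_softirq(), schematically: the SOFTIRQ count in preempt_count
	 * stays raised for the whole time the handlers run, so a newly
	 * woken RT task on this CPU cannot preempt until they finish.
	 */
	__local_bh_disable_ip(_RET_IP_, SOFTIRQ_OFFSET);
	local_irq_enable();
	/* ... run the pending softirq handlers, potentially for a long time ... */
	__local_bh_enable(SOFTIRQ_OFFSET);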


> > + */
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > + u32 softirqs = per_cpu(active_softirqs, cpu) |
> > + __cpu_softirq_pending(cpu);
> > + struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> > + struct task_struct *curr;
> > + struct rq *rq = cpu_rq(cpu);
> > + int ret;
> > +
> > + rcu_read_lock();
> > + curr = READ_ONCE(rq->curr); /* unlocked access */
>
> select_task_rq_rt() takes the lock and reads curr already,
> > before calling this function. I think there is a way to
> decompose it in a better way.

Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
Trying to pass curr into cpu_busy_with_softirqs() would mean
cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
specified cpu and pass that in.

The original patch actually was


> > + ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> > + (curr == cpu_ksoftirqd ||
>
> EOL is extra.

Removed - thanks!

>
> > + preempt_count() & SOFTIRQ_MASK);
>
> Could you please clarify this whole check in more detail?
>
> What is the point in checking if a remote CPU is handling
> a "long" softirq while the local one is handling any softirq?

Good question! This looks like an error from my rework of the earlier
version of the patch.
In v1 it was:
task_thread_info(curr)->preempt_count & SOFTIRQ_MASK));
And it looks like I erroneously condensed that to preempt_count() &
SOFTIRQ_MASK treating curr (from the target cpu rq) as if it were
current. :P

I'll fix this. Thank you for catching it!

Just to expand what it should be in detail:
1: (softirqs & LONG_SOFTIRQ_MASK) &&
2: (curr == cpu_ksoftirqd ||
3: task_thread_info(curr)->preempt_count & SOFTIRQ_MASK)

Where we're checking
1) that the active_softirqs and __cpu_softirq_pending() values on the
target cpu are running a long softirq.
AND (
2) The current task on the target cpu is ksoftirqd
OR
3) The preempt_count of the current task on the target cpu has SOFTIRQ entries
)
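Or in code form, roughly what the condition was meant to be (matching the
v1 logic above):

	ret = (softirqs & LONG_SOFTIRQ_MASK) &&
	      (curr == cpu_ksoftirqd ||
	       task_thread_info(curr)->preempt_count & SOFTIRQ_MASK);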

> > + rcu_read_unlock();
>
> Why ret needs to be calculated under the lock?

I believe it is the rq->curr == cpu_ksoftirqd (and the erroneously
removed task_thread_info(curr)) check?
Part of me wonders if it should have the rcu_dereference(rq->curr) as well?

> > + return ret;
> > +}
> > +#else
> > +#define __use_softirq_opt 0
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > + return false;
> > +}
> > +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> > +
> > +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
>
> To me, the new name is unfortunate, since it strips a notion
> of the reason. Instead of "CPU un/fits, because of capacity", it
> reads as "CPU un/fits, because of ..." what?

As with all complicated questions: "It depends" :) On the kernel
config specifically.

The earlier version of the patch had:
rt_task_fits_capacity_and_may_preempt() but Qais suggested it be
switched to the generic "fits_cpu" as it would depend on the kernel
config as to which aspect of "fit" was being considered.

I guess it could be rt_task_fits_capacity_and_softirq_free() ?

> > +{
> > + return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
>
> I guess the order needs to be swapped, as rt_task_fits_capacity()
> is rather "quick" while cpu_busy_with_softirqs() is rather "slow".

Fair point. I've gone ahead and tweaked that. Thanks!

> > @@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
> > return -1; /* No other targets possible */
> >
> > /*
> > - * If we're on asym system ensure we consider the different capacities
> > - * of the CPUs when searching for the lowest_mask.
> > + * If we're using the softirq optimization or if we are
> > + * on asym system, ensure we consider the softirq processing
> > + * or different capacities of the CPUs when searching for the
> > + * lowest_mask.
> > */
> > - if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> > + if (__use_softirq_opt ||
>
> Why use __use_softirq_opt and not IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION)?

Because IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION) seemed a bit long
and less easy to read?
I'm ambivalent if folks would rather have the CONFIG switch be
explicit here, but to me it seemed nicer to read the flag variable.

> > @@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp
> >
> > DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
> >
> > +/*
> > + * active_softirqs -- per cpu, a mask of softirqs that are being handled,
> > + * with the expectation that approximate answers are acceptable and therefore
> > + * no synchronization.
> > + */
> > +DEFINE_PER_CPU(u32, active_softirqs);
>
> I guess all active_softirqs uses need to be coupled with
> IS_ENABLED(CONFIG_RT_SOFTIRQ_OPTIMIZATION) check.

I guess it could be wrapped. I didn't see the per-cpu accounting as
troublesome enough to conditionalize, but the extra per-cpu data isn't
great if no one uses it. So I'll do that.

Thanks so much for the review and feedback! I really appreciate it.
-john

2022-10-18 04:30:29

by John Stultz

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Mon, Oct 17, 2022 at 8:42 PM John Stultz <[email protected]> wrote:
> On Mon, Oct 17, 2022 at 5:40 AM Alexander Gordeev
> <[email protected]> wrote:
> > select_task_rq_rt() takes the lock and reads curr already,
> > before calling this function. I think there is a way to
> > decompose it in a better way.
>
> Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
> Trying to pass curr into cpu_busy_with_softirqs() would mean
> cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
> specified cpu and pass that in.
>
> The original patch actually was
>

Whoops I didn't finish my thought there. I was going to say the
original patch did similar to your suggestion, passing the target cpu
curr task value in from select_task_rq_rt().
However it didn't use the cpupri_find_fitness() and duplicated a lot
of similar logic in a way that is not as nice as what the current
version uses. But I'm very much open to suggestions for other ways to
simplify that as you suggest.

thanks
-john

2022-10-18 06:21:08

by John Stultz

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Mon, Oct 17, 2022 at 8:42 PM John Stultz <[email protected]> wrote:
> On Mon, Oct 17, 2022 at 5:40 AM Alexander Gordeev
> <[email protected]> wrote:
> > On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> > > + preempt_count() & SOFTIRQ_MASK);
> >
> > Could you please clarify this whole check in more detail?
> >
> > What is the point in checking if a remote CPU is handling
> > a "long" softirq while the local one is handling any softirq?
>
> Good question! This looks like an error from my rework of the earlier
> version of the patch.
> In v1 it was:
> task_thread_info(curr)->preempt_count & SOFTIRQ_MASK));
> And it looks like I erroneously condensed that to preempt_count() &
> SOFTIRQ_MASK treating curr (from the target cpu rq) as if it were
> current. :P
>
> I'll fix this. Thank you for catching it!

Ah, it's not so trivial to fix, as I see part of my motivation for
condensing it was that task_thread_info(curr)->preempt_count doesn't
build on x86 (which uses percpu values instead of threadinfo).

So I'll have to think a bit more about how to do this generically, and
if the conditionalization can be otherwise simplified.

Thanks again for pointing the issue out!
-john

2022-10-19 10:35:51

by Alexander Gordeev

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Mon, Oct 17, 2022 at 08:42:53PM -0700, John Stultz wrote:
> Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
> Trying to pass curr into cpu_busy_with_softirqs() would mean
> cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
> specified cpu and pass that in.

Maybe you could have a lightweight checker that accepts rq and curr
and gets called from select_task_rq_rt(). Then you could call that
same checker from cpu_busy_with_softirqs().

> Just to expand what it should be in detail:
> 1: (softirqs & LONG_SOFTIRQ_MASK) &&
> 2: (curr == cpu_ksoftirqd ||
> 3: task_thread_info(curr)->preempt_count & SOFTIRQ_MASK)
>
> Where we're checking
> 1) that the active_softirqs and __cpu_softirq_pending() values on the
> target cpu are running a long softirq.
> AND (
> 2) The current task on the target cpu is ksoftirqd
> OR
> 3) The preempt_count of the current task on the target cpu has SOFTIRQ entries
> )

2) When the target CPU is handling or about to handle long softirqs
already what is the difference if it is also running ksoftirqd or not?

3) What is the point of this check when 1) is true already?

Thanks!

2022-10-19 11:58:18

by Qais Yousef

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On 10/17/22 20:42, John Stultz wrote:

> > > + return ret;
> > > +}
> > > +#else
> > > +#define __use_softirq_opt 0
> > > +static bool cpu_busy_with_softirqs(int cpu)
> > > +{
> > > + return false;
> > > +}
> > > +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> > > +
> > > +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
> >
> > To me, the new name is unfortunate, since it strips a notion
> > of the reason. Instead of "CPU un/fits, because of capacity", it
> > reads as "CPU un/fits, because of ..." what?
>
> As with all complicated questions: "It depends" :) On the kernel
> config specifically.
>
> The earlier version of the patch had:
> rt_task_fits_capacity_and_may_preempt() but Qais suggested it be
> switched to the generic "fits_cpu" as it would depend on the kernel
> config as to which aspect of "fit" was being considered.
>
> I guess it could be rt_task_fits_capacity_and_softirq_free() ?

My line of thinking is that we have multiple reasons which could potentially
come and go. Having a simple generic name is future proof to the details of the
reason.

For example, in the future we might find a better way to handle the softirq
conflict and remove this reason; or we might find a new 'reason' is required.
Documenting that in the function header (if required, I see the one line
implementation is self descriptive so far) rather than in name made more sense
to me, hence the suggestion.

I am already aware of another new reason required to handle thermal pressure
when checking for capacity [1]. The discussion has stalled, but the problem is
still there and we'd need to address it. So I expect this code will be massaged
somewhat in the near future.

Sticking to rt_task_fits_cpu() will reduce the churn IMHO. But if you really
prefer something else, works for me :-)

[1] https://lore.kernel.org/lkml/20220514235513.jm7ul2y6uddj6eh2@airbuntu/


Cheers

--
Qais Yousef

2022-10-19 22:26:04

by John Stultz

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Wed, Oct 19, 2022 at 2:11 AM Alexander Gordeev
<[email protected]> wrote:
>
> On Mon, Oct 17, 2022 at 08:42:53PM -0700, John Stultz wrote:
> > Hrm. Suggestions? As select_task_rq_rt() is only one of the callers.
> > Trying to pass curr into cpu_busy_with_softirqs() would mean
> > cpupri_find_fitness() would need to read the cpu_rq(cpu)->curr for the
> > specified cpu and pass that in.
>
> Maybe you could have a lightweight checker that accepts rq and curr
> and gets called from select_task_rq_rt(). Then you could call that
> same checker from cpu_busy_with_softirqs().

Fair enough. Though your other questions are making me wonder if this
is necessary.

> > Just to expand what it should be in detail:
> > 1: (softirqs & LONG_SOFTIRQ_MASK) &&
> > 2: (curr == cpu_ksoftirqd ||
> > 3: task_thread_info(curr)->preempt_count & SOFTIRQ_MASK)
> >
> > Where we're checking
> > 1) that the active_softirqs and __cpu_softirq_pending() values on the
> > target cpu are running a long softirq.
> > AND (
> > 2) The current task on the target cpu is ksoftirqd
> > OR
> > 3) The preempt_count of the current task on the target cpu has SOFTIRQ entries
> > )
>
> 2) When the target CPU is handling or about to handle long softirqs
> already what is the difference if it is also running ksoftirqd or not?

Again, a good question! From my understanding, the original patch was
basically checking just #2 and #3 above, then additional logic was
added to narrow it to only the LONG_SOFTIRQ_MASK values, so that may
make the older part of the check redundant.

I fret there are some edge cases where on the target cpu softirqs
might be pending but ksoftirqd isn't running yet maybe due to a
lowish-prio rt task - such that the cpu could still be considered a
good target. But this seems a bit of a stretch.

> 3) What is the point of this check when 1) is true already?

Yeah, the more I think about this, the more duplicative it seems.
Again, there are some edge details about the preempt_count being set
before the active_softirq accounting is set, but the whole decision
here about the target cpus is a bit racy to begin with, so I'm not
sure if that is significant.

So I'll go ahead and simplify the check to just the LONG_SOFTIRQ_MASK
& (active | pending softirqs) check. This should avoid the need to
pull the cpu_rq(cpu)->curr value and simplify things.
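In other words, something along these lines (untested sketch of the
simplified helper):

	static bool cpu_busy_with_softirqs(int cpu)
	{
		u32 softirqs = per_cpu(active_softirqs, cpu) |
			       __cpu_softirq_pending(cpu);

		return !!(softirqs & LONG_SOFTIRQ_MASK);
	}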

Will send out a new version once I've been able to validate that
simplification doesn't introduce a regression.

Thanks so much for the feedback and suggestions!
-john

2022-10-20 00:42:30

by Qais Yousef

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On 10/19/22 15:09, John Stultz wrote:

> I fret there are some edge cases where on the target cpu softirqs
> might be pending but ksoftirqd isn't running yet maybe due to a
> lowish-prio rt task - such that the cpu could still be considered a
> good target. But this seems a bit of a stretch.

Could using ksoftirqd_running() instead help with robustness here?


Cheers

--
Qais Yousef

2022-10-20 13:41:09

by Alexander Gordeev

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Wed, Oct 19, 2022 at 03:09:15PM -0700, John Stultz wrote:

Hi John,

[...]

> So I'll go ahead and simplify the check to just the LONG_SOFTIRQ_MASK
> & (active | pending softirqs) check. This should avoid the need to
> pull the cpu_rq(cpu)->curr value and simplify things.

In my reading of your approach if you find a way to additionally
indicate long softirqs being handled by the remote ksoftirqd, it
would cover all obvious/not-corner cases.

> -john

2022-10-22 19:14:35

by Joel Fernandes

Subject: Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

Hi,

On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> From: Connor O'Brien <[email protected]>
>
> In certain audio use cases, scheduling RT threads on cores that
> are handling softirqs can lead to glitches. Prevent this
> behavior in cases where the softirq is likely to take a long
> time. To avoid unnecessary migrations, the old behavior is
> preserved for RCU, SCHED and TIMER irqs which are expected to be
> relatively quick.
>
> This patch reworks and combines two related changes originally
> by John Dias <[email protected]>
[..]
> ---
> v2:
> * Reformatted Kconfig entry to match coding style
> (Reported-by: Randy Dunlap <[email protected]>)
> * Made rt_task_fits_capacity_and_may_preempt static to
> avoid warnings (Reported-by: kernel test robot <[email protected]>)
> * Rework to use preempt_count and drop kconfig dependency on ARM64
> v3:
> * Use introduced __cpu_softirq_pending() to avoid s390 build
> issues (Reported-by: kernel test robot <[email protected]>)
> v4:
> * Drop TASKLET_SOFTIRQ from LONG_SOFTIRQS (suggested by Qais)
> * Depend on !PREEMPT_RT (Suggested by Qais)
> * Larger simplification of logic (suggested by Qais)
> * Rework LONG_SOFTIRQS to use BIT() macros
> * Rename task_may_preempt() to cpu_busy_with_softirqs()
> ---
> include/linux/interrupt.h | 6 ++++
> init/Kconfig | 10 +++++++
> kernel/sched/rt.c | 61 +++++++++++++++++++++++++++++++++------
> kernel/softirq.c | 9 ++++++
> 4 files changed, 77 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> index a749a8663841..e3a4add67e8c 100644
> --- a/include/linux/interrupt.h
> +++ b/include/linux/interrupt.h
> @@ -582,6 +582,11 @@ enum
> * _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
> */
> #define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ))
> +/* Softirq's where the handling might be long: */
> +#define LONG_SOFTIRQ_MASK (BIT(NET_TX_SOFTIRQ) | \
> + BIT(NET_RX_SOFTIRQ) | \
> + BIT(BLOCK_SOFTIRQ) | \
> + BIT(IRQ_POLL_SOFTIRQ))

Instead of a mask, I wonder if a better approach is to have a heuristic based
on measurement of softirq load. There is already code to measure irq load
(see update_irq_load_avg). Perhaps, Vincent would be willing to add the
softirq load in it as well if asked nicely ;)

That's much better because:
1. Any new softirqs added will also trigger this optimization.
2. Systems where these softirqs are quiet will not unnecessarily trigger it.
3. You can also include irq load so that things like IRQ storms also don't
cause RT issues (when there are better "uncompromised" idle CPU candidates).
4. A CONFIG option may not be needed at all if the optimization makes
sense for everything. Think of all the systems not running Android.
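Just to illustrate the direction (completely hypothetical sketch; the
threshold knob below does not exist and the exact signal would need
Vincent's input):

	/* hypothetical: skip CPUs whose PELT-tracked irq/softirq load is high */
	static bool cpu_busy_with_softirqs(int cpu)
	{
	#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
		/* sysctl_sched_rt_softirq_util is a made-up tunable */
		return READ_ONCE(cpu_rq(cpu)->avg_irq.util_avg) >
		       sysctl_sched_rt_softirq_util;
	#else
		return false;
	#endif
	}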

> /* map softirq index to softirq name. update 'softirq_to_name' in
> * kernel/softirq.c when adding a new softirq.
> @@ -617,6 +622,7 @@ extern void raise_softirq_irqoff(unsigned int nr);
> extern void raise_softirq(unsigned int nr);
>
> DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
> +DECLARE_PER_CPU(u32, active_softirqs);
>
> static inline struct task_struct *this_cpu_ksoftirqd(void)
> {
> diff --git a/init/Kconfig b/init/Kconfig
> index 532362fcfe31..3d1de6edcfa1 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1284,6 +1284,16 @@ config SCHED_AUTOGROUP
> desktop applications. Task group autogeneration is currently based
> upon task session.
>
> +config RT_SOFTIRQ_OPTIMIZATION

I much dislike this config option name because it does not self-explain what
the option is and it is too long. More so, any future such optimizations,
even if rare, will have to come up with another name causing more confusion.
Instead, CONFIG_RT_AVOID_SOFTIRQS seems cleaner, but I'll defer to scheduler
maintainers since they have to take the patch ultimately. Though I'll give my
tag for a rename / better name. More below:

> + bool "Improve RT scheduling during long softirq execution"
> + depends on SMP && !PREEMPT_RT
> + default n
> + help
> + Enable an optimization which tries to avoid placing RT tasks on CPUs
> + occupied by nonpreemptible tasks, such as a long softirq or CPUs
> + which may soon block preemptions, such as a CPU running a ksoftirq
> + thread which handles slow softirqs.
> +
> config SYSFS_DEPRECATED
> bool "Enable deprecated sysfs features to support old userspace tools"
> depends on SYSFS
> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> index 55f39c8f4203..3c628db807c8 100644
> --- a/kernel/sched/rt.c
> +++ b/kernel/sched/rt.c
> @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
> #ifdef CONFIG_SMP
> static int find_lowest_rq(struct task_struct *task);
>
> +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> +#define __use_softirq_opt 1

Why not just use IS_ENABLED(CONFIG_RT_SOFTIRQ_OPT..) instead of defining new
macros?

> + * Return whether the given cpu is currently non-preemptible
> + * while handling a potentially long softirq, or if the current
> + * task is likely to block preemptions soon because it is a
> + * ksoftirq thread that is handling slow softirq.
> + */
> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> + u32 softirqs = per_cpu(active_softirqs, cpu) |
> + __cpu_softirq_pending(cpu);
> + struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> + struct task_struct *curr;
> + struct rq *rq = cpu_rq(cpu);
> + int ret;
> +
> + rcu_read_lock();

Could you clarify what is the RCU read-side critical section protecting?

> + curr = READ_ONCE(rq->curr); /* unlocked access */
> + ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> + (curr == cpu_ksoftirqd ||
> + preempt_count() & SOFTIRQ_MASK);

If the goal is to avoid ksoftirqd when it is running an actual softirq,
doesn't ksoftirqd already run softirqs under local_bh_disable()? If so, then
SOFTIRQ_MASK should already be set. If ksoftirqd is in between softirqs (and
bh is enabled there), then scheduling the RT task on the CPU may preempt
ksoftirqd and give the RT task a chance to run anyway, right?.

> + rcu_read_unlock();
> + return ret;
> +}
> +#else
> +#define __use_softirq_opt 0

define not needed if using IS_ENABLED.

more comments later :)

thanks,

- Joel


> +static bool cpu_busy_with_softirqs(int cpu)
> +{
> + return false;
> +}
> +#endif /* CONFIG_RT_SOFTIRQ_OPTIMIZATION */
> +
> +static bool rt_task_fits_cpu(struct task_struct *p, int cpu)
> +{
> + return !cpu_busy_with_softirqs(cpu) && rt_task_fits_capacity(p, cpu);
> +}
> +
> static int
> select_task_rq_rt(struct task_struct *p, int cpu, int flags)
> {
> @@ -1637,22 +1675,24 @@ select_task_rq_rt(struct task_struct *p, int cpu, int flags)
> * This test is optimistic, if we get it wrong the load-balancer
> * will have to sort it out.
> *
> - * We take into account the capacity of the CPU to ensure it fits the
> - * requirement of the task - which is only important on heterogeneous
> - * systems like big.LITTLE.
> + * We use rt_task_fits_cpu() to evaluate if the CPU is busy with
> + * potentially long-running softirq work, as well as take into
> + * account the capacity of the CPU to ensure it fits the
> + * requirement of the task - which is only important on
> + * heterogeneous systems like big.LITTLE.
> */
> test = curr &&
> unlikely(rt_task(curr)) &&
> (curr->nr_cpus_allowed < 2 || curr->prio <= p->prio);
>
> - if (test || !rt_task_fits_capacity(p, cpu)) {
> + if (test || !rt_task_fits_cpu(p, cpu)) {
> int target = find_lowest_rq(p);
>
> /*
> * Bail out if we were forcing a migration to find a better
> * fitting CPU but our search failed.
> */
> - if (!test && target != -1 && !rt_task_fits_capacity(p, target))
> + if (!test && target != -1 && !rt_task_fits_cpu(p, target))
> goto out_unlock;
>
> /*
> @@ -1894,14 +1934,17 @@ static int find_lowest_rq(struct task_struct *task)
> return -1; /* No other targets possible */
>
> /*
> - * If we're on asym system ensure we consider the different capacities
> - * of the CPUs when searching for the lowest_mask.
> + * If we're using the softirq optimization or if we are
> + * on asym system, ensure we consider the softirq processing
> + * or different capacities of the CPUs when searching for the
> + * lowest_mask.
> */
> - if (static_branch_unlikely(&sched_asym_cpucapacity)) {
> + if (__use_softirq_opt ||

replace with IS_ENABLED.

> + static_branch_unlikely(&sched_asym_cpucapacity)) {
>
> ret = cpupri_find_fitness(&task_rq(task)->rd->cpupri,
> task, lowest_mask,
> - rt_task_fits_capacity);
> + rt_task_fits_cpu);
> } else {
>
> ret = cpupri_find(&task_rq(task)->rd->cpupri,
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index c8a6913c067d..35ee79dd8786 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -60,6 +60,13 @@ static struct softirq_action softirq_vec[NR_SOFTIRQS] __cacheline_aligned_in_smp
>
> DEFINE_PER_CPU(struct task_struct *, ksoftirqd);
>
> +/*
> + * active_softirqs -- per cpu, a mask of softirqs that are being handled,
> + * with the expectation that approximate answers are acceptable and therefore
> + * no synchronization.
> + */
> +DEFINE_PER_CPU(u32, active_softirqs);
> +
> const char * const softirq_to_name[NR_SOFTIRQS] = {
> "HI", "TIMER", "NET_TX", "NET_RX", "BLOCK", "IRQ_POLL",
> "TASKLET", "SCHED", "HRTIMER", "RCU"
> @@ -551,6 +558,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
> restart:
> /* Reset the pending bitmask before enabling irqs */
> set_softirq_pending(0);
> + __this_cpu_write(active_softirqs, pending);
>
> local_irq_enable();
>
> @@ -580,6 +588,7 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
> pending >>= softirq_bit;
> }
>
> + __this_cpu_write(active_softirqs, 0);
> if (!IS_ENABLED(CONFIG_PREEMPT_RT) &&
> __this_cpu_read(ksoftirqd) == current)
> rcu_softirq_qs();
> --
> 2.38.0.rc1.362.ged0d419d3c-goog
>

2022-10-22 19:14:54

by Joel Fernandes

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Thu, Oct 20, 2022 at 02:47:54PM +0200, Alexander Gordeev wrote:
> On Wed, Oct 19, 2022 at 03:09:15PM -0700, John Stultz wrote:
>
> Hi John,
>
> [...]
>
> > So I'll go ahead and simplify the check to just the LONG_SOFTIRQ_MASK
> > & (active | pending softirqs) check. This should avoid the need to
> > pull the cpu_rq(cpu)->curr value and simplify things.
>
> In my reading of your approach if you find a way to additionally
> indicate long softirqs being handled by the remote ksoftirqd, it
> would cover all obvious/not-corner cases.

How will that help? The long softirq executing inside ksoftirqd will disable
preemption and prevent any RT task from executing.

Did I miss something?

thanks,

- Joel

2022-10-23 07:55:54

by Alexander Gordeev

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Sat, Oct 22, 2022 at 06:34:37PM +0000, Joel Fernandes wrote:
> > In my reading of your approach if you find a way to additionally
> > indicate long softirqs being handled by the remote ksoftirqd, it
> > would cover all obvious/not-corner cases.
>
> How will that help? The long softirq executing inside ksoftirqd will disable
> preemption and prevent any RT task from executing.

Right. So the check to deem a remote CPU unfit would (logically) look like this:

(active | pending | ksoftirqd) & LONG_SOFTIRQ_MASK

> Did I miss something?

Or me :)

> thanks,
>
> - Joel
>

2022-11-15 07:13:35

by John Stultz

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Sun, Oct 23, 2022 at 12:45 AM Alexander Gordeev
<[email protected]> wrote:
>
> On Sat, Oct 22, 2022 at 06:34:37PM +0000, Joel Fernandes wrote:
> > > In my reading of your approach if you find a way to additionally
> > > indicate long softirqs being handled by the remote ksoftirqd, it
> > > would cover all obvious/not-corner cases.
> >
> > How will that help? The long softirq executing inside ksoftirqd will disable
> > preemption and prevent any RT task from executing.
>
> Right. So the check to deem a remote CPU unfit would (logically) look like this:
>
> (active | pending | ksoftirqd) & LONG_SOFTIRQ_MASK
>

Alexander,
Apologies for the late response here, some other work took priority for a bit.

Thanks again for the feedback. But I wanted to follow up on your
suggestion here, as I'm not quite sure I see why we need the separate
ksoftirqd bitmask here?

As run_ksoftirqd() basically looks at the pending set and calls
__do_softirq() which then moves the bits from the pending mask to
active mask while they are being run.

So (pending|active)&LONG_SOFTIRQ_MASK seems like it should be a
sufficient check regardless of whether the remote cpu is in softirq or
ksoftirqd, no?
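That is, with this patch applied, __do_softirq() roughly does:

	pending = local_softirq_pending();
	/* ... */
	set_softirq_pending(0);				/* bits leave the pending mask...   */
	__this_cpu_write(active_softirqs, pending);	/* ...and show up as active         */
	local_irq_enable();
	/* run the handlers for each bit set in pending */
	__this_cpu_write(active_softirqs, 0);

so whichever context is running the softirqs, the bits show up in one of
the two masks.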

thanks
-john

2022-11-15 13:29:16

by Alexander Gordeev

Subject: Re: [PATCH RFC v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Mon, Nov 14, 2022 at 11:08:36PM -0800, John Stultz wrote:

Hi John,
...
> > Right. So the check to deem a remote CPU unfit would (logically) look like this:
> >
> > (active | pending | ksoftirqd) & LONG_SOFTIRQ_MASK
...
> As run_ksoftirqd() basically looks at the pending set and calls
> __do_softirq() which then moves the bits from the pending mask to
> active mask while they are being run.
>
> So (pending|active)&LONG_SOFTIRQ_MASK seems like it should be a
> sufficient check regardless of if the remote cpu is in softirq or
> ksoftirqd, no?

I did not realize run_ksoftirqd()->__do_softirq() covers it.
Sorry for the noise.

Thanks!

> thanks
> -john

2022-11-15 22:32:49

by John Stultz

Subject: Re: [RFC PATCH v4 2/3] sched: Avoid placing RT threads on cores handling long softirqs

On Sat, Oct 22, 2022 at 11:28 AM Joel Fernandes <[email protected]> wrote:
> On Mon, Oct 03, 2022 at 11:20:32PM +0000, John Stultz wrote:
> > diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
> > index a749a8663841..e3a4add67e8c 100644
> > --- a/include/linux/interrupt.h
> > +++ b/include/linux/interrupt.h
> > @@ -582,6 +582,11 @@ enum
> > * _ IRQ_POLL: irq_poll_cpu_dead() migrates the queue
> > */
> > #define SOFTIRQ_HOTPLUG_SAFE_MASK (BIT(RCU_SOFTIRQ) | BIT(IRQ_POLL_SOFTIRQ))
> > +/* Softirq's where the handling might be long: */
> > +#define LONG_SOFTIRQ_MASK (BIT(NET_TX_SOFTIRQ) | \
> > + BIT(NET_RX_SOFTIRQ) | \
> > + BIT(BLOCK_SOFTIRQ) | \
> > + BIT(IRQ_POLL_SOFTIRQ))
>
> Instead of a mask, I wonder if a better approach is to have a heuristic based
> on measurement of softirq load. There is already code to measure irq load
> (see update_irq_load_avg). Perhaps, Vincent would be willing to add the
> softirq load in it as well if asked nicely ;)
>
> That's much better because:
> 1. Any new softirqs added will also trigger this optimization.
> 2. Systems where these softirqs are quiet will not unnecessarily trigger it.
> 3. You can also include irq load so that things like IRQ storms also don't
> cause RT issues (when there are better "uncompromised" idle CPU candidates).
> 4. A CONFIG option may not be needed at all if the optimization makes
> sense for everything. Think of all the systems not running Android.

Hey Joel,
Thanks again for the feedback, and sorry to take awhile to get back to you.

I'll have to think about this some more, but I'm hesitant about
switching to a load-based heuristic approach, mostly because responding
to "load" feels a bit fuzzy to me.
If we suddenly get a burst of softirqs preempting scheduling, we don't
want to have to wait for the load tracking to recognize this, as the
effect of the blocked audio processing will be immediate.

That's why this optimization just avoids scheduling rt tasks on cpus
that are running potentially long-running softirqs, as we can't be
sure in that instance we'll be able to run soon.


> > /* map softirq index to softirq name. update 'softirq_to_name' in
> > * kernel/softirq.c when adding a new softirq.
> > @@ -617,6 +622,7 @@ extern void raise_softirq_irqoff(unsigned int nr);
> > extern void raise_softirq(unsigned int nr);
> >
> > DECLARE_PER_CPU(struct task_struct *, ksoftirqd);
> > +DECLARE_PER_CPU(u32, active_softirqs);
> >
> > static inline struct task_struct *this_cpu_ksoftirqd(void)
> > {
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 532362fcfe31..3d1de6edcfa1 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -1284,6 +1284,16 @@ config SCHED_AUTOGROUP
> > desktop applications. Task group autogeneration is currently based
> > upon task session.
> >
> > +config RT_SOFTIRQ_OPTIMIZATION
>
> I much dislike this config option name because it does not self-explain what
> the option is and it is too long. More so, any future such optimizations,
> even if rare, will have to come up with another name causing more confusion.
> Instead, CONFIG_RT_AVOID_SOFTIRQS seems cleaner, but I'll defer to scheduler
> maintainers since they have to take the patch ultimately. Though I'll give my
> tag for a rename / better name. More below:

That's a fair point. RT_SOFTIRQ_AWARE_SCHED maybe?


> > diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
> > index 55f39c8f4203..3c628db807c8 100644
> > --- a/kernel/sched/rt.c
> > +++ b/kernel/sched/rt.c
> > @@ -1599,6 +1599,44 @@ static void yield_task_rt(struct rq *rq)
> > #ifdef CONFIG_SMP
> > static int find_lowest_rq(struct task_struct *task);
> >
> > +#ifdef CONFIG_RT_SOFTIRQ_OPTIMIZATION
> > +#define __use_softirq_opt 1
>
> Why not just use IS_ENABLED(CONFIG_RT_SOFTIRQ_OPT..) instead of defining new
> macros?

Mostly for readability. The lines where it's added are already fairly long, i.e.:
if (static_branch_unlikely(&sched_asym_cpucapacity)) {

So it seemed nicer to have the shorter macro. But I'm not strongly
opinionated on this.

> > + * Return whether the given cpu is currently non-preemptible
> > + * while handling a potentially long softirq, or if the current
> > + * task is likely to block preemptions soon because it is a
> > + * ksoftirq thread that is handling slow softirq.
> > + */
> > +static bool cpu_busy_with_softirqs(int cpu)
> > +{
> > + u32 softirqs = per_cpu(active_softirqs, cpu) |
> > + __cpu_softirq_pending(cpu);
> > + struct task_struct *cpu_ksoftirqd = per_cpu(ksoftirqd, cpu);
> > + struct task_struct *curr;
> > + struct rq *rq = cpu_rq(cpu);
> > + int ret;
> > +
> > + rcu_read_lock();
>
> Could you clarify what is the RCU read-side critical section protecting?
>
> > + curr = READ_ONCE(rq->curr); /* unlocked access */
> > + ret = (softirqs & LONG_SOFTIRQ_MASK) &&
> > + (curr == cpu_ksoftirqd ||
> > + preempt_count() & SOFTIRQ_MASK);
>
> If the goal is to avoid ksoftirqd when it is running an actual softirq,
> doesn't ksoftirqd already run softirqs under local_bh_disable()? If so, then
> SOFTIRQ_MASK should already be set. If ksoftirqd is in between softirqs (and
> bh is enabled there), then scheduling the RT task on the CPU may preempt
> ksoftirqd and give the RT task a chance to run anyway, right?.

Yeah. I'm refactoring/simplifying this for v5, so hopefully it will be
more straightforward.

Thanks again for the feedback!
-john