2024-06-13 18:17:20

by K Prateek Nayak

Subject: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

Hello everyone,

Before jumping into the issue, let me clarify the Cc list. Everyone has
been cc'ed on Patch 0 through Patch 3. Respective arch maintainers,
reviewers, and committers returned by scripts/get_maintainer.pl have
been cc'ed on the respective arch side changes. Scheduler and CPU Idle
maintainers and reviewers have been included for the entire series. If I
have missed anyone, please do add them. If you would like to be dropped
from the cc list, wholly or partially, for future iterations, please do
let me know.

As long as the first three patches are applied in order, the arch
specific enablement can be applied independently and out of order, since
the TIF_NOTIFY_IPI flag is not used until Patch 3, and Patch 2 preps the
complete tree to handle a break out of the TIF_POLLING_NRFLAG state on
the setting of either TIF_NOTIFY_IPI or TIF_NEED_RESCHED.

Quick changelog and addressing concerns from v1
===============================================

v1: https://lore.kernel.org/lkml/[email protected]/

v1..v2:

o Rebased the series on the latest tip:sched/core at commit c793a62823d1
("sched/core: Drop spinlocks on contention iff kernel is preemptible").
Fixed a conflict with commit edc8fc01f608 ("x86: Fix
CPUIDLE_FLAG_IRQ_ENABLE leaking timer reprogram") that touched
mwait_idle_with_hints() in arch/x86/include/asm/mwait.h.

o Dropped the ARM results since I could not get my hands on the ARM64
system I used in my last round of testing. If I do manage to get my
hands on it again, I'll rerun the experiments and share the results on
the thread. To test the case where TIF_NOTIFY_IPI is not enabled for a
particular architecture, I applied the series only up to Patch 3 and
tested it on my x86 machine with a WARN_ON_ONCE() in do_idle() to check
whether tif_notify_ipi() ever returns true, and then repeated the same
with Patch 4 applied (a sketch of this check follows the changelog
below).

o Updated benchmark results based on the latest base.

o Collected the Ack from Guo Ren for CSKY enablement.

o Dropped the RFC tag.

o Unfortunately, the series does not solve the issue highlighted by
Julia Lawall w.r.t. NUMA Balancing in [0], based on her testing of v1.
However, she also highlighted a possible regression last time around
where compiling a single file took much longer with the series, which I
could not reproduce on my end. For sanity, I reran the same experiment
this time around and again could not see any difference. Following are
the numbers for:

$ make clean
$ time make kernel/sched/core.o

--> tip:sched/core

-j1

real 0m32.734s
user 0m25.158s
sys 0m6.750s

-j256

real 0m7.181s
user 0m27.509s
sys 0m7.876s

--> tip:sched/core + TIF_NOTIFY_IPI

-j1

real 0m32.408s
user 0m24.826s
sys 0m6.767s

-j256

real 0m7.187s
user 0m27.556s
sys 0m7.602s

[0] https://lore.kernel.org/lkml/alpine.DEB.2.22.394.2310032059060.3220@hadrien/
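
For reference, the WARN_ON_ONCE() sanity check mentioned above was along
the following lines (a debug-only sketch on top of Patch 3, not part of
the series; exact placement within do_idle() is illustrative):

--
	/* Should only ever fire once an arch defines TIF_NOTIFY_IPI (Patch 4+) */
	WARN_ON_ONCE(tif_notify_ipi());
	current_clr_notify_ipi();
	flush_smp_call_function_queue();
--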

Individual patches have their own changelog to help with review.

With those details out of the way ...

Problem statement
=================

When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure the time taken
to perform a fixed number of smp_call_function_single() calls (with wait
set to 1), an increase in benchmark time was observed between v5.7 and
the upstream release v6.7-rc6 (the latest upstream kernel at the time
the issue was encountered). The issue persists on v6.10-rc1 as well.
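
For the unfamiliar, the measurement loop is conceptually along the
following lines (a simplified sketch, not the actual ipistorm source;
see [1] for that). With wait set to 1, each smp_call_function_single()
spins in csd_lock_wait() until the target CPU has run the callback, so
the total time is dominated by the IPI send/handle/ack round trip:

--
static void do_nothing_ipi(void *dummy)
{
}

static void measure_ipis(int target_cpu, unsigned long numipi)
{
	u64 t0 = ktime_get_ns();
	unsigned long i;

	for (i = 0; i < numipi; i++)
		smp_call_function_single(target_cpu, do_nothing_ipi,
					 NULL, 1 /* wait */);

	pr_info("%lu IPIs took %llu ns\n", numipi, ktime_get_ns() - t0);
}
--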

Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime.


Experiments
===========

Since the commit cannot be cleanly reverted on top of the current
tip:sched/core, the effects of the optimization were reverted by:

1. Removing the check for call_function_single_prep_ipi() in
send_call_function_single_ipi(). With this change,
send_call_function_single_ipi() always calls
arch_send_call_function_single_ipi().

2. Removing the call to flush_smp_call_function_queue() in do_idle(),
since, with (1.), every smp_call_function would unconditionally send
an IPI to an idle CPU in TIF_POLLING mode.

Following is the diff of the above described changes, henceforth
referred to as the "revert":

diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 31231925f1ec..735184d98c0f 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -332,11 +332,6 @@ static void do_idle(void)
 	 */
 	smp_mb__after_atomic();
 
-	/*
-	 * RCU relies on this call to be done outside of an RCU read-side
-	 * critical section.
-	 */
-	flush_smp_call_function_queue();
 	schedule_idle();
 
 	if (unlikely(klp_patch_pending(current)))
diff --git a/kernel/smp.c b/kernel/smp.c
index f085ebcdf9e7..2ff100c41885 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -111,11 +111,9 @@ void __init call_function_init(void)
 static __always_inline void
 send_call_function_single_ipi(int cpu)
 {
-	if (call_function_single_prep_ipi(cpu)) {
-		trace_ipi_send_cpu(cpu, _RET_IP_,
-				   generic_smp_call_function_single_interrupt);
-		arch_send_call_function_single_ipi(cpu);
-	}
+	trace_ipi_send_cpu(cpu, _RET_IP_,
+			   generic_smp_call_function_single_interrupt);
+	arch_send_call_function_single_ipi(cpu);
 }
 
 static __always_inline void
--

With the revert, the time taken to complete a fixed set of IPIs using
ipistorm improves significantly. Following are the numbers from a dual
socket 3rd Generation EPYC system (2 x 64C/128T) (boost on, C2 disabled)
running ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

(tip:sched/core was at commit c793a62823d1 ("sched/core: Drop spinlocks
on contention iff kernel is preemptible") for all the test data
presented below)

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                time [pct imp]
tip:sched/core                         1.00 [00.00]
tip:sched/core + revert                0.41 [60.00]

Although the revert improves ipistorm performance, it also regresses
tbench and netperf, supporting the validity of the optimization.
Following are the tbench numbers from the same machine comparing vanilla
tip:sched/core and the revert applied on top:

==================================================================
Test : tbench
Units : Normalized throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients:    tip (CV)       revert (CV)     [pct imp]
    1       1.00 (0.60)    0.90 (0.08)     [-10%]
    2       1.00 (0.27)    0.90 (0.76)     [-10%]
    4       1.00 (0.42)    0.90 (0.52)     [-10%]
    8       1.00 (0.78)    0.91 (0.54)     [ -9%]
   16       1.00 (1.70)    0.92 (0.39)     [ -8%]
   32       1.00 (1.73)    0.91 (1.39)     [ -9%]
   64       1.00 (1.09)    0.92 (1.60)     [ -8%]
  128       1.00 (1.45)    0.95 (0.52)     [ -5%]
  256       1.00 (0.96)    1.01 (0.28)     [  1%]
  512       1.00 (0.32)    1.01 (0.20)     [  1%]
 1024       1.00 (0.06)    1.01 (0.03)     [  1%]

Since a simple revert is not a viable solution, we delved deeper into
the changes in the execution path introduced by the
call_function_single_prep_ipi() check.


Effects of call_function_single_prep_ipi()
==========================================

To pull a TIF_POLLING thread out of idle to process an IPI, the sender
sets the TIF_NEED_RESCHED bit in the idle task's thread_info in
call_function_single_prep_ipi() and avoids sending an actual IPI to the
target. As a result, the scheduler expects a task to be enqueued when
exiting the idle path. This is not the case with non-polling idle
states, where the idle CPU exits the non-polling idle state to process
the interrupt and, since need_resched() returns false, soon goes back to
idle again.
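
For reference, this is roughly the current set_nr_if_polling() from
kernel/sched/core.c that implements the above (reconstructed from the
Patch 3 context; comments mine):

--
static bool set_nr_if_polling(struct task_struct *p)
{
	struct thread_info *ti = task_thread_info(p);
	typeof(ti->flags) val = READ_ONCE(ti->flags);

	do {
		if (!(val & _TIF_POLLING_NRFLAG))
			return false;	/* Not polling: send a real IPI */
		if (val & _TIF_NEED_RESCHED)
			return true;	/* CPU is already headed out of idle */
	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));

	/* The write to ti->flags is what pulls the CPU out of monitor/mwait */
	return true;
}
--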

When the TIF_NEED_RESCHED flag is set, do_idle() will call
schedule_idle(), a large part of which runs with local IRQs disabled. In
the case of ipistorm, when measuring IPI throughput, this long
IRQ-disabled section delays the processing of IPIs. Further auditing
revealed that, in the absence of any runnable tasks,
pick_next_task_fair(), which is called from the pick_next_task() fast
path, will always call newidle_balance() in this scenario, further
increasing the time spent in the IRQ-disabled section.

Following is the crude visualization of the problem with relevant
functions expanded:
--
CPU0                                                CPU1
====                                                ====
                                                    do_idle() {
                                                      __current_set_polling();
                                                      ...
                                                      monitor(addr);
                                                      if (!need_resched())
                                                        mwait() {
                                                          /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {          ...
  ...                                                     ...
  set_nr_if_polling(CPU1) {                               ...
    /* Realizes CPU1 is polling */                        ...
    try_cmpxchg(addr,                                     ...
                &val,                                     ...
                val | _TIF_NEED_RESCHED);                 ...
  } /* Does not send an IPI */                            ...
  ...                                                   } /* mwait exit due to write at addr */
  csd_lock_wait() {                                 }
    /* Waiting */                                   preempt_set_need_resched();
    ...                                             __current_clr_polling();
    ...                                             flush_smp_call_function_queue() {
    ...                                               func();
  } /* End of wait */                               }
}                                                   schedule_idle() {
                                                      ...
                                                      local_irq_disable();
smp_call_function_single(CPU1, func, wait = 1) {      ...
  ...                                                 ...
  arch_send_call_function_single_ipi(CPU1);           ...
                  \                                   ...
                   \                                newidle_balance() {
                    \                                 ...
                 /* Delay */                          ...
                     \                              }
                      \                               ...
                       \--------------------------> local_irq_enable();
                                                     /* Processes the IPI */
--


Skipping newidle_balance()
==========================

In an earlier attempt to solve the challenge of the long IRQ-disabled
section, newidle_balance() was skipped when a CPU waking up from idle
was found to have no runnable tasks and was transitioning back to
idle [2]. Tim [3] and David [4] pointed out that newidle_balance() may
still be beneficial for CPUs that are idling with the tick enabled,
where newidle_balance() has the opportunity to pull tasks onto the idle
CPU.

Vincent [5] pointed out a case where the idle load kick will fail to
run on an idle CPU since the IPI handler launching the ILB will check
for need_resched(). In such cases, the idle CPU relies on
newidle_balance() to pull tasks towards itself.

Using an alternate flag instead of NEED_RESCHED to indicate a pending
IPI was suggested as the correct approach to solve this problem on the
same thread.


Proposed solution: TIF_NOTIFY_IPI
=================================

Instead of reusing the TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU
out of idle, TIF_NOTIFY_IPI is a newly introduced flag that
call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
indicate a pending IPI, which the idle CPU promises to process soon.

On architectures that do not support the TIF_NOTIFY_IPI flag (Patches
4-14 of this series add the arch-side enablement),
call_function_single_prep_ipi() will fall back to setting the
TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.
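
Concretely, Patch 3 selects the wake-up flag at compile time so that
unconverted architectures retain the current behavior:

--
/* From Patch 3 (kernel/sched/core.c) */
#ifdef _TIF_NOTIFY_IPI
#define _TIF_WAKE_FLAG _TIF_NOTIFY_IPI
#else
#define _TIF_WAKE_FLAG _TIF_NEED_RESCHED
#endif
--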

Since the pending IPI handlers are processed before the call to
schedule_idle() in do_idle(), schedule_idle() will only be called if an
IPI handler has woken up / migrated a new task onto the idle CPU and has
set the TIF_NEED_RESCHED bit to indicate the same. This avoids running
into the long IRQ-disabled section in schedule_idle() unnecessarily, and
any need_resched() check within a call function will accurately report
whether a task is waiting for CPU time on the CPU handling the IPI.
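
The resulting idle-exit path in do_idle() then looks as follows
(abridged from the Patch 2 and Patch 3 diffs):

--
	current_clr_notify_ipi();
	flush_smp_call_function_queue();

	/* Skip schedule_idle() unless an IPI handler queued a task */
	if (need_resched())
		schedule_idle();
--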

Following is the crude visualization of how the situation changes with
the newly introduced TIF_NOTIFY_IPI flag:
--
CPU0                                                CPU1
====                                                ====
                                                    do_idle() {
                                                      __current_set_polling();
                                                      ...
                                                      monitor(addr);
                                                      if (!need_resched_or_ipi())
                                                        mwait() {
                                                          /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {          ...
  ...                                                     ...
  set_nr_if_polling(CPU1) {                               ...
    /* Realizes CPU1 is polling */                        ...
    try_cmpxchg(addr,                                     ...
                &val,                                     ...
                val | _TIF_NOTIFY_IPI);                   ...
  } /* Does not send an IPI */                            ...
  ...                                                   } /* mwait exit due to write at addr */
  csd_lock_wait() {                                 ...
    /* Waiting */                                   preempt_fold_need_resched(); /* fold if NEED_RESCHED */
    ...                                             __current_clr_polling();
    ...                                             flush_smp_call_function_queue() {
    ...                                               func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
  } /* End of wait */                               }
}                                                   if (need_resched()) {
                                                      schedule_idle();
smp_call_function_single(CPU1, func, wait = 1) {    }
  ...                                               ... /* IRQs remain enabled */
  arch_send_call_function_single_ipi(CPU1); ------> /* Processes the IPI */
--

Results
=======

With TIF_NOTIFY_IPI, the time taken to complete a fixed set of IPIs
using ipistorm improves drastically and is close to the numbers seen
with the revert. Following are the numbers from the same dual socket 3rd
Generation EPYC system (2 x 64C/128T) (boost on, C2 disabled) running
ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                time [pct imp]
tip:sched/core                         1.00 [baseline]
tip:sched/core + revert                0.40 [60.26]
tip:sched/core + TIF_NOTIFY_IPI        0.46 [54.88]

netperf and tbench results with the patch match the results on tip on
the dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
hackbench, stream, and schbench were also tested, with results from the
patched kernel matching those of tip.


Additional benefits
===================

In nohz_csd_func(), the need_resched() check returns true when an idle
CPU in TIF_POLLING mode is woken up to do an idle load balance. This
leads to the idle load balance bailing out early today, since
send_call_function_single_ipi() ends up setting the TIF_NEED_RESCHED
flag to pull the CPU out of idle, and the flag is not cleared until
__schedule() is called much later in the call path.
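
For reference, the check in question, abridged from nohz_csd_func() in
kernel/sched/core.c (elisions and comments mine):

--
static void nohz_csd_func(void *info)
{
	struct rq *rq = info;
	int cpu = cpu_of(rq);
	...
	rq->idle_balance = idle_cpu(cpu);
	/* An overloaded NEED_RESCHED from the wakeup IPI fails this check */
	if (rq->idle_balance && !need_resched()) {
		...
		raise_softirq_irqoff(SCHED_SOFTIRQ);	/* run the ILB */
	}
}
--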

With TIF_NOTIFY_IPI, this is no longer the case since TIF_NEED_RESCHED
is only set when there is a genuine need to call schedule(), and is no
longer overloaded to signal a pending IPI.

Links
=====

[1] https://github.com/antonblanchard/ipistorm
[2] https://lore.kernel.org/lkml/[email protected]/
[3] https://lore.kernel.org/lkml/[email protected]/
[4] https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/
[5] https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/

This series is based on tip:sched/core at commit c793a62823d1
("sched/core: Drop spinlocks on contention iff kernel is preemptible")
--
Gautham R. Shenoy (4):
thread_info: Add helpers to test and clear TIF_NOTIFY_IPI
sched: Define a need_resched_or_ipi() helper and use it treewide
sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING
mode of pending IPI
x86/thread_info: Introduce TIF_NOTIFY_IPI flag

K Prateek Nayak (10):
arm/thread_info: Introduce TIF_NOTIFY_IPI flag
alpha/thread_info: Introduce TIF_NOTIFY_IPI flag
openrisc/thread_info: Introduce TIF_NOTIFY_IPI flag
powerpc/thread_info: Introduce TIF_NOTIFY_IPI flag
sh/thread_info: Introduce TIF_NOTIFY_IPI flag
sparc/thread_info: Introduce TIF_NOTIFY_IPI flag
csky/thread_info: Introduce TIF_NOTIFY_IPI flag
parisc/thread_info: Introduce TIF_NOTIFY_IPI flag
nios2/thread_info: Introduce TIF_NOTIFY_IPI flag
microblaze/thread_info: Introduce TIF_NOTIFY_IPI flag
--
Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Andreas Larsson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Benjamin Gray <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Xin Li <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Leonardo Bras <[email protected]>
Cc: Imran Khan <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: Julia Lawall <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
--
arch/alpha/include/asm/thread_info.h | 2 ++
arch/arm/include/asm/thread_info.h | 3 ++
arch/csky/include/asm/thread_info.h | 2 ++
arch/microblaze/include/asm/thread_info.h | 2 ++
arch/nios2/include/asm/thread_info.h | 2 ++
arch/openrisc/include/asm/thread_info.h | 2 ++
arch/parisc/include/asm/thread_info.h | 2 ++
arch/powerpc/include/asm/thread_info.h | 2 ++
arch/sh/include/asm/thread_info.h | 2 ++
arch/sparc/include/asm/thread_info_32.h | 2 ++
arch/sparc/include/asm/thread_info_64.h | 2 ++
arch/x86/include/asm/mwait.h | 2 +-
arch/x86/include/asm/thread_info.h | 2 ++
arch/x86/kernel/process.c | 2 +-
drivers/cpuidle/cpuidle-powernv.c | 2 +-
drivers/cpuidle/cpuidle-pseries.c | 2 +-
drivers/cpuidle/poll_state.c | 2 +-
include/linux/sched.h | 5 +++
include/linux/sched/idle.h | 12 +++----
include/linux/thread_info.h | 43 +++++++++++++++++++++++
kernel/sched/core.c | 41 ++++++++++++++++-----
kernel/sched/idle.c | 23 ++++++++----
22 files changed, 133 insertions(+), 26 deletions(-)

--
2.34.1



2024-06-13 18:17:49

by K Prateek Nayak

Subject: [PATCH v2 01/14] thread_info: Add helpers to test and clear TIF_NOTIFY_IPI

From: "Gautham R. Shenoy" <[email protected]>

Introduce the notion of a TIF_NOTIFY_IPI flag. When a processor in
TIF_POLLING mode needs to process an IPI, the sender sets the
NEED_RESCHED bit in the idle task's thread_info to pull the target out
of idle and avoids sending an interrupt to the idle CPU. When
NEED_RESCHED is set, the scheduler assumes that a new task has been
queued on the idle CPU and calls schedule_idle(); however, an IPI on an
idle CPU will not necessarily end up waking a task on the said CPU. To
avoid spurious calls to schedule_idle() under that assumption,
TIF_NOTIFY_IPI will be used to pull a TIF_POLLING CPU out of idle.

Since the IPI handlers are processed before the call to schedule_idle(),
schedule_idle() will be called only if one of the handlers has woken up
a new task on the CPU and has set NEED_RESCHED.

Add the tif_notify_ipi() and current_clr_notify_ipi() helpers to test if
TIF_NOTIFY_IPI is set in the current task's thread_info, and to clear
it, respectively. These interfaces will be used in subsequent patches as
the TIF_NOTIFY_IPI notion is integrated into the scheduler and the idle
path.

[ prateek: Split the changes into a separate patch, add commit log ]

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Andreas Larsson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Benjamin Gray <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Xin Li <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Leonardo Bras <[email protected]>
Cc: Imran Khan <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: Julia Lawall <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
include/linux/thread_info.h | 43 +++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)

diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 9ea0b28068f4..1e10dd8c0227 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -195,6 +195,49 @@ static __always_inline bool tif_need_resched(void)
 
 #endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
 
+#ifdef TIF_NOTIFY_IPI
+
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
+
+static __always_inline bool tif_notify_ipi(void)
+{
+	return arch_test_bit(TIF_NOTIFY_IPI,
+			     (unsigned long *)(&current_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+	arch_clear_bit(TIF_NOTIFY_IPI,
+		       (unsigned long *)(&current_thread_info()->flags));
+}
+
+#else
+
+static __always_inline bool tif_notify_ipi(void)
+{
+	return test_bit(TIF_NOTIFY_IPI,
+			(unsigned long *)(&current_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+	clear_bit(TIF_NOTIFY_IPI,
+		  (unsigned long *)(&current_thread_info()->flags));
+}
+
+#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
+
+#else /* !TIF_NOTIFY_IPI */
+
+static __always_inline bool tif_notify_ipi(void)
+{
+	return false;
+}
+
+static __always_inline void current_clr_notify_ipi(void) { }
+
+#endif /* TIF_NOTIFY_IPI */
+
 #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
 static inline int arch_within_stack_frames(const void * const stack,
 					   const void * const stackend,
--
2.34.1


2024-06-13 18:18:39

by K Prateek Nayak

Subject: [PATCH v2 02/14] sched: Define a need_resched_or_ipi() helper and use it treewide

From: "Gautham R. Shenoy" <[email protected]>

Currently, TIF_NEED_RESCHED is being overloaded to wake up an idle CPU
in TIF_POLLING mode so that it services an IPI even when there are no
new tasks being woken up on the said CPU.

In preparation for a proper fix, introduce a new helper,
"need_resched_or_ipi()", which is intended to return true if either the
TIF_NEED_RESCHED flag or the TIF_NOTIFY_IPI flag is set. Use this helper
function in place of need_resched() in idle loops where
TIF_POLLING_NRFLAG is set.

To preserve bisectability and avoid unbreakable idle loops, all the
need_resched() checks within TIF_POLLING_NRFLAG sections have been
replaced tree-wide with the need_resched_or_ipi() check.

[ prateek: Replaced some of the missed out occurrences of
need_resched() within TIF_POLLING sections with
need_resched_or_ipi() ]

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Andreas Larsson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Benjamin Gray <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Xin Li <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Leonardo Bras <[email protected]>
Cc: Imran Khan <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: Julia Lawall <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o Fixed a conflict with commit edc8fc01f608 ("x86: Fix
CPUIDLE_FLAG_IRQ_ENABLE leaking timer reprogram") that touched
mwait_idle_with_hints() in arch/x86/include/asm/mwait.h
---
arch/x86/include/asm/mwait.h | 2 +-
arch/x86/kernel/process.c | 2 +-
drivers/cpuidle/cpuidle-powernv.c | 2 +-
drivers/cpuidle/cpuidle-pseries.c | 2 +-
drivers/cpuidle/poll_state.c | 2 +-
include/linux/sched.h | 5 +++++
include/linux/sched/idle.h | 4 ++--
kernel/sched/idle.c | 7 ++++---
8 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 920426d691ce..3fa6f0bbd74f 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -125,7 +125,7 @@ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned lo
 
 		__monitor((void *)&current_thread_info()->flags, 0, 0);
 
-		if (!need_resched()) {
+		if (!need_resched_or_ipi()) {
 			if (ecx & 1) {
 				__mwait(eax, ecx);
 			} else {
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b8441147eb5e..dd73cd6f735c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -901,7 +901,7 @@ static __cpuidle void mwait_idle(void)
 		}
 
 		__monitor((void *)&current_thread_info()->flags, 0, 0);
-		if (!need_resched()) {
+		if (!need_resched_or_ipi()) {
 			__sti_mwait(0, 0);
 			raw_local_irq_disable();
 		}
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 9ebedd972df0..77c3bb371f56 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -79,7 +79,7 @@ static int snooze_loop(struct cpuidle_device *dev,
 	dev->poll_time_limit = false;
 	ppc64_runlatch_off();
 	HMT_very_low();
-	while (!need_resched()) {
+	while (!need_resched_or_ipi()) {
 		if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
 			/*
 			 * Task has not woken up but we are exiting the polling
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 14db9b7d985d..4f2b490f8b73 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -46,7 +46,7 @@ int snooze_loop(struct cpuidle_device *dev, struct cpuidle_driver *drv,
 	snooze_exit_time = get_tb() + snooze_timeout;
 	dev->poll_time_limit = false;
 
-	while (!need_resched()) {
+	while (!need_resched_or_ipi()) {
 		HMT_low();
 		HMT_very_low();
 		if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
index 9b6d90a72601..225f37897e45 100644
--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -26,7 +26,7 @@ static int __cpuidle poll_idle(struct cpuidle_device *dev,
 
 		limit = cpuidle_poll_time(drv, dev);
 
-		while (!need_resched()) {
+		while (!need_resched_or_ipi()) {
 			cpu_relax();
 			if (loop_count++ < POLL_IDLE_RELAX_COUNT)
 				continue;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 90691d99027e..e52cdd1298bf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2069,6 +2069,11 @@ static __always_inline bool need_resched(void)
 	return unlikely(tif_need_resched());
 }
 
+static __always_inline bool need_resched_or_ipi(void)
+{
+	return unlikely(tif_need_resched() || tif_notify_ipi());
+}
+
 /*
  * Wrappers for p->thread_info->cpu access. No-op on UP.
  */
diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index e670ac282333..497518b84e8d 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -63,7 +63,7 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
 	 */
 	smp_mb__after_atomic();
 
-	return unlikely(tif_need_resched());
+	return unlikely(need_resched_or_ipi());
 }
 
 static __always_inline bool __must_check current_clr_polling_and_test(void)
@@ -76,7 +76,7 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
 	 */
 	smp_mb__after_atomic();
 
-	return unlikely(tif_need_resched());
+	return unlikely(need_resched_or_ipi());
 }
 
 #else
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 6e78d071beb5..7de94df5d477 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -57,7 +57,7 @@ static noinline int __cpuidle cpu_idle_poll(void)
 	ct_cpuidle_enter();
 
 	raw_local_irq_enable();
-	while (!tif_need_resched() &&
+	while (!need_resched_or_ipi() &&
 	       (cpu_idle_force_poll || tick_check_broadcast_expired()))
 		cpu_relax();
 	raw_local_irq_disable();
@@ -174,7 +174,7 @@ static void cpuidle_idle_call(void)
 	 * Check if the idle task must be rescheduled. If it is the
 	 * case, exit the function after re-enabling the local IRQ.
 	 */
-	if (need_resched()) {
+	if (need_resched_or_ipi()) {
 		local_irq_enable();
 		return;
 	}
@@ -270,7 +270,7 @@ static void do_idle(void)
 	__current_set_polling();
 	tick_nohz_idle_enter();
 
-	while (!need_resched()) {
+	while (!need_resched_or_ipi()) {
 		rmb();
 
 		/*
@@ -350,6 +350,7 @@ static void do_idle(void)
 	 * RCU relies on this call to be done outside of an RCU read-side
 	 * critical section.
 	 */
+	current_clr_notify_ipi();
 	flush_smp_call_function_queue();
 	schedule_idle();
 
--
2.34.1


2024-06-13 18:19:17

by K Prateek Nayak

Subject: [PATCH v2 03/14] sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of pending IPI

From: "Gautham R. Shenoy" <[email protected]>

Problem statement
=================

When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure the time taken
to perform a fixed number of smp_call_function_single() calls (with wait
set to 1), an increase in benchmark time was observed between v5.7 and
the upstream kernel (v6.7-rc6).

Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime. Reverting the optimization introduced by the above commit fixed
the regression in ipistorm; however, benchmarks like tbench and netperf
regressed with the revert, supporting the validity of the optimization.

Following are the benchmark results on top of tip:sched/core with the
optimization reverted on a dual socket 3rd Generation AMD EPYC system
(2 x 64C/128T) running with boost enabled and C2 disabled:

tip:sched/core was at commit c793a62823d1 ("sched/core: Drop spinlocks
on contention iff kernel is preemptible") at the time of testing.

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                time [pct imp]
tip:sched/core                         1.00 [baseline]
tip:sched/core + revert                0.41 [60.00]

==================================================================
Test : tbench
Units : Normalized throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients:    tip (CV)       revert (CV)     [pct imp]
    1       1.00 (0.60)    0.90 (0.08)     [-10%]
    2       1.00 (0.27)    0.90 (0.76)     [-10%]
    4       1.00 (0.42)    0.90 (0.52)     [-10%]
    8       1.00 (0.78)    0.91 (0.54)     [ -9%]
   16       1.00 (1.70)    0.92 (0.39)     [ -8%]
   32       1.00 (1.73)    0.91 (1.39)     [ -9%]
   64       1.00 (1.09)    0.92 (1.60)     [ -8%]
  128       1.00 (1.45)    0.95 (0.52)     [ -5%]
  256       1.00 (0.96)    1.01 (0.28)     [  1%]
  512       1.00 (0.32)    1.01 (0.20)     [  1%]
 1024       1.00 (0.06)    1.01 (0.03)     [  1%]

Since a simple revert is not a viable solution, the changes in the code
path of call_function_single_prep_ipi(), with and without the
optimization, were audited to better understand the effect of the
commit.

Effects of call_function_single_prep_ipi()
==========================================

To pull a TIF_POLLING thread out of idle to process an IPI, the sender
sets the TIF_NEED_RESCHED bit in the idle task's thread_info in
call_function_single_prep_ipi() and avoids sending an actual IPI to the
target. As a result, the scheduler expects a task to be enqueued when
exiting the idle path. This is not the case with non-polling idle
states, where the idle CPU exits the non-polling idle state to process
the interrupt and, since need_resched() returns false, soon goes back to
idle again.

When the TIF_NEED_RESCHED flag is set, do_idle() will call
schedule_idle(), a large part of which runs with local IRQs disabled. In
the case of ipistorm, when measuring IPI throughput, this long
IRQ-disabled section delays the processing of IPIs. Further auditing
revealed that, in the absence of any runnable tasks,
pick_next_task_fair(), which is called from the pick_next_task() fast
path, will always call newidle_balance() in this scenario, further
increasing the time spent in the IRQ-disabled section.

Following is the crude visualization of the problem with relevant
functions expanded:
--
CPU0                                                CPU1
====                                                ====
                                                    do_idle() {
                                                      __current_set_polling();
                                                      ...
                                                      monitor(addr);
                                                      if (!need_resched())
                                                        mwait() {
                                                          /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {          ...
  ...                                                     ...
  set_nr_if_polling(CPU1) {                               ...
    /* Realizes CPU1 is polling */                        ...
    try_cmpxchg(addr,                                     ...
                &val,                                     ...
                val | _TIF_NEED_RESCHED);                 ...
  } /* Does not send an IPI */                            ...
  ...                                                   } /* mwait exit due to write at addr */
  csd_lock_wait() {                                 }
    /* Waiting */                                   preempt_set_need_resched();
    ...                                             __current_clr_polling();
    ...                                             flush_smp_call_function_queue() {
    ...                                               func();
  } /* End of wait */                               }
}                                                   schedule_idle() {
                                                      ...
                                                      local_irq_disable();
smp_call_function_single(CPU1, func, wait = 1) {      ...
  ...                                                 ...
  arch_send_call_function_single_ipi(CPU1);           ...
                  \                                   ...
                   \                                newidle_balance() {
                    \                                 ...
                 /* Delay */                          ...
                     \                              }
                      \                               ...
                       \--------------------------> local_irq_enable();
                                                     /* Processes the IPI */
--

Skipping newidle_balance()
==========================

In an earlier attempt to solve the challenge of the long IRQ-disabled
section, newidle_balance() was skipped when a CPU waking up from idle
was found to have no runnable tasks and was transitioning back to
idle [2]. Tim [3] and David [4] pointed out that newidle_balance() may
still be beneficial for CPUs that are idling with the tick enabled,
where newidle_balance() has the opportunity to pull tasks onto the idle
CPU.

Vincent [5] pointed out a case where the idle load kick will fail to
run on an idle CPU since the IPI handler launching the ILB will check
for need_resched(). In such cases, the idle CPU relies on
newidle_balance() to pull tasks towards itself.

Using an alternate flag instead of NEED_RESCHED to indicate a pending
IPI was suggested as the correct approach to solve this problem on the
same thread.

Proposed solution: TIF_NOTIFY_IPI
=================================

Instead of reusing the TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU
out of idle, TIF_NOTIFY_IPI is a newly introduced flag that
call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
indicate a pending IPI, which the idle CPU promises to process soon.

On architectures that do not support the TIF_NOTIFY_IPI flag,
call_function_single_prep_ipi() will fall back to setting the
TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.

Since the pending IPI handlers are processed before the call to
schedule_idle() in do_idle(), schedule_idle() will only be called if an
IPI handler has woken up / migrated a new task onto the idle CPU and has
set the TIF_NEED_RESCHED bit to indicate the same. This avoids running
into the long IRQ-disabled section in schedule_idle() unnecessarily, and
any need_resched() check within a call function will accurately report
whether a task is waiting for CPU time on the CPU handling the IPI.

Following is the crude visualization of how the situation changes with
the newly introduced TIF_NOTIFY_IPI flag:
--
CPU0                                                CPU1
====                                                ====
                                                    do_idle() {
                                                      __current_set_polling();
                                                      ...
                                                      monitor(addr);
                                                      if (!need_resched_or_ipi())
                                                        mwait() {
                                                          /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {          ...
  ...                                                     ...
  set_nr_if_polling(CPU1) {                               ...
    /* Realizes CPU1 is polling */                        ...
    try_cmpxchg(addr,                                     ...
                &val,                                     ...
                val | _TIF_NOTIFY_IPI);                   ...
  } /* Does not send an IPI */                            ...
  ...                                                   } /* mwait exit due to write at addr */
  csd_lock_wait() {                                 ...
    /* Waiting */                                   preempt_fold_need_resched(); /* fold if NEED_RESCHED */
    ...                                             __current_clr_polling();
    ...                                             flush_smp_call_function_queue() {
    ...                                               func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
  } /* End of wait */                               }
}                                                   if (need_resched()) {
                                                      schedule_idle();
smp_call_function_single(CPU1, func, wait = 1) {    }
  ...                                               ... /* IRQs remain enabled */
  arch_send_call_function_single_ipi(CPU1); ------> /* Processes the IPI */
--

Results
=======

With TIF_NOTIFY_IPI, the time taken to complete a fixed set of IPIs
using ipistorm improves drastically. Following are the numbers from the
same dual socket 3rd Generation EPYC system (2 x 64C/128T) (boost on,
C2 disabled) running ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                time [pct imp]
tip:sched/core                         1.00 [baseline]
tip:sched/core + revert                0.40 [60.26]
tip:sched/core + TIF_NOTIFY_IPI        0.46 [54.88]

netperf and tbench results with the patch match the results on tip on
the dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
hackbench, stream, and schbench were also tested, with results from the
patched kernel matching those of tip.

Additional benefits
===================

In nohz_csd_func(), the need_resched() check returns true when an idle
CPU in TIF_POLLING mode is woken up to do an idle load balance. This
leads to the idle load balance bailing out early today, since
send_call_function_single_ipi() ends up setting the TIF_NEED_RESCHED
flag to pull the CPU out of idle, and the flag is not cleared until
__schedule() is called much later in the call path.

With TIF_NOTIFY_IPI, this is no longer the case since TIF_NEED_RESCHED
is only set when there is a genuine need to call schedule(), and is no
longer overloaded to signal a pending IPI.

[ prateek: Split the changes into a separate patch, added the
TIF_NEED_RESCHED optimization in notify_ipi_if_polling(), the
TIF_WAKE_FLAG macro, and the commit log ]

Link: https://github.com/antonblanchard/ipistorm [1]
Link: https://lore.kernel.org/lkml/[email protected]/ [2]
Link: https://lore.kernel.org/lkml/[email protected]/ [3]
Link: https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/ [4]
Link: https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/ [5]
Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Andreas Larsson <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Benjamin Gray <[email protected]>
Cc: Frederic Weisbecker <[email protected]>
Cc: Xin Li <[email protected]>
Cc: Kees Cook <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Bjorn Helgaas <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Leonardo Bras <[email protected]>
Cc: Imran Khan <[email protected]>
Cc: "Paul E. McKenney" <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: Julia Lawall <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o Updated benchmark numbers.
---
include/linux/sched/idle.h | 8 ++++----
kernel/sched/core.c | 41 ++++++++++++++++++++++++++++++--------
kernel/sched/idle.c | 16 +++++++++++----
3 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index 497518b84e8d..4757a6ab5c2c 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -58,8 +58,8 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
 	__current_set_polling();
 
 	/*
-	 * Polling state must be visible before we test NEED_RESCHED,
-	 * paired by resched_curr()
+	 * Polling state must be visible before we test NEED_RESCHED or
+	 * NOTIFY_IPI, paired by resched_curr() or notify_ipi_if_polling()
 	 */
 	smp_mb__after_atomic();
 
@@ -71,8 +71,8 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
 	__current_clr_polling();
 
 	/*
-	 * Polling state must be visible before we test NEED_RESCHED,
-	 * paired by resched_curr()
+	 * Polling state must be visible before we test NEED_RESCHED or
+	 * NOTIFY_IPI, paired by resched_curr() or notify_ipi_if_polling()
 	 */
 	smp_mb__after_atomic();
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0935f9d4bb7b..bb01b063320b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -911,12 +911,30 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
 }
 
 /*
- * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
+ * Certain architectures that support TIF_POLLING_NRFLAG may not support
+ * TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of a pending
+ * IPI. On such architectures, set TIF_NEED_RESCHED instead to wake the
+ * idle CPU and process the pending IPI.
+ */
+#ifdef _TIF_NOTIFY_IPI
+#define _TIF_WAKE_FLAG _TIF_NOTIFY_IPI
+#else
+#define _TIF_WAKE_FLAG _TIF_NEED_RESCHED
+#endif
+
+/*
+ * Atomically set TIF_WAKE_FLAG when TIF_POLLING_NRFLAG is set.
+ *
+ * On architectures that define TIF_NOTIFY_IPI, the same is set in the
+ * idle task's thread_info to pull the CPU out of idle and process
+ * the pending interrupt. On architectures that don't support
+ * TIF_NOTIFY_IPI, TIF_NEED_RESCHED is set instead to notify the
+ * pending IPI.
  *
- * If this returns true, then the idle task promises to call
- * sched_ttwu_pending() and reschedule soon.
+ * If this returns true, then the idle task promises to process the
+ * call function soon.
  */
-static bool set_nr_if_polling(struct task_struct *p)
+static bool notify_ipi_if_polling(struct task_struct *p)
 {
 	struct thread_info *ti = task_thread_info(p);
 	typeof(ti->flags) val = READ_ONCE(ti->flags);
@@ -924,9 +942,16 @@ static bool set_nr_if_polling(struct task_struct *p)
 	do {
 		if (!(val & _TIF_POLLING_NRFLAG))
 			return false;
-		if (val & _TIF_NEED_RESCHED)
+		/*
+		 * If TIF_NEED_RESCHED flag is set in addition to
+		 * TIF_POLLING_NRFLAG, the CPU will soon fall out of
+		 * idle. Since flush_smp_call_function_queue() is called
+		 * soon after the idle exit, setting TIF_WAKE_FLAG is
+		 * not necessary.
+		 */
+		if (val & (_TIF_NEED_RESCHED | _TIF_WAKE_FLAG))
 			return true;
-	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
+	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_WAKE_FLAG));
 
 	return true;
 }
@@ -939,7 +964,7 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
 }
 
 #ifdef CONFIG_SMP
-static inline bool set_nr_if_polling(struct task_struct *p)
+static inline bool notify_ipi_if_polling(struct task_struct *p)
 {
 	return false;
 }
@@ -3710,7 +3735,7 @@ void sched_ttwu_pending(void *arg)
  */
 bool call_function_single_prep_ipi(int cpu)
 {
-	if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
+	if (notify_ipi_if_polling(cpu_rq(cpu)->idle)) {
 		trace_sched_wake_idle_without_ipi(cpu);
 		return false;
 	}
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 7de94df5d477..6748735156a7 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -329,13 +329,13 @@ static void do_idle(void)
 	}
 
 	/*
-	 * Since we fell out of the loop above, we know TIF_NEED_RESCHED must
-	 * be set, propagate it into PREEMPT_NEED_RESCHED.
+	 * Since we fell out of the loop above, TIF_NEED_RESCHED may be set.
+	 * Propagate it into PREEMPT_NEED_RESCHED.
 	 *
 	 * This is required because for polling idle loops we will not have had
 	 * an IPI to fold the state for us.
 	 */
-	preempt_set_need_resched();
+	preempt_fold_need_resched();
 	tick_nohz_idle_exit();
 	__current_clr_polling();
 
@@ -352,7 +352,15 @@ static void do_idle(void)
 	 */
 	current_clr_notify_ipi();
 	flush_smp_call_function_queue();
-	schedule_idle();
+
+	/*
+	 * When NEED_RESCHED is set, the idle thread promises to call
+	 * schedule_idle(). schedule_idle() can be skipped when an idle CPU
+	 * was woken up to process an IPI that does not queue a task on the
+	 * idle CPU, facilitating faster idle re-entry.
+	 */
+	if (need_resched())
+		schedule_idle();
 
 	if (unlikely(klp_patch_pending(current)))
 		klp_update_patch_state(current);
--
2.34.1


2024-06-13 18:19:32

by K Prateek Nayak

Subject: [PATCH v2 04/14] x86/thread_info: Introduce TIF_NOTIFY_IPI flag

From: "Gautham R. Shenoy" <[email protected]>

Add support for TIF_NOTIFY_IPI on x86. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

IPI throughput measured using a modified version of Anton Blanchard's
ipistorm benchmark [1], configured to measure the time taken to perform
a fixed number of smp_call_function_single() calls (with wait set to 1),
improves significantly with TIF_NOTIFY_IPI on a dual socket 3rd
Generation EPYC system (2 x 64C/128T) (boost on, C2 disabled), with the
benchmark time reducing to less than half for 100000 IPIs between two
CPUs.

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                time [pct imp]
tip:sched/core                         1.00 [baseline]
tip:sched/core + TIF_NOTIFY_IPI        0.46 [54.88]

tip:sched/core was at commit c793a62823d1 ("sched/core: Drop spinlocks
on contention iff kernel is preemptible") at the time of testing.

[ prateek: Split the changes into a separate patch, commit log ]

Link: https://github.com/antonblanchard/ipistorm [1]
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Xin Li <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: Julia Lawall <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o Updated benchmark numbers.
---
arch/x86/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 12da7dfd5ef1..640ee41b8c82 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -89,6 +89,7 @@ struct thread_info {
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_SINGLESTEP 4 /* reenable singlestep on user return*/
#define TIF_SSBD 5 /* Speculative store bypass disable */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SPEC_IB 9 /* Indirect branch speculation mitigation */
#define TIF_SPEC_L1D_FLUSH 10 /* Flush L1D on mm switches (processes) */
#define TIF_USER_RETURN_NOTIFY 11 /* notify kernel of userspace return */
@@ -112,6 +113,7 @@ struct thread_info {
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_SSBD (1 << TIF_SSBD)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_SPEC_IB (1 << TIF_SPEC_IB)
#define _TIF_SPEC_L1D_FLUSH (1 << TIF_SPEC_L1D_FLUSH)
#define _TIF_USER_RETURN_NOTIFY (1 << TIF_USER_RETURN_NOTIFY)
--
2.34.1


2024-06-13 18:20:07

by K Prateek Nayak

Subject: [PATCH v2 06/14] alpha/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on Alpha. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/alpha/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/alpha/include/asm/thread_info.h b/arch/alpha/include/asm/thread_info.h
index 4a4d00b37986..8c17855c85c7 100644
--- a/arch/alpha/include/asm/thread_info.h
+++ b/arch/alpha/include/asm/thread_info.h
@@ -64,6 +64,7 @@ register unsigned long *current_stack_pointer __asm__ ("$30");
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_SYSCALL_AUDIT 4 /* syscall audit active */
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_DIE_IF_KERNEL 9 /* dik recursion lock */
#define TIF_MEMDIE 13 /* is terminating due to OOM killer */
#define TIF_POLLING_NRFLAG 14 /* idle is polling for TIF_NEED_RESCHED */
@@ -74,6 +75,7 @@ register unsigned long *current_stack_pointer __asm__ ("$30");
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_NOTIFY_SIGNAL (1<<TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)

/* Work to do on interrupt/exception return. */
--
2.34.1


2024-06-13 18:20:37

by K Prateek Nayak

Subject: [PATCH v2 07/14] openrisc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on OpenRISC. With TIF_NOTIFY_IPI, a
sender sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/openrisc/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/openrisc/include/asm/thread_info.h b/arch/openrisc/include/asm/thread_info.h
index 4af3049c34c2..6a386703bc43 100644
--- a/arch/openrisc/include/asm/thread_info.h
+++ b/arch/openrisc/include/asm/thread_info.h
@@ -92,6 +92,7 @@ register struct thread_info *current_thread_info_reg asm("r10");
* mode
*/
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SYSCALL_TRACEPOINT 8 /* for ftrace syscall instrumentation */
#define TIF_RESTORE_SIGMASK 9
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling * TIF_NEED_RESCHED
@@ -104,6 +105,7 @@ register struct thread_info *current_thread_info_reg asm("r10");
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
#define _TIF_NOTIFY_SIGNAL (1<<TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)


--
2.34.1


2024-06-13 18:20:51

by K Prateek Nayak

Subject: [PATCH v2 05/14] arm/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on ARM. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Russell King <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://github.com/antonblanchard/ipistorm [1]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o Dropped the ARM benchmark numbers since I could not get my hands on an
ARM64 server internally. I tested whether the lack of TIF_NOTIFY_IPI
triggers any of the modified paths on x86 by applying the series only
up to Patch 3, and I did not see anything unexpected. I'll update the
thread with the results from the ARM server [if and] when I get my
hands on it again.
---
arch/arm/include/asm/thread_info.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 943ffcf069d2..324248d87c9e 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -136,6 +136,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
* thread information flags:
* TIF_USEDFPU - FPU was used by this task this quantum (SMP)
* TIF_POLLING_NRFLAG - true if poll_idle() is polling TIF_NEED_RESCHED
+ * or TIF_NOTIFY_IPI
*
* Any bit in the range of 0..15 will cause do_work_pending() to be invoked.
*/
@@ -144,6 +145,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
#define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
#define TIF_UPROBE 3 /* breakpointed or singlestepping */
#define TIF_NOTIFY_SIGNAL 4 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 5 /* pending IPI on TIF_POLLING idle CPU */

#define TIF_USING_IWMMXT 17
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
@@ -164,6 +166,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
#define _TIF_USING_IWMMXT (1 << TIF_USING_IWMMXT)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)

/* Checks for any syscall work in entry-common.S */
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
--
2.34.1


2024-06-13 18:21:33

by K Prateek Nayak

Subject: [PATCH v2 08/14] powerpc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on PowerPC. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Benjamin Gray <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/powerpc/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 15c5691dd218..9545e164463b 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -103,6 +103,7 @@ void arch_setup_new_exec(void);
#define TIF_PATCH_PENDING 6 /* pending live patching update */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
#define TIF_SINGLESTEP 8 /* singlestepping active */
+#define TIF_NOTIFY_IPI 9 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SECCOMP 10 /* secure computing */
#define TIF_RESTOREALL 11 /* Restore all regs (implies NOERROR) */
#define TIF_NOERROR 12 /* Force successful syscall return */
@@ -129,6 +130,7 @@ void arch_setup_new_exec(void);
#define _TIF_PATCH_PENDING (1<<TIF_PATCH_PENDING)
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_SECCOMP (1<<TIF_SECCOMP)
#define _TIF_RESTOREALL (1<<TIF_RESTOREALL)
#define _TIF_NOERROR (1<<TIF_NOERROR)
--
2.34.1


2024-06-13 18:21:51

by K Prateek Nayak

Subject: [PATCH v2 09/14] sh/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on SuperH. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/sh/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/sh/include/asm/thread_info.h b/arch/sh/include/asm/thread_info.h
index 9f19a682d315..8cd9d2a5361b 100644
--- a/arch/sh/include/asm/thread_info.h
+++ b/arch/sh/include/asm/thread_info.h
@@ -106,6 +106,7 @@ extern void init_thread_xstate(void);
#define TIF_SECCOMP 6 /* secure computing */
#define TIF_NOTIFY_RESUME 7 /* callback before returning to user */
#define TIF_SYSCALL_TRACEPOINT 8 /* for ftrace syscall instrumentation */
+#define TIF_NOTIFY_IPI 9 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */

@@ -118,6 +119,7 @@ extern void init_thread_xstate(void);
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)

/* work to do in syscall trace */
--
2.34.1


2024-06-13 18:22:36

by K Prateek Nayak

Subject: [PATCH v2 11/14] csky/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on C-SKY. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Guo Ren <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Acked-by: Guo Ren <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o Collected the ack. Thank you :)
---
arch/csky/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/csky/include/asm/thread_info.h b/arch/csky/include/asm/thread_info.h
index b5ed788f0c68..9bc7a037c476 100644
--- a/arch/csky/include/asm/thread_info.h
+++ b/arch/csky/include/asm/thread_info.h
@@ -61,6 +61,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SYSCALL_TRACEPOINT 5 /* syscall tracepoint instrumentation */
#define TIF_SYSCALL_AUDIT 6 /* syscall auditing */
#define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 8 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_POLLING_NRFLAG 16 /* poll_idle() is TIF_NEED_RESCHED */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 20 /* restore signal mask in do_signal() */
@@ -73,6 +74,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_UPROBE (1 << TIF_UPROBE)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_MEMDIE (1 << TIF_MEMDIE)
--
2.34.1


2024-06-13 18:23:01

by K Prateek Nayak

Subject: [PATCH v2 12/14] parisc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on PA-RISC. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/parisc/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/parisc/include/asm/thread_info.h b/arch/parisc/include/asm/thread_info.h
index 1a58795f785c..35f1deeb8f36 100644
--- a/arch/parisc/include/asm/thread_info.h
+++ b/arch/parisc/include/asm/thread_info.h
@@ -52,6 +52,7 @@ struct thread_info {
#define TIF_SECCOMP 11 /* secure computing */
#define TIF_SYSCALL_TRACEPOINT 12 /* syscall tracepoint instrumentation */
#define TIF_NONBLOCK_WARNING 13 /* warned about wrong O_NONBLOCK usage */
+#define TIF_NOTIFY_IPI 14 /* Pending IPI on TIF_POLLING idle CPU */

#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
@@ -65,6 +66,7 @@ struct thread_info {
#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)

#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
_TIF_NEED_RESCHED | _TIF_NOTIFY_SIGNAL)
--
2.34.1


2024-06-13 18:23:15

by K Prateek Nayak

Subject: [PATCH v2 13/14] nios2/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on Nios II. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Dinh Nguyen <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/nios2/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/nios2/include/asm/thread_info.h b/arch/nios2/include/asm/thread_info.h
index 5abac9893b32..24882fd5ad11 100644
--- a/arch/nios2/include/asm/thread_info.h
+++ b/arch/nios2/include/asm/thread_info.h
@@ -79,6 +79,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SYSCALL_AUDIT 6 /* syscall auditing active */
#define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
#define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal() */
+#define TIF_NOTIFY_IPI 10 /* Pending IPI on TIF_POLLING idle CPU */

#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling
TIF_NEED_RESCHED */
@@ -91,6 +92,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)

/* work to do on interrupt/exception return */
--
2.34.1


2024-06-13 18:23:33

by K Prateek Nayak

Subject: [PATCH v2 14/14] microblaze/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on MicroBlaze. With TIF_NOTIFY_IPI, a
sender sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Michal Simek <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/microblaze/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/microblaze/include/asm/thread_info.h b/arch/microblaze/include/asm/thread_info.h
index a0ddd2a36fb9..953a334bb4fe 100644
--- a/arch/microblaze/include/asm/thread_info.h
+++ b/arch/microblaze/include/asm/thread_info.h
@@ -103,6 +103,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SINGLESTEP 4
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
#define TIF_MEMDIE 6 /* is terminating due to OOM killer */
+#define TIF_NOTIFY_IPI 7 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SYSCALL_AUDIT 9 /* syscall auditing active */
#define TIF_SECCOMP 10 /* secure computing */

@@ -115,6 +116,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
--
2.34.1


2024-06-13 18:24:25

by K Prateek Nayak

Subject: [PATCH v2 10/14] sparc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on SPARC. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: "David S. Miller" <[email protected]>
Cc: Andreas Larsson <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
v1..v2:
o No changes.
---
arch/sparc/include/asm/thread_info_32.h | 2 ++
arch/sparc/include/asm/thread_info_64.h | 2 ++
2 files changed, 4 insertions(+)

diff --git a/arch/sparc/include/asm/thread_info_32.h b/arch/sparc/include/asm/thread_info_32.h
index 45b4955b253f..f538ede526d1 100644
--- a/arch/sparc/include/asm/thread_info_32.h
+++ b/arch/sparc/include/asm/thread_info_32.h
@@ -105,6 +105,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_RESTORE_SIGMASK 4 /* restore signal mask in do_signal() */
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_USEDFPU 8 /* FPU was used by this task
* this quantum (SMP) */
#define TIF_POLLING_NRFLAG 9 /* true if poll_idle() is polling
@@ -117,6 +118,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
#define _TIF_SIGPENDING (1<<TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_NOTIFY_SIGNAL (1<<TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_USEDFPU (1<<TIF_USEDFPU)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)

diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
index 1a44372e2bc0..3558101ccdd1 100644
--- a/arch/sparc/include/asm/thread_info_64.h
+++ b/arch/sparc/include/asm/thread_info_64.h
@@ -194,6 +194,7 @@ extern struct thread_info *current_thread_info(void);
#define TIF_MCDPER 12 /* Precise MCD exception */
#define TIF_MEMDIE 13 /* is terminating due to OOM killer */
#define TIF_POLLING_NRFLAG 14
+#define TIF_NOTIFY_IPI 15 /* Pending IPI on TIF_POLLING idle CPU */

#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
@@ -208,6 +209,7 @@ extern struct thread_info *current_thread_info(void);
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)

#define _TIF_USER_WORK_MASK ((0xff << TI_FLAG_WSAVED_SHIFT) | \
_TIF_DO_NOTIFY_RESUME_MASK | \
--
2.34.1


2024-06-14 09:28:26

by Peter Zijlstra

Subject: Re: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

On Thu, Jun 13, 2024 at 06:15:59PM +0000, K Prateek Nayak wrote:
> Effects of call_function_single_prep_ipi()
> ==========================================
>
> To pull a TIF_POLLING thread out of idle to process an IPI, the sender
> sets the TIF_NEED_RESCHED bit in the idle task's thread info in
> call_function_single_prep_ipi() and avoids sending an actual IPI to the
> target. As a result, the scheduler expects a task to be enqueued when
> exiting the idle path. This is not the case with non-polling idle states
> where the idle CPU exits the non-polling idle state to process the
> interrupt, and since need_resched() returns false, soon goes back to
> idle again.
>
> When TIF_NEED_RESCHED flag is set, do_idle() will call schedule_idle(),
> a large part of which runs with local IRQ disabled. In case of ipistorm,
> when measuring IPI throughput, this large IRQ disabled section delays
> processing of IPIs. Further auditing revealed that in absence of any
> runnable tasks, pick_next_task_fair(), which is called from the
> pick_next_task() fast path, will always call newidle_balance() in this
> scenario, further increasing the time spent in the IRQ disabled section.
>
> Following is the crude visualization of the problem with relevant
> functions expanded:
> --
> CPU0                                                 CPU1
> ====                                                 ====
>                                                      do_idle() {
>                                                          __current_set_polling();
>                                                          ...
>                                                          monitor(addr);
>                                                          if (!need_resched())
>                                                              mwait() {
>                                                              /* Waiting */
> smp_call_function_single(CPU1, func, wait = 1) {         ...
>     ...                                                  ...
>     set_nr_if_polling(CPU1) {                            ...
>         /* Realizes CPU1 is polling */                   ...
>         try_cmpxchg(addr,                                ...
>                     &val,                                ...
>                     val | _TIF_NEED_RESCHED);            ...
>     } /* Does not send an IPI */                         ...
>     ...                                              } /* mwait exit due to write at addr */
>     csd_lock_wait() {                                }
>     /* Waiting */                                    preempt_set_need_resched();
>     ...                                              __current_clr_polling();
>     ...                                              flush_smp_call_function_queue() {
>     ...                                                  func();
>     } /* End of wait */                              }
> }                                                    schedule_idle() {
>                                                          ...
>                                                          local_irq_disable();
> smp_call_function_single(CPU1, func, wait = 1) {         ...
>     ...                                                  ...
>     arch_send_call_function_single_ipi(CPU1);            ...
>         \                                                ...
>          \                                           newidle_balance() {
>           \                                              ...
>            \   /* Delay */                               ...
>             \                                        }
>              \                                           ...
>               \------------------------------------> local_irq_enable();
>                                                      /* Processes the IPI */
> --
>
>
> Skipping newidle_balance()
> ==========================
>
> In an earlier attempt to solve the challenge of the long IRQ disabled
> section, newidle_balance() was skipped when a CPU waking up from idle
> was found to have no runnable tasks, and was transitioning back to
> idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
> may be viable for CPUs that are idling with tick enabled, where the
> newidle_balance() has the opportunity to pull tasks onto the idle CPU.

I don't think we should be relying on this in any way shape or form.
NOHZ can kill that tick at any time.

Also, semantically, calling newidle from the idle thread is just daft.
You're really not newly idle in that case.

> Vincent [5] pointed out a case where the idle load kick will fail to
> run on an idle CPU since the IPI handler launching the ILB will check
> for need_resched(). In such cases, the idle CPU relies on
> newidle_balance() to pull tasks towards itself.

Is this the need_resched() in _nohz_idle_balance()? Should we change
this to 'need_resched() && (rq->nr_running || rq->ttwu_pending)' or
something along those lines?

I mean, it's fairly trivial to figure out if there really is going to be
work there.
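
For illustration, the check being suggested could look something like
the below in _nohz_idle_balance() (a sketch only; the 'this_rq' naming
and the exact bail-out site are assumptions):

/*
 * Bail out of the nohz idle balance only if this CPU really has work
 * pending, not merely a (possibly spurious) TIF_NEED_RESCHED set by a
 * polling-idle kick.
 */
if (need_resched() && (this_rq->nr_running || this_rq->ttwu_pending))
        break;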

> Using an alternate flag instead of NEED_RESCHED to indicate a pending
> IPI was suggested as the correct approach to solve this problem on the
> same thread.

So adding per-arch changes for this seems like something we shouldn't do
unless there really are no other sane options.

That is, I really think we should start with something like the below
and then fix any fallout from that.

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0935f9d4bb7b..cfa45338ae97 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5799,7 +5800,7 @@ static inline struct task_struct *
__pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
{
const struct sched_class *class;
- struct task_struct *p;
+ struct task_struct *p = NULL;

/*
* Optimization: we know that if all tasks are in the fair class we can
@@ -5810,9 +5811,11 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
if (likely(!sched_class_above(prev->sched_class, &fair_sched_class) &&
rq->nr_running == rq->cfs.h_nr_running)) {

- p = pick_next_task_fair(rq, prev, rf);
- if (unlikely(p == RETRY_TASK))
- goto restart;
+ if (rq->nr_running) {
+ p = pick_next_task_fair(rq, prev, rf);
+ if (unlikely(p == RETRY_TASK))
+ goto restart;
+ }

/* Assume the next prioritized class is idle_sched_class */
if (!p) {

2024-06-14 10:49:28

by Vincent Guittot

Subject: Re: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

On Fri, 14 Jun 2024 at 11:28, Peter Zijlstra <[email protected]> wrote:
>
> On Thu, Jun 13, 2024 at 06:15:59PM +0000, K Prateek Nayak wrote:
> > Effects of call_function_single_prep_ipi()
> > ==========================================
> >
> > To pull a TIF_POLLING thread out of idle to process an IPI, the sender
> > sets the TIF_NEED_RESCHED bit in the idle task's thread info in
> > call_function_single_prep_ipi() and avoids sending an actual IPI to the
> > target. As a result, the scheduler expects a task to be enqueued when
> > exiting the idle path. This is not the case with non-polling idle states
> > where the idle CPU exits the non-polling idle state to process the
> > interrupt, and since need_resched() returns false, soon goes back to
> > idle again.
> >
> > When TIF_NEED_RESCHED flag is set, do_idle() will call schedule_idle(),
> > a large part of which runs with local IRQ disabled. In case of ipistorm,
> > when measuring IPI throughput, this large IRQ disabled section delays
> > processing of IPIs. Further auditing revealed that in absence of any
> > runnable tasks, pick_next_task_fair(), which is called from the
> > pick_next_task() fast path, will always call newidle_balance() in this
> > scenario, further increasing the time spent in the IRQ disabled section.
> >
> > Following is the crude visualization of the problem with relevant
> > functions expanded:
> > --
> > CPU0                                                 CPU1
> > ====                                                 ====
> >                                                      do_idle() {
> >                                                          __current_set_polling();
> >                                                          ...
> >                                                          monitor(addr);
> >                                                          if (!need_resched())
> >                                                              mwait() {
> >                                                              /* Waiting */
> > smp_call_function_single(CPU1, func, wait = 1) {         ...
> >     ...                                                  ...
> >     set_nr_if_polling(CPU1) {                            ...
> >         /* Realizes CPU1 is polling */                   ...
> >         try_cmpxchg(addr,                                ...
> >                     &val,                                ...
> >                     val | _TIF_NEED_RESCHED);            ...
> >     } /* Does not send an IPI */                         ...
> >     ...                                              } /* mwait exit due to write at addr */
> >     csd_lock_wait() {                                }
> >     /* Waiting */                                    preempt_set_need_resched();
> >     ...                                              __current_clr_polling();
> >     ...                                              flush_smp_call_function_queue() {
> >     ...                                                  func();
> >     } /* End of wait */                              }
> > }                                                    schedule_idle() {
> >                                                          ...
> >                                                          local_irq_disable();
> > smp_call_function_single(CPU1, func, wait = 1) {         ...
> >     ...                                                  ...
> >     arch_send_call_function_single_ipi(CPU1);            ...
> >         \                                                ...
> >          \                                           newidle_balance() {
> >           \                                              ...
> >            \   /* Delay */                               ...
> >             \                                        }
> >              \                                           ...
> >               \------------------------------------> local_irq_enable();
> >                                                      /* Processes the IPI */
> > --
> >
> >
> > Skipping newidle_balance()
> > ==========================
> >
> > In an earlier attempt to solve the challenge of the long IRQ disabled
> > section, newidle_balance() was skipped when a CPU waking up from idle
> > was found to have no runnable tasks, and was transitioning back to
> > idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
> > may be viable for CPUs that are idling with tick enabled, where the
> > newidle_balance() has the opportunity to pull tasks onto the idle CPU.
>
> I don't think we should be relying on this in any way shape or form.
> NOHZ can kill that tick at any time.
>
> Also, semantically, calling newidle from the idle thread is just daft.
> You're really not newly idle in that case.
>
> > Vincent [5] pointed out a case where the idle load kick will fail to
> > run on an idle CPU since the IPI handler launching the ILB will check
> > for need_resched(). In such cases, the idle CPU relies on
> > newidle_balance() to pull tasks towards itself.
>
> Is this the need_resched() in _nohz_idle_balance()? Should we change
> this to 'need_resched() && (rq->nr_running || rq->ttwu_pending)' or
> something along those lines?

It's not only this; the need_resched() check in do_idle() also exits
the loop to look for tasks to schedule.

>
> I mean, it's fairly trivial to figure out if there really is going to be
> work there.
>
> > Using an alternate flag instead of NEED_RESCHED to indicate a pending
> > IPI was suggested as the correct approach to solve this problem on the
> > same thread.
>
> So adding per-arch changes for this seems like something we shouldn't do
> unless there really are no other sane options.
>
> That is, I really think we should start with something like the below
> and then fix any fallout from that.

The main problem is that need_resched() becomes somewhat meaningless,
because it no longer only means "I need to resched a task", and we have
to add more tests around it, even for architectures not using polling.

>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 0935f9d4bb7b..cfa45338ae97 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -5799,7 +5800,7 @@ static inline struct task_struct *
> __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> {
> const struct sched_class *class;
> - struct task_struct *p;
> + struct task_struct *p = NULL;
>
> /*
> * Optimization: we know that if all tasks are in the fair class we can
> @@ -5810,9 +5811,11 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> if (likely(!sched_class_above(prev->sched_class, &fair_sched_class) &&
> rq->nr_running == rq->cfs.h_nr_running)) {
>
> - p = pick_next_task_fair(rq, prev, rf);
> - if (unlikely(p == RETRY_TASK))
> - goto restart;
> + if (rq->nr_running) {

How do you tell the difference between a spurious need_resched() due to
polling and a CPU becoming idle? Isn't rq->nr_running zero in both
cases?
In the latter case, we need to call sched_balance_newidle(), but not in the former.

> + p = pick_next_task_fair(rq, prev, rf);
> + if (unlikely(p == RETRY_TASK))
> + goto restart;
> + }
>
> /* Assume the next prioritized class is idle_sched_class */
> if (!p) {

2024-06-14 16:35:43

by Chen Yu

Subject: Re: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

On 2024-06-14 at 12:48:37 +0200, Vincent Guittot wrote:
> On Fri, 14 Jun 2024 at 11:28, Peter Zijlstra <[email protected]> wrote:
> >
> > On Thu, Jun 13, 2024 at 06:15:59PM +0000, K Prateek Nayak wrote:
> > > Effects of call_function_single_prep_ipi()
> > > ==========================================
> > >
> > > To pull a TIF_POLLING thread out of idle to process an IPI, the sender
> > > sets the TIF_NEED_RESCHED bit in the idle task's thread info in
> > > call_function_single_prep_ipi() and avoids sending an actual IPI to the
> > > target. As a result, the scheduler expects a task to be enqueued when
> > > exiting the idle path. This is not the case with non-polling idle states
> > > where the idle CPU exits the non-polling idle state to process the
> > > interrupt, and since need_resched() returns false, soon goes back to
> > > idle again.
> > >
> > > When TIF_NEED_RESCHED flag is set, do_idle() will call schedule_idle(),
> > > a large part of which runs with local IRQ disabled. In case of ipistorm,
> > > when measuring IPI throughput, this large IRQ disabled section delays
> > > processing of IPIs. Further auditing revealed that in absence of any
> > > runnable tasks, pick_next_task_fair(), which is called from the
> > > pick_next_task() fast path, will always call newidle_balance() in this
> > > scenario, further increasing the time spent in the IRQ disabled section.
> > >
> > > Following is the crude visualization of the problem with relevant
> > > functions expanded:
> > > --
> > > CPU0                                                 CPU1
> > > ====                                                 ====
> > >                                                      do_idle() {
> > >                                                          __current_set_polling();
> > >                                                          ...
> > >                                                          monitor(addr);
> > >                                                          if (!need_resched())
> > >                                                              mwait() {
> > >                                                              /* Waiting */
> > > smp_call_function_single(CPU1, func, wait = 1) {         ...
> > >     ...                                                  ...
> > >     set_nr_if_polling(CPU1) {                            ...
> > >         /* Realizes CPU1 is polling */                   ...
> > >         try_cmpxchg(addr,                                ...
> > >                     &val,                                ...
> > >                     val | _TIF_NEED_RESCHED);            ...
> > >     } /* Does not send an IPI */                         ...
> > >     ...                                              } /* mwait exit due to write at addr */
> > >     csd_lock_wait() {                                }
> > >     /* Waiting */                                    preempt_set_need_resched();
> > >     ...                                              __current_clr_polling();
> > >     ...                                              flush_smp_call_function_queue() {
> > >     ...                                                  func();
> > >     } /* End of wait */                              }
> > > }                                                    schedule_idle() {
> > >                                                          ...
> > >                                                          local_irq_disable();
> > > smp_call_function_single(CPU1, func, wait = 1) {         ...
> > >     ...                                                  ...
> > >     arch_send_call_function_single_ipi(CPU1);            ...
> > >         \                                                ...
> > >          \                                           newidle_balance() {
> > >           \                                              ...
> > >            \   /* Delay */                               ...
> > >             \                                        }
> > >              \                                           ...
> > >               \------------------------------------> local_irq_enable();
> > >                                                      /* Processes the IPI */
> > > --
> > >
> > >
> > > Skipping newidle_balance()
> > > ==========================
> > >
> > > In an earlier attempt to solve the challenge of the long IRQ disabled
> > > section, newidle_balance() was skipped when a CPU waking up from idle
> > > was found to have no runnable tasks, and was transitioning back to
> > > idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
> > > may be viable for CPUs that are idling with tick enabled, where the
> > > newidle_balance() has the opportunity to pull tasks onto the idle CPU.
> >
> > I don't think we should be relying on this in any way shape or form.
> > NOHZ can kill that tick at any time.
> >
> > Also, semantically, calling newidle from the idle thread is just daft.
> > You're really not newly idle in that case.
> >
> > > Vincent [5] pointed out a case where the idle load kick will fail to
> > > run on an idle CPU since the IPI handler launching the ILB will check
> > > for need_resched(). In such cases, the idle CPU relies on
> > > newidle_balance() to pull tasks towards itself.
> >
> > Is this the need_resched() in _nohz_idle_balance()? Should we change
> > this to 'need_resched() && (rq->nr_running || rq->ttwu_pending)' or
> > something along those lines?
>
> It's not only this; the need_resched() check in do_idle() also exits
> the loop to look for tasks to schedule.
>
> >
> > I mean, it's fairly trivial to figure out if there really is going to be
> > work there.
> >
> > > Using an alternate flag instead of NEED_RESCHED to indicate a pending
> > > IPI was suggested as the correct approach to solve this problem on the
> > > same thread.
> >
> > So adding per-arch changes for this seems like something we shouldn't do
> > unless there really are no other sane options.
> >
> > That is, I really think we should start with something like the below
> > and then fix any fallout from that.
>
> The main problem is that need_resched() becomes somewhat meaningless,
> because it no longer only means "I need to resched a task", and we have
> to add more tests around it, even for architectures not using polling.
>
> >
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 0935f9d4bb7b..cfa45338ae97 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -5799,7 +5800,7 @@ static inline struct task_struct *
> > __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> > {
> > const struct sched_class *class;
> > - struct task_struct *p;
> > + struct task_struct *p = NULL;
> >
> > /*
> > * Optimization: we know that if all tasks are in the fair class we can
> > @@ -5810,9 +5811,11 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> > if (likely(!sched_class_above(prev->sched_class, &fair_sched_class) &&
> > rq->nr_running == rq->cfs.h_nr_running)) {
> >
> > - p = pick_next_task_fair(rq, prev, rf);
> > - if (unlikely(p == RETRY_TASK))
> > - goto restart;
> > + if (rq->nr_running) {
>
> How do you tell the difference between a spurious need_resched() due to
> polling and a CPU becoming idle? Isn't rq->nr_running zero in both
> cases?
> In the latter case, we need to call sched_balance_newidle(), but not in the former.
>

Not sure if I understand correctly: if the goal of
smp_call_function_single() is to kick the idle CPU without forcing it to
run schedule()->sched_balance_newidle(), can we set _TIF_POLLING_NRFLAG
rather than _TIF_NEED_RESCHED in set_nr_if_polling()? I think writing
any value to the monitored address would wake up the idle CPU, and
_TIF_POLLING_NRFLAG will be cleared once that idle CPU exits the idle
loop, so we don't need to introduce an arch-wide flag.
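
A minimal sketch of that idea, modelled on set_nr_if_polling() in
kernel/sched/core.c (illustrative only: the helper name is made up, and
it bakes in the assumption above that a same-value write to the
monitored word is enough to wake the CPU):

static bool kick_idle_cpu_if_polling(struct task_struct *p)
{
        struct thread_info *ti = task_thread_info(p);
        typeof(ti->flags) val = READ_ONCE(ti->flags);

        do {
                /* Not polling: the caller must send a real IPI */
                if (!(val & _TIF_POLLING_NRFLAG))
                        return false;
        } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_POLLING_NRFLAG));

        /*
         * _TIF_POLLING_NRFLAG was already set, so this is a same-value
         * store to the monitored word: per the assumption above it
         * wakes the polling CPU without making need_resched() true,
         * and the flag is cleared anyway when the CPU leaves the idle
         * loop.
         */
        return true;
}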

thanks,
Chenyu

> > + p = pick_next_task_fair(rq, prev, rf);
> > + if (unlikely(p == RETRY_TASK))
> > + goto restart;
> > + }
> >
> > /* Assume the next prioritized class is idle_sched_class */
> > if (!p) {

2024-06-15 01:33:19

by Peter Zijlstra

Subject: Re: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

On Fri, Jun 14, 2024 at 12:48:37PM +0200, Vincent Guittot wrote:
> On Fri, 14 Jun 2024 at 11:28, Peter Zijlstra <[email protected]> wrote:

> > > Vincent [5] pointed out a case where the idle load kick will fail to
> > > run on an idle CPU since the IPI handler launching the ILB will check
> > > for need_resched(). In such cases, the idle CPU relies on
> > > newidle_balance() to pull tasks towards itself.
> >
> > Is this the need_resched() in _nohz_idle_balance()? Should we change
> > this to 'need_resched() && (rq->nr_running || rq->ttwu_pending)' or
> > something along those lines?
>
> It's not only this; the need_resched() check in do_idle() also exits
> the loop to look for tasks to schedule.

Is that really a problem? Reading the initial email, the problem seems
to be newidle balance, not hitting schedule. schedule() should be fairly
quick if there's nothing to do, no?

> > I mean, it's fairly trivial to figure out if there really is going to be
> > work there.
> >
> > > Using an alternate flag instead of NEED_RESCHED to indicate a pending
> > > IPI was suggested as the correct approach to solve this problem on the
> > > same thread.
> >
> > So adding per-arch changes for this seems like something we shouldn't do
> > unless there really are no other sane options.
> >
> > That is, I really think we should start with something like the below
> > and then fix any fallout from that.
>
> The main problem is that need_resched() becomes somewhat meaningless,
> because it no longer only means "I need to resched a task", and we have
> to add more tests around it, even for architectures not using polling.

True; however, we already had some of that with the wakeup list, which
made nr_running less 'reliable'.

The thing is, most architectures seem to have the TIF_POLLING_NRFLAG
bit, and even if their main idle routine isn't actually using it, much
of the idle loop until it hits the arch idle code will have the flag set
and will thus tickle these cases *sometimes*.

> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 0935f9d4bb7b..cfa45338ae97 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -5799,7 +5800,7 @@ static inline struct task_struct *
> > __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> > {
> > const struct sched_class *class;
> > - struct task_struct *p;
> > + struct task_struct *p = NULL;
> >
> > /*
> > * Optimization: we know that if all tasks are in the fair class we can
> > @@ -5810,9 +5811,11 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
> > if (likely(!sched_class_above(prev->sched_class, &fair_sched_class) &&
> > rq->nr_running == rq->cfs.h_nr_running)) {
> >
> > - p = pick_next_task_fair(rq, prev, rf);
> > - if (unlikely(p == RETRY_TASK))
> > - goto restart;
> > + if (rq->nr_running) {
>
> How do you tell the difference between a spurious need_resched() due to
> polling and a CPU becoming idle? Isn't rq->nr_running zero in both
> cases?

Bah, true. It should also check whether current is the idle task, which
then makes a mess of things again. Still, we shouldn't be calling
newidle from idle; that's daft.

I should probably not write code at 3am, but the below horror is what
I came up with.

---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 0935f9d4bb7b..cfe8d3350819 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -6343,19 +6344,12 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
* Constants for the sched_mode argument of __schedule().
*
* The mode argument allows RT enabled kernels to differentiate a
- * preemption from blocking on an 'sleeping' spin/rwlock. Note that
- * SM_MASK_PREEMPT for !RT has all bits set, which allows the compiler to
- * optimize the AND operation out and just check for zero.
+ * preemption from blocking on an 'sleeping' spin/rwlock.
*/
-#define SM_NONE 0x0
-#define SM_PREEMPT 0x1
-#define SM_RTLOCK_WAIT 0x2
-
-#ifndef CONFIG_PREEMPT_RT
-# define SM_MASK_PREEMPT (~0U)
-#else
-# define SM_MASK_PREEMPT SM_PREEMPT
-#endif
+#define SM_IDLE (-1)
+#define SM_NONE 0
+#define SM_PREEMPT 1
+#define SM_RTLOCK_WAIT 2

/*
* __schedule() is the main scheduler function.
@@ -6396,11 +6390,12 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
*
* WARNING: must be called with preemption disabled!
*/
-static void __sched notrace __schedule(unsigned int sched_mode)
+static void __sched notrace __schedule(int sched_mode)
{
struct task_struct *prev, *next;
unsigned long *switch_count;
unsigned long prev_state;
+ bool preempt = sched_mode > 0;
struct rq_flags rf;
struct rq *rq;
int cpu;
@@ -6409,13 +6404,13 @@ static void __sched notrace __schedule(unsigned int sched_mode)
rq = cpu_rq(cpu);
prev = rq->curr;

- schedule_debug(prev, !!sched_mode);
+ schedule_debug(prev, preempt);

if (sched_feat(HRTICK) || sched_feat(HRTICK_DL))
hrtick_clear(rq);

local_irq_disable();
- rcu_note_context_switch(!!sched_mode);
+ rcu_note_context_switch(preempt);

/*
* Make sure that signal_pending_state()->signal_pending() below
@@ -6449,7 +6444,12 @@ static void __sched notrace __schedule(unsigned int sched_mode)
* that we form a control dependency vs deactivate_task() below.
*/
prev_state = READ_ONCE(prev->__state);
- if (!(sched_mode & SM_MASK_PREEMPT) && prev_state) {
+ if (sched_mode == SM_IDLE) {
+ if (!rq->nr_running) {
+ next = prev;
+ goto picked;
+ }
+ } else if (!preempt && prev_state) {
if (signal_pending_state(prev_state, prev)) {
WRITE_ONCE(prev->__state, TASK_RUNNING);
} else {
@@ -6483,6 +6483,7 @@ static void __sched notrace __schedule(unsigned int sched_mode)
}

next = pick_next_task(rq, prev, &rf);
+picked:
clear_tsk_need_resched(prev);
clear_preempt_need_resched();
#ifdef CONFIG_SCHED_DEBUG
@@ -6521,9 +6522,9 @@ static void __sched notrace __schedule(unsigned int sched_mode)
++*switch_count;

migrate_disable_switch(rq, prev);
psi_sched_switch(prev, next, !task_on_rq_queued(prev));

- trace_sched_switch(sched_mode & SM_MASK_PREEMPT, prev, next, prev_state);
+ trace_sched_switch(preempt, prev, next, prev_state);

/* Also unlocks the rq: */
rq = context_switch(rq, prev, next, &rf);
@@ -6599,7 +6601,7 @@ static void sched_update_worker(struct task_struct *tsk)
}
}

-static __always_inline void __schedule_loop(unsigned int sched_mode)
+static __always_inline void __schedule_loop(int sched_mode)
{
do {
preempt_disable();
@@ -6644,7 +6646,7 @@ void __sched schedule_idle(void)
*/
WARN_ON_ONCE(current->__state);
do {
- __schedule(SM_NONE);
+ __schedule(SM_IDLE);
} while (need_resched());
}


2024-06-15 04:12:58

by Peter Zijlstra

Subject: Re: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

On Sat, Jun 15, 2024 at 03:28:14AM +0200, Peter Zijlstra wrote:
> On Fri, Jun 14, 2024 at 12:48:37PM +0200, Vincent Guittot wrote:

> > The main problem is that need_resched() becomes somewhat meaningless,
> > because it no longer only means "I need to resched a task", and we have
> > to add more tests around it, even for architectures not using polling.
>
> True; however, we already had some of that with the wakeup list, which
> made nr_running less 'reliable'.

Doesn't using !idle_cpu() instead of need_resched() in those balance
paths already do the right thing?

Checking need_resched() as an indicator of the CPU getting work is
already a bit of an assumption.
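
A sketch of that alternative, for illustration (idle_cpu() already
folds in rq->nr_running, rq->ttwu_pending and whether rq->curr is the
idle task; the bail-out site is an assumption):

/* before: bail on a possibly-spurious need_resched() */
if (need_resched())
        break;

/* after: bail only when this CPU genuinely has work */
if (!idle_cpu(this_cpu))
        break;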

Also, Ingo, idle_cpu() and friends don't really belong in syscalls.c...

2024-06-15 05:51:52

by Peter Zijlstra

Subject: Re: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

On Fri, Jun 14, 2024 at 12:48:37PM +0200, Vincent Guittot wrote:

> The main problem is that need_resched() becomes somewhat meaningless,
> because it no longer only means "I need to resched a task", and we have
> to add more tests around it, even for architectures not using polling.

The converse problem is that you're adding a bunch of atomic ops that
could otherwise be avoided.

The sender might now need to set both the RESCHED and IPI flags -- and
clear them again.


2024-06-15 14:27:42

by Russell King (Oracle)

Subject: Re: [PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag

On Thu, Jun 13, 2024 at 06:15:59PM +0000, K Prateek Nayak wrote:
> o Dropping the ARM results since I never got my hands on the ARM64
> system I used in my last testing. If I do manage to get my hands on it
> again, I'll rerun the experiments and share the results on the thread.
> To test the case where TIF_NOTIFY_IPI is not enabled for a particular
> architecture, I applied the series only until Patch 3 and tested the
> same on my x86 machine with a WARN_ON_ONCE() in do_idle() to check if
> tif_notify_ipi() ever return true and then repeated the same with
> Patch 4 applied.

Confused. ARM (32-bit) or ARM64? You patch 32-bit ARM, but you don't
touch 64-bit Arm. "ARM" on its own in the context above suggests 32-bit
to me, since you refer to ARM64 later.

--
RMK's Patch system: https://www.armlinux.org.uk/developer/patches/
FTTP is here! 80Mbps down 10Mbps up. Decent connectivity at last!

2024-06-15 17:14:03

by Guo Ren

Subject: Re: [PATCH v2 11/14] csky/thread_info: Introduce TIF_NOTIFY_IPI flag

On Fri, Jun 14, 2024 at 2:21 AM K Prateek Nayak <[email protected]> wrote:
>
> Add support for TIF_NOTIFY_IPI on C-SKY. With TIF_NOTIFY_IPI, a sender
> sending an IPI to an idle CPU in TIF_POLLING mode will set the
> TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
> CPU out of idle, as opposed to setting TIF_NEED_RESCHED as before. This
> avoids spurious calls to schedule_idle() in cases where an IPI does not
> necessarily wake up a task on the idle CPU.
>
> Cc: Guo Ren <[email protected]>
> Cc: "Rafael J. Wysocki" <[email protected]>
> Cc: Daniel Lezcano <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Juri Lelli <[email protected]>
> Cc: Vincent Guittot <[email protected]>
> Cc: Dietmar Eggemann <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Ben Segall <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Daniel Bristot de Oliveira <[email protected]>
> Cc: Valentin Schneider <[email protected]>
> Cc: K Prateek Nayak <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Acked-by: Guo Ren <[email protected]>
> Signed-off-by: K Prateek Nayak <[email protected]>
> ---
> v1..v2:
> o Collected the ack. Thank you :)
> ---
> arch/csky/include/asm/thread_info.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/csky/include/asm/thread_info.h b/arch/csky/include/asm/thread_info.h
> index b5ed788f0c68..9bc7a037c476 100644
> --- a/arch/csky/include/asm/thread_info.h
> +++ b/arch/csky/include/asm/thread_info.h
> @@ -61,6 +61,7 @@ static inline struct thread_info *current_thread_info(void)
> #define TIF_SYSCALL_TRACEPOINT 5 /* syscall tracepoint instrumentation */
> #define TIF_SYSCALL_AUDIT 6 /* syscall auditing */
> #define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
> +#define TIF_NOTIFY_IPI 8 /* Pending IPI on TIF_POLLING idle CPU */
> #define TIF_POLLING_NRFLAG 16 /* poll_idle() is TIF_NEED_RESCHED */
> #define TIF_MEMDIE 18 /* is terminating due to OOM killer */
> #define TIF_RESTORE_SIGMASK 20 /* restore signal mask in do_signal() */
> @@ -73,6 +74,7 @@ static inline struct thread_info *current_thread_info(void)
> #define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
> #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
> #define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
> +#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
Acked-by: Guo Ren <[email protected]>

> #define _TIF_UPROBE (1 << TIF_UPROBE)
> #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
> #define _TIF_MEMDIE (1 << TIF_MEMDIE)
> --
> 2.34.1
>


--
Best Regards
Guo Ren