2024-02-20 17:16:01

by K Prateek Nayak

Subject: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

Hello everyone,

Before jumping into the issue, let me clarify the Cc list. Everyone has
been cc'ed on Patch 0 through Patch 3. Respective arch maintainers,
reviewers, and committers returned by scripts/get_maintainer.pl have
been cc'ed on the respective arch side changes. Scheduler and CPU Idle
maintainers and reviewers have been included for the entire series. If I
have missed anyone, please do add them. If you would like to be dropped
from the cc list, wholly or partially, for the future iterations, please
do let me know.

With that out of the way ...

Problem statement
=================

When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure time taken to
perform a fixed number of smp_call_function_single() calls (with wait
set to 1), an increase in benchmark time was observed between v5.7 and
the current upstream release (v6.7-rc6 at the time of encounter).

Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime.


Experiments
===========

Since the commit cannot be cleanly reverted on top of the current
tip:sched/core, the effects of the optimizations were reverted by:

1. Removing the check for call_function_single_prep_ipi() in
send_call_function_single_ipi(). With this change,
send_call_function_single_ipi() always calls
arch_send_call_function_single_ipi().

2. Removing the call to flush_smp_call_function_queue() in do_idle(),
since with (1.) every smp_call_function_single() would unconditionally
send an IPI to an idle CPU in TIF_POLLING mode.

Following is the diff of the above described changes which will be
henceforth referred to as the "revert":

diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 31231925f1ec..735184d98c0f 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -332,11 +332,6 @@ static void do_idle(void)
*/
smp_mb__after_atomic();

- /*
- * RCU relies on this call to be done outside of an RCU read-side
- * critical section.
- */
- flush_smp_call_function_queue();
schedule_idle();

if (unlikely(klp_patch_pending(current)))
diff --git a/kernel/smp.c b/kernel/smp.c
index f085ebcdf9e7..2ff100c41885 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -111,11 +111,9 @@ void __init call_function_init(void)
static __always_inline void
send_call_function_single_ipi(int cpu)
{
- if (call_function_single_prep_ipi(cpu)) {
- trace_ipi_send_cpu(cpu, _RET_IP_,
- generic_smp_call_function_single_interrupt);
- arch_send_call_function_single_ipi(cpu);
- }
+ trace_ipi_send_cpu(cpu, _RET_IP_,
+ generic_smp_call_function_single_interrupt);
+ arch_send_call_function_single_ipi(cpu);
}

static __always_inline void
--

With the revert, the time taken to complete a fixed set of IPIs using
ipistorm improves significantly. Following are the numbers from a dual
socket 3rd Generation EPYC system (2 x 64C/128T) (boost on, C2 disabled)
running ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

(tip:sched/core at tag "sched-core-2024-01-08" for all the testing done
below)

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + revert 0.81 [19.36]

Although the revert improves ipistorm performance, it also regresses
tbench and netperf, supporting the validity of the optimization.
Following are netperf and tbench numbers from the same machine comparing
vanilla tip:sched/core and the revert applied on top:

==================================================================
Test : tbench
Units : Normalized throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients: tip[pct imp](CV) revert[pct imp](CV)
1 1.00 [ 0.00]( 0.24) 0.91 [ -8.96]( 0.30)
2 1.00 [ 0.00]( 0.25) 0.92 [ -8.20]( 0.97)
4 1.00 [ 0.00]( 0.23) 0.91 [ -9.20]( 1.75)
8 1.00 [ 0.00]( 0.69) 0.91 [ -9.48]( 1.56)
16 1.00 [ 0.00]( 0.66) 0.92 [ -8.49]( 2.43)
32 1.00 [ 0.00]( 0.96) 0.89 [-11.13]( 0.96)
64 1.00 [ 0.00]( 1.06) 0.90 [ -9.72]( 2.49)
128 1.00 [ 0.00]( 0.70) 0.92 [ -8.36]( 1.26)
256 1.00 [ 0.00]( 0.72) 0.97 [ -3.30]( 1.10)
512 1.00 [ 0.00]( 0.42) 0.98 [ -1.73]( 0.37)
1024 1.00 [ 0.00]( 0.28) 0.99 [ -1.39]( 0.43)

==================================================================
Test : netperf
Units : Normalized Throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients: tip[pct imp](CV) revert[pct imp](CV)
1-clients 1.00 [ 0.00]( 0.50) 0.89 [-10.51]( 0.20)
2-clients 1.00 [ 0.00]( 1.16) 0.89 [-11.10]( 0.59)
4-clients 1.00 [ 0.00]( 1.03) 0.89 [-10.68]( 0.38)
8-clients 1.00 [ 0.00]( 0.99) 0.89 [-10.54]( 0.50)
16-clients 1.00 [ 0.00]( 0.87) 0.89 [-10.92]( 0.95)
32-clients 1.00 [ 0.00]( 1.24) 0.89 [-10.85]( 0.63)
64-clients 1.00 [ 0.00]( 1.58) 0.90 [-10.11]( 1.18)
128-clients 1.00 [ 0.00]( 0.87) 0.89 [-10.94]( 1.11)
256-clients 1.00 [ 0.00]( 4.77) 1.00 [ -0.16]( 3.45)
512-clients 1.00 [ 0.00](56.16) 1.02 [ 2.10](56.05)

Since a simple revert is not a viable solution, we delved deeper into
the changes in the execution path introduced by the
call_function_single_prep_ipi() check.


Effects of call_function_single_prep_ipi()
==========================================

To pull a TIF_POLLING thread out of idle to process an IPI, the sender
sets the TIF_NEED_RESCHED bit in the idle task's thread info in
call_function_single_prep_ipi() and avoids sending an actual IPI to the
target. As a result, the scheduler expects a task to be enqueued when
exiting the idle path. This is not the case with non-polling idle states
where the idle CPU exits the non-polling idle state to process the
interrupt, and since need_resched() returns false, soon goes back to
idle again.
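
For reference, the upstream helper that implements this handshake looks
roughly as follows (reproduced from kernel/sched/core.c; Patch 3 below
renames and extends it):
--
/* Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set. */
static bool set_nr_if_polling(struct task_struct *p)
{
	struct thread_info *ti = task_thread_info(p);
	typeof(ti->flags) val = READ_ONCE(ti->flags);

	do {
		if (!(val & _TIF_POLLING_NRFLAG))
			return false;
		if (val & _TIF_NEED_RESCHED)
			return true;
	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));

	return true;
}
--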

When the TIF_NEED_RESCHED flag is set, do_idle() will call
schedule_idle(), a large part of which runs with local IRQs disabled. In
the case of ipistorm, when measuring IPI throughput, this large IRQ
disabled section delays processing of IPIs. Further auditing revealed
that in the absence of any
runnable tasks, pick_next_task_fair(), which is called from the
pick_next_task() fast path, will always call newidle_balance() in this
scenario, further increasing the time spent in the IRQ disabled section.

Following is the crude visualization of the problem with relevant
functions expanded:
--
CPU0                                                    CPU1
====                                                    ====
                                                        do_idle() {
                                                            __current_set_polling();
                                                            ...
                                                            monitor(addr);
                                                            if (!need_resched()) {
                                                                mwait() {
                                                                /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    set_nr_if_polling(CPU1) {                                   ...
        /* Realizes CPU1 is polling */                          ...
        try_cmpxchg(addr,                                       ...
                    &val,                                       ...
                    val | _TIF_NEED_RESCHED);                   ...
    } /* Does not send an IPI */                                ...
    ...                                                         } /* mwait exit due to write at addr */
    csd_lock_wait() {                                       }
    /* Waiting */                                           preempt_set_need_resched();
    ...                                                     __current_clr_polling();
    ...                                                     flush_smp_call_function_queue() {
    ...                                                         func();
    } /* End of wait */                                     }
}                                                           schedule_idle() {
                                                                ...
                                                                local_irq_disable();
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    arch_send_call_function_single_ipi(CPU1);                   ...
        \                                                       ...
         \                                                  newidle_balance() {
          \                                                     ...
           /* Delay */                                          ...
            \                                               }
             \                                              ...
              \--------------> local_irq_enable();
                               /* Processes the IPI */
--


Skipping newidle_balance()
==========================

In an earlier attempt to solve the challenge of the long IRQ disabled
section, newidle_balance() was skipped when a CPU waking up from idle
was found to have no runnable tasks, and was transitioning back to
idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
may be viable for CPUs that are idling with tick enabled, where the
newidle_balance() has the opportunity to pull tasks onto the idle CPU.

Vincent [5] pointed out a case where the idle load kick will fail to
run on an idle CPU since the IPI handler launching the ILB will check
for need_resched(). In such cases, the idle CPU relies on
newidle_balance() to pull tasks towards itself.

Using an alternate flag instead of NEED_RESCHED to indicate a pending
IPI was suggested as the correct approach to solve this problem on the
same thread.


Proposed solution: TIF_NOTIFY_IPI
=================================

Instead of reusing the TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU out
of idle, TIF_NOTIFY_IPI is a newly introduced flag that
call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
indicate a pending IPI, which the idle CPU promises to process soon.

On architectures that do not support the TIF_NOTIFY_IPI flag (this
series only adds support for x86 and ARM processors for now),
call_function_single_prep_ipi() will fall back to setting the
TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.
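
Concretely, Patch 3 selects the wake flag at compile time, so
architectures that have not been converted keep the current behaviour:
--
#ifdef _TIF_NOTIFY_IPI
#define _TIF_WAKE_FLAG	_TIF_NOTIFY_IPI
#else
#define _TIF_WAKE_FLAG	_TIF_NEED_RESCHED
#endif
--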

Since the pending IPI handlers are processed before the call to
schedule_idle() in do_idle(), schedule_idle() will only be called if the
IPI handler has woken up / migrated a new task onto the idle CPU and has
set the TIF_NEED_RESCHED bit to indicate the same. This avoids running
into the long IRQ disabled section in schedule_idle() unnecessarily, and
any need_resched() check within a call function will accurately notify
if a task is waiting for CPU time on the CPU handling the IPI.
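
In terms of code, the tail of do_idle() with this series applied becomes
roughly the following sketch (see Patches 2 and 3 for the actual diffs):
--
	current_clr_notify_ipi();
	flush_smp_call_function_queue();

	/*
	 * An IPI that did not queue a task on this CPU will not have
	 * set NEED_RESCHED; skip schedule_idle() and re-enter idle.
	 */
	if (need_resched())
		schedule_idle();
--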

Following is the crude visualization of how the situation changes with
the newly introduced TIF_NOTIFY_IPI flag:
--
CPU0                                                    CPU1
====                                                    ====
                                                        do_idle() {
                                                            __current_set_polling();
                                                            ...
                                                            monitor(addr);
                                                            if (!need_resched_or_ipi()) {
                                                                mwait() {
                                                                /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    set_nr_if_polling(CPU1) {                                   ...
        /* Realizes CPU1 is polling */                          ...
        try_cmpxchg(addr,                                       ...
                    &val,                                       ...
                    val | _TIF_NOTIFY_IPI);                     ...
    } /* Does not send an IPI */                                ...
    ...                                                         } /* mwait exit due to write at addr */
    csd_lock_wait() {                                       }
    /* Waiting */                                           preempt_fold_need_resched(); /* fold if NEED_RESCHED */
    ...                                                     __current_clr_polling();
    ...                                                     flush_smp_call_function_queue() {
    ...                                                         func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
    } /* End of wait */                                     }
}                                                           if (need_resched()) {
                                                                schedule_idle();
smp_call_function_single(CPU1, func, wait = 1) {            }
    ...                                                     ... /* IRQs remain enabled */
    arch_send_call_function_single_ipi(CPU1); ------------> /* Processes the IPI */
--

Results
=======

With TIF_NOTIFY_IPI, the time taken to complete a fixed set of IPIs
using ipistorm improves drastically. Following are the numbers from the
same dual socket 3rd Generation EPYC system (2 x 64C/128T) (boost on,
C2 disabled) running ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + revert 0.81 [19.36]
tip:sched/core + TIF_NOTIFY_IPI 0.20 [80.99]

The same experiment was repeated on a dual socket ARM server (2 x 64C),
which also saw a significant improvement in ipistorm performance:

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]

netperf and tbench results with the patch match the results on tip on
the dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
hackbench, stream, and schbench were also tested, with results from the
patched kernel matching those of the tip.


Future Work
===========

Evaluate the impact of skipping newidle_balance() when the scheduler
tick hits an idle CPU: the call to newidle_balance() will be skipped
with the TIF_NOTIFY_IPI solution, similar to [2]. A counter-argument for
this case is that if the idle state did not set the TIF_POLLING bit, the
idle CPU would not have called schedule_idle() unless the IPI handler
set the NEED_RESCHED bit.


Links
=====

[1] https://github.com/antonblanchard/ipistorm
[2] https://lore.kernel.org/lkml/[email protected]/
[3] https://lore.kernel.org/lkml/[email protected]/
[4] https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/
[5] https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/

This series is based on tip:sched/core at tag "sched-core-2024-01-08".
---
Gautham R. Shenoy (4):
thread_info: Add helpers to test and clear TIF_NOTIFY_IPI
sched: Define a need_resched_or_ipi() helper and use it treewide
sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING
mode of pending IPI
x86/thread_info: Introduce TIF_NOTIFY_IPI flag

K Prateek Nayak (10):
arm/thread_info: Introduce TIF_NOTIFY_IPI flag
alpha/thread_info: Introduce TIF_NOTIFY_IPI flag
openrisc/thread_info: Introduce TIF_NOTIFY_IPI flag
powerpc/thread_info: Introduce TIF_NOTIFY_IPI flag
sh/thread_info: Introduce TIF_NOTIFY_IPI flag
sparc/thread_info: Introduce TIF_NOTIFY_IPI flag
csky/thread_info: Introduce TIF_NOTIFY_IPI flag
parisc/thread_info: Introduce TIF_NOTIFY_IPI flag
nios2/thread_info: Introduce TIF_NOTIFY_IPI flag
microblaze/thread_info: Introduce TIF_NOTIFY_IPI flag
---
Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Nicholas Miehlbradt <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
---
arch/alpha/include/asm/thread_info.h | 2 ++
arch/arm/include/asm/thread_info.h | 3 ++
arch/csky/include/asm/thread_info.h | 2 ++
arch/microblaze/include/asm/thread_info.h | 2 ++
arch/nios2/include/asm/thread_info.h | 2 ++
arch/openrisc/include/asm/thread_info.h | 2 ++
arch/parisc/include/asm/thread_info.h | 2 ++
arch/powerpc/include/asm/thread_info.h | 2 ++
arch/sh/include/asm/thread_info.h | 2 ++
arch/sparc/include/asm/thread_info_32.h | 2 ++
arch/sparc/include/asm/thread_info_64.h | 2 ++
arch/x86/include/asm/mwait.h | 2 +-
arch/x86/include/asm/thread_info.h | 2 ++
arch/x86/kernel/process.c | 2 +-
drivers/cpuidle/cpuidle-powernv.c | 2 +-
drivers/cpuidle/cpuidle-pseries.c | 2 +-
drivers/cpuidle/poll_state.c | 2 +-
include/linux/sched.h | 5 +++
include/linux/sched/idle.h | 12 +++----
include/linux/thread_info.h | 43 +++++++++++++++++++++++
kernel/sched/core.c | 41 ++++++++++++++++-----
kernel/sched/idle.c | 23 ++++++++----
22 files changed, 133 insertions(+), 26 deletions(-)

--
2.34.1



2024-02-20 17:16:43

by K Prateek Nayak

Subject: [RFC PATCH 01/14] thread_info: Add helpers to test and clear TIF_NOTIFY_IPI

From: "Gautham R. Shenoy" <[email protected]>

Introduce the notion of the TIF_NOTIFY_IPI flag. Currently, when a
processor in TIF_POLLING mode needs to process an IPI, the sender sets
the NEED_RESCHED bit in the idle task's thread_info to pull the target
out of idle and avoids sending an interrupt to the idle CPU. When
NEED_RESCHED is set, the scheduler assumes that a new task has been
queued on the idle CPU and calls schedule_idle(); however, an IPI on an
idle CPU will not necessarily end up waking a task on the said CPU. To
avoid spurious calls to schedule_idle() based on that assumption,
TIF_NOTIFY_IPI will be used instead to pull a TIF_POLLING CPU out of
idle.

Since the IPI handlers are processed before the call to schedule_idle(),
schedule_idle() will be called only if one of the handlers has woken up
a new task on the CPU and has set NEED_RESCHED.

Add tif_notify_ipi() and current_clr_notify_ipi() helpers to test if
TIF_NOTIFY_IPI is set in the current task's thread_info, and to clear
it, respectively. These interfaces will be used in subsequent patches as
the TIF_NOTIFY_IPI notion is integrated into the scheduler and the idle
path.
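
For example, the need_resched_or_ipi() helper introduced in the next
patch builds directly on tif_notify_ipi(); on architectures that do not
define TIF_NOTIFY_IPI, tif_notify_ipi() is a constant false and the
helper degenerates to need_resched():
--
static __always_inline bool need_resched_or_ipi(void)
{
	return unlikely(tif_need_resched() || tif_notify_ipi());
}
--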

[ prateek: Split the changes into a separate patch, add commit log ]

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Nicholas Miehlbradt <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
include/linux/thread_info.h | 43 +++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)

diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 9ea0b28068f4..1e10dd8c0227 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -195,6 +195,49 @@ static __always_inline bool tif_need_resched(void)

#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */

+#ifdef TIF_NOTIFY_IPI
+
+#ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
+
+static __always_inline bool tif_notify_ipi(void)
+{
+ return arch_test_bit(TIF_NOTIFY_IPI,
+ (unsigned long *)(&current_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+ arch_clear_bit(TIF_NOTIFY_IPI,
+ (unsigned long *)(&current_thread_info()->flags));
+}
+
+#else
+
+static __always_inline bool tif_notify_ipi(void)
+{
+ return test_bit(TIF_NOTIFY_IPI,
+ (unsigned long *)(&current_thread_info()->flags));
+}
+
+static __always_inline void current_clr_notify_ipi(void)
+{
+ clear_bit(TIF_NOTIFY_IPI,
+ (unsigned long *)(&current_thread_info()->flags));
+}
+
+#endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */
+
+#else /* !TIF_NOTIFY_IPI */
+
+static __always_inline bool tif_notify_ipi(void)
+{
+ return false;
+}
+
+static __always_inline void current_clr_notify_ipi(void) { }
+
+#endif /* TIF_NOTIFY_IPI */
+
#ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
static inline int arch_within_stack_frames(const void * const stack,
const void * const stackend,
--
2.34.1


2024-02-20 17:17:09

by K Prateek Nayak

Subject: [RFC PATCH 02/14] sched: Define a need_resched_or_ipi() helper and use it treewide

From: "Gautham R. Shenoy" <[email protected]>

Currently, TIF_NEED_RESCHED is overloaded to wake up an idle CPU in
TIF_POLLING mode to service an IPI, even if there are no new tasks being
woken up on the said CPU.

In preparation for a proper fix, introduce a new helper
"need_resched_or_ipi()" which returns true if either the
TIF_NEED_RESCHED flag or the TIF_NOTIFY_IPI flag is set. Use this helper
function in place of need_resched() in idle loops where
TIF_POLLING_NRFLAG is set.

To preserve bisectability and avoid unbreakable idle loops, all the
need_resched() checks within TIF_POLLING_NRFLAG sections have been
replaced tree-wide with the need_resched_or_ipi() check.

[ prateek: Replaced some of the missed occurrences of need_resched()
within TIF_POLLING sections with need_resched_or_ipi() ]

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Nicholas Miehlbradt <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/x86/include/asm/mwait.h | 2 +-
arch/x86/kernel/process.c | 2 +-
drivers/cpuidle/cpuidle-powernv.c | 2 +-
drivers/cpuidle/cpuidle-pseries.c | 2 +-
drivers/cpuidle/poll_state.c | 2 +-
include/linux/sched.h | 5 +++++
include/linux/sched/idle.h | 4 ++--
kernel/sched/idle.c | 7 ++++---
8 files changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 778df05f8539..ac1370143407 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -115,7 +115,7 @@ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned lo
}

__monitor((void *)&current_thread_info()->flags, 0, 0);
- if (!need_resched())
+ if (!need_resched_or_ipi())
__mwait(eax, ecx);
}
current_clr_polling();
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b6f4e8399fca..ca6cb7e28cba 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -925,7 +925,7 @@ static __cpuidle void mwait_idle(void)
}

__monitor((void *)&current_thread_info()->flags, 0, 0);
- if (!need_resched()) {
+ if (!need_resched_or_ipi()) {
__sti_mwait(0, 0);
raw_local_irq_disable();
}
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 9ebedd972df0..77c3bb371f56 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -79,7 +79,7 @@ static int snooze_loop(struct cpuidle_device *dev,
dev->poll_time_limit = false;
ppc64_runlatch_off();
HMT_very_low();
- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
/*
* Task has not woken up but we are exiting the polling
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 14db9b7d985d..4f2b490f8b73 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -46,7 +46,7 @@ int snooze_loop(struct cpuidle_device *dev, struct cpuidle_driver *drv,
snooze_exit_time = get_tb() + snooze_timeout;
dev->poll_time_limit = false;

- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
HMT_low();
HMT_very_low();
if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
index 9b6d90a72601..225f37897e45 100644
--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -26,7 +26,7 @@ static int __cpuidle poll_idle(struct cpuidle_device *dev,

limit = cpuidle_poll_time(drv, dev);

- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
cpu_relax();
if (loop_count++ < POLL_IDLE_RELAX_COUNT)
continue;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 03bfe9ab2951..63451f6f25b9 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2258,6 +2258,11 @@ static __always_inline bool need_resched(void)
return unlikely(tif_need_resched());
}

+static __always_inline bool need_resched_or_ipi(void)
+{
+ return unlikely(tif_need_resched() || tif_notify_ipi());
+}
+
/*
* Wrappers for p->thread_info->cpu access. No-op on UP.
*/
diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index 478084f9105e..d739ab810e00 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -63,7 +63,7 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
*/
smp_mb__after_atomic();

- return unlikely(tif_need_resched());
+ return unlikely(need_resched_or_ipi());
}

static __always_inline bool __must_check current_clr_polling_and_test(void)
@@ -76,7 +76,7 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
*/
smp_mb__after_atomic();

- return unlikely(tif_need_resched());
+ return unlikely(need_resched_or_ipi());
}

#else
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 31231925f1ec..fcc734f45a2a 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -57,7 +57,7 @@ static noinline int __cpuidle cpu_idle_poll(void)
ct_cpuidle_enter();

raw_local_irq_enable();
- while (!tif_need_resched() &&
+ while (!need_resched_or_ipi() &&
(cpu_idle_force_poll || tick_check_broadcast_expired()))
cpu_relax();
raw_local_irq_disable();
@@ -153,7 +153,7 @@ static void cpuidle_idle_call(void)
* Check if the idle task must be rescheduled. If it is the
* case, exit the function after re-enabling the local irq.
*/
- if (need_resched()) {
+ if (need_resched_or_ipi()) {
local_irq_enable();
return;
}
@@ -255,7 +255,7 @@ static void do_idle(void)
__current_set_polling();
tick_nohz_idle_enter();

- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
rmb();

/*
@@ -336,6 +336,7 @@ static void do_idle(void)
* RCU relies on this call to be done outside of an RCU read-side
* critical section.
*/
+ current_clr_notify_ipi();
flush_smp_call_function_queue();
schedule_idle();

--
2.34.1


2024-02-20 17:17:37

by K Prateek Nayak

Subject: [RFC PATCH 03/14] sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of pending IPI

From: "Gautham R. Shenoy" <[email protected]>

Problem statement
=================

When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure time taken to
perform a fixed number of smp_call_function_single() calls (with wait
set to 1), an increase in benchmark time was observed between v5.7 and
the upstream kernel (v6.7-rc6).

Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime. Reverting the optimization introduced by the above commit fixed
the regression in ipistorm; however, benchmarks like tbench and netperf
regressed with the revert, supporting the validity of the optimization.

Following are the benchmark results on top of tip:sched/core with the
optimization reverted on a dual socket 3rd Generation AMD EPYC system
(2 x 64C/128T) running with boost enabled and C2 disabled:

(tip:sched/core at tag "sched-core-2024-01-08" for all the testing done
below)

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
cmdline : insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + revert 0.81 [19.36]

==================================================================
Test : tbench
Units : Normalized throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients: tip[pct imp](CV) revert[pct imp](CV)
1 1.00 [ 0.00]( 0.24) 0.91 [ -8.96]( 0.30)
2 1.00 [ 0.00]( 0.25) 0.92 [ -8.20]( 0.97)
4 1.00 [ 0.00]( 0.23) 0.91 [ -9.20]( 1.75)
8 1.00 [ 0.00]( 0.69) 0.91 [ -9.48]( 1.56)
16 1.00 [ 0.00]( 0.66) 0.92 [ -8.49]( 2.43)
32 1.00 [ 0.00]( 0.96) 0.89 [-11.13]( 0.96)
64 1.00 [ 0.00]( 1.06) 0.90 [ -9.72]( 2.49)
128 1.00 [ 0.00]( 0.70) 0.92 [ -8.36]( 1.26)
256 1.00 [ 0.00]( 0.72) 0.97 [ -3.30]( 1.10)
512 1.00 [ 0.00]( 0.42) 0.98 [ -1.73]( 0.37)
1024 1.00 [ 0.00]( 0.28) 0.99 [ -1.39]( 0.43)

==================================================================
Test : netperf
Units : Normalized Throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients: tip[pct imp](CV) revert[pct imp](CV)
1-clients 1.00 [ 0.00]( 0.50) 0.89 [-10.51]( 0.20)
2-clients 1.00 [ 0.00]( 1.16) 0.89 [-11.10]( 0.59)
4-clients 1.00 [ 0.00]( 1.03) 0.89 [-10.68]( 0.38)
8-clients 1.00 [ 0.00]( 0.99) 0.89 [-10.54]( 0.50)
16-clients 1.00 [ 0.00]( 0.87) 0.89 [-10.92]( 0.95)
32-clients 1.00 [ 0.00]( 1.24) 0.89 [-10.85]( 0.63)
64-clients 1.00 [ 0.00]( 1.58) 0.90 [-10.11]( 1.18)
128-clients 1.00 [ 0.00]( 0.87) 0.89 [-10.94]( 1.11)
256-clients 1.00 [ 0.00]( 4.77) 1.00 [ -0.16]( 3.45)
512-clients 1.00 [ 0.00](56.16) 1.02 [ 2.10](56.05)

Since a simple revert is not a viable solution, the changes in the code
path of call_function_single_prep_ipi(), with and without the
optimization, were audited to better understand the effect of the commit.

Effects of call_function_single_prep_ipi()
==========================================

To pull a TIF_POLLING thread out of idle to process an IPI, the sender
sets the TIF_NEED_RESCHED bit in the idle task's thread info in
call_function_single_prep_ipi() and avoids sending an actual IPI to the
target. As a result, the scheduler expects a task to be enqueued when
exiting the idle path. This is not the case with non-polling idle states
where the idle CPU exits the non-polling idle state to process the
interrupt, and since need_resched() returns false, soon goes back to
idle again.

When the TIF_NEED_RESCHED flag is set, do_idle() will call
schedule_idle(), a large part of which runs with local IRQs disabled. In
the case of ipistorm, when measuring IPI throughput, this large IRQ
disabled section delays processing of IPIs. Further auditing revealed
that in the absence of any
runnable tasks, pick_next_task_fair(), which is called from the
pick_next_task() fast path, will always call newidle_balance() in this
scenario, further increasing the time spent in the IRQ disabled section.

Following is the crude visualization of the problem with relevant
functions expanded:
--
CPU0                                                    CPU1
====                                                    ====
                                                        do_idle() {
                                                            __current_set_polling();
                                                            ...
                                                            monitor(addr);
                                                            if (!need_resched()) {
                                                                mwait() {
                                                                /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    set_nr_if_polling(CPU1) {                                   ...
        /* Realizes CPU1 is polling */                          ...
        try_cmpxchg(addr,                                       ...
                    &val,                                       ...
                    val | _TIF_NEED_RESCHED);                   ...
    } /* Does not send an IPI */                                ...
    ...                                                         } /* mwait exit due to write at addr */
    csd_lock_wait() {                                       }
    /* Waiting */                                           preempt_set_need_resched();
    ...                                                     __current_clr_polling();
    ...                                                     flush_smp_call_function_queue() {
    ...                                                         func();
    } /* End of wait */                                     }
}                                                           schedule_idle() {
                                                                ...
                                                                local_irq_disable();
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    arch_send_call_function_single_ipi(CPU1);                   ...
        \                                                       ...
         \                                                  newidle_balance() {
          \                                                     ...
           /* Delay */                                          ...
            \                                               }
             \                                              ...
              \--------------> local_irq_enable();
                               /* Processes the IPI */
--

Skipping newidle_balance()
==========================

In an earlier attempt to solve the challenge of the long IRQ disabled
section, newidle_balance() was skipped when a CPU waking up from idle
was found to have no runnable tasks, and was transitioning back to
idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
may be viable for CPUs that are idling with tick enabled, where the
newidle_balance() has the opportunity to pull tasks onto the idle CPU.

Vincent [5] pointed out a case where the idle load kick will fail to
run on an idle CPU since the IPI handler launching the ILB will check
for need_resched(). In such cases, the idle CPU relies on
newidle_balance() to pull tasks towards itself.

Using an alternate flag instead of NEED_RESCHED to indicate a pending
IPI was suggested as the correct approach to solve this problem on the
same thread.

Proposed solution: TIF_NOTIFY_IPI
=================================

Instead of reusing the TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU out
of idle, TIF_NOTIFY_IPI is a newly introduced flag that
call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
indicate a pending IPI, which the idle CPU promises to process soon.

On architectures that do not support the TIF_NOTIFY_IPI flag,
call_function_single_prep_ipi() will fall back to setting the
TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.

Since the pending IPI handlers are processed before the call to
schedule_idle() in do_idle(), schedule_idle() will only be called if the
IPI handler has woken up / migrated a new task onto the idle CPU and has
set the TIF_NEED_RESCHED bit to indicate the same. This avoids running
into the long IRQ disabled section in schedule_idle() unnecessarily, and
any need_resched() check within a call function will accurately notify
if a task is waiting for CPU time on the CPU handling the IPI.

Following is the crude visualization of how the situation changes with
the newly introduced TIF_NOTIFY_IPI flag:
--
CPU0                                                    CPU1
====                                                    ====
                                                        do_idle() {
                                                            __current_set_polling();
                                                            ...
                                                            monitor(addr);
                                                            if (!need_resched_or_ipi()) {
                                                                mwait() {
                                                                /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    set_nr_if_polling(CPU1) {                                   ...
        /* Realizes CPU1 is polling */                          ...
        try_cmpxchg(addr,                                       ...
                    &val,                                       ...
                    val | _TIF_NOTIFY_IPI);                     ...
    } /* Does not send an IPI */                                ...
    ...                                                         } /* mwait exit due to write at addr */
    csd_lock_wait() {                                       }
    /* Waiting */                                           preempt_fold_need_resched(); /* fold if NEED_RESCHED */
    ...                                                     __current_clr_polling();
    ...                                                     flush_smp_call_function_queue() {
    ...                                                         func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
    } /* End of wait */                                     }
}                                                           if (need_resched()) {
                                                                schedule_idle();
smp_call_function_single(CPU1, func, wait = 1) {            }
    ...                                                     ... /* IRQs remain enabled */
    arch_send_call_function_single_ipi(CPU1); ------------> /* Processes the IPI */
--

Results
=======

With TIF_NOTIFY_IPI, the time taken to complete a fixed set of IPIs
using ipistorm improves drastically. Following are the numbers from the
same dual socket 3rd Generation EPYC system (2 x 64C/128T) (boost on,
C2 disabled) running ipistorm between CPU8 and CPU16:

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + revert 0.81 [19.36]
tip:sched/core + TIF_NOTIFY_IPI 0.20 [80.99]

The same experiment was repeated on a dual socket ARM server (2 x 64C),
which also saw a significant improvement in ipistorm performance:

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]

netperf and tbench results with the patch match the results on tip on
the dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
hackbench, stream, and schbench were also tested, with results from the
patched kernel matching those of the tip.

[ prateek: Split the changes into a separate patch, added the
TIF_NEED_RESCHED optimization in notify_ipi_if_polling(), the
TIF_WAKE_FLAG macro, and the commit log ]

Link: https://github.com/antonblanchard/ipistorm [1]
Link: https://lore.kernel.org/lkml/[email protected]/ [2]
Link: https://lore.kernel.org/lkml/[email protected]/ [3]
Link: https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/ [4]
Link: https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/ [5]
Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Russell King <[email protected]>
Cc: Guo Ren <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Dinh Nguyen <[email protected]>
Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Al Viro <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: Nicholas Miehlbradt <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Josh Poimboeuf <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Rick Edgecombe <[email protected]>
Cc: Tony Battersby <[email protected]>
Cc: Brian Gerst <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
include/linux/sched/idle.h | 8 ++++----
kernel/sched/core.c | 41 ++++++++++++++++++++++++++++++--------
kernel/sched/idle.c | 16 +++++++++++----
3 files changed, 49 insertions(+), 16 deletions(-)

diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index d739ab810e00..c22312087c30 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -58,8 +58,8 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
__current_set_polling();

/*
- * Polling state must be visible before we test NEED_RESCHED,
- * paired by resched_curr()
+ * Polling state must be visible before we test NEED_RESCHED or
+ * NOTIFY_IPI paired by resched_curr() or notify_ipi_if_polling()
*/
smp_mb__after_atomic();

@@ -71,8 +71,8 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
__current_clr_polling();

/*
- * Polling state must be visible before we test NEED_RESCHED,
- * paired by resched_curr()
+ * Polling state must be visible before we test NEED_RESCHED or
+ * NOTIFY_IPI paired by resched_curr() or notify_ipi_if_polling()
*/
smp_mb__after_atomic();

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index db4be4921e7f..6fb6e5b75724 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -909,12 +909,30 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
}

/*
- * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
+ * Certain architectures that support TIF_POLLING_NRFLAG may not support
+ * TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of a pending
+ * IPI. On such architectures, set TIF_NEED_RESCHED instead to wake the
+ * idle CPU and process the pending IPI.
+ */
+#ifdef _TIF_NOTIFY_IPI
+#define _TIF_WAKE_FLAG _TIF_NOTIFY_IPI
+#else
+#define _TIF_WAKE_FLAG _TIF_NEED_RESCHED
+#endif
+
+/*
+ * Atomically set TIF_WAKE_FLAG when TIF_POLLING_NRFLAG is set.
+ *
+ * On architectures that define TIF_NOTIFY_IPI, the same is set in the
+ * idle task's thread_info to pull the CPU out of idle and process
+ * the pending interrupt. On architectures that don't support
+ * TIF_NOTIFY_IPI, TIF_NEED_RESCHED is set instead to notify the
+ * pending IPI.
*
- * If this returns true, then the idle task promises to call
- * sched_ttwu_pending() and reschedule soon.
+ * If this returns true, then the idle task promises to process the
+ * call function soon.
*/
-static bool set_nr_if_polling(struct task_struct *p)
+static bool notify_ipi_if_polling(struct task_struct *p)
{
struct thread_info *ti = task_thread_info(p);
typeof(ti->flags) val = READ_ONCE(ti->flags);
@@ -922,9 +940,16 @@ static bool set_nr_if_polling(struct task_struct *p)
do {
if (!(val & _TIF_POLLING_NRFLAG))
return false;
- if (val & _TIF_NEED_RESCHED)
+ /*
+ * If TIF_NEED_RESCHED flag is set in addition to
+ * TIF_POLLING_NRFLAG, the CPU will soon fall out of
+ * idle. Since flush_smp_call_function_queue() is called
+ * soon after the idle exit, setting TIF_WAKE_FLAG is
+ * not necessary.
+ */
+ if (val & (_TIF_NEED_RESCHED | _TIF_WAKE_FLAG))
return true;
- } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
+ } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_WAKE_FLAG));

return true;
}
@@ -937,7 +962,7 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
}

#ifdef CONFIG_SMP
-static inline bool set_nr_if_polling(struct task_struct *p)
+static inline bool notify_ipi_if_polling(struct task_struct *p)
{
return false;
}
@@ -3918,7 +3943,7 @@ void sched_ttwu_pending(void *arg)
*/
bool call_function_single_prep_ipi(int cpu)
{
- if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
+ if (notify_ipi_if_polling(cpu_rq(cpu)->idle)) {
trace_sched_wake_idle_without_ipi(cpu);
return false;
}
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index fcc734f45a2a..b91dc1f62a56 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -315,13 +315,13 @@ static void do_idle(void)
}

/*
- * Since we fell out of the loop above, we know TIF_NEED_RESCHED must
- * be set, propagate it into PREEMPT_NEED_RESCHED.
+ * Since we fell out of the loop above, TIF_NEED_RESCHED may be set.
+ * Propagate it into PREEMPT_NEED_RESCHED.
*
* This is required because for polling idle loops we will not have had
* an IPI to fold the state for us.
*/
- preempt_set_need_resched();
+ preempt_fold_need_resched();
tick_nohz_idle_exit();
__current_clr_polling();

@@ -338,7 +338,15 @@ static void do_idle(void)
*/
current_clr_notify_ipi();
flush_smp_call_function_queue();
- schedule_idle();
+
+ /*
+ * When NEED_RESCHED is set, the idle thread promises to call
+ * schedule_idle(). schedule_idle() can be skipped when an idle CPU
+ * was woken up to process an IPI that does not queue a task on the
+ * idle CPU, facilitating faster idle re-entry.
+ */
+ if (need_resched())
+ schedule_idle();

if (unlikely(klp_patch_pending(current)))
klp_update_patch_state(current);
--
2.34.1


2024-02-20 17:18:05

by K Prateek Nayak

Subject: [RFC PATCH 04/14] x86/thread_info: Introduce TIF_NOTIFY_IPI flag

From: "Gautham R. Shenoy" <[email protected]>

Add support for TIF_NOTIFY_IPI on x86. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

IPI throughput measured using a modified version of Anton Blanchard's
ipistorm benchmark [1], configured to measure time taken to perform a
fixed number of smp_call_function_single() calls (with wait set to 1),
improves significantly with TIF_NOTIFY_IPI on a dual socket 3rd
Generation EPYC system (2 x 64C/128T) running with boost enabled and
C2 disabled:

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + TIF_NOTIFY_IPI 0.20 [80.99]

tip:sched/core was at tag "sched-core-2024-01-08" at the time of
testing.

[ prateek: Split the changes into a separate patch, commit log ]

Link: https://github.com/antonblanchard/ipistorm [1]
Cc: Thomas Gleixner <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Borislav Petkov <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Tim Chen <[email protected]>
Cc: David Vernet <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Gautham R. Shenoy <[email protected]>
Co-developed-by: K Prateek Nayak <[email protected]>
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/x86/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index d63b02940747..90cdce971397 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -83,6 +83,7 @@ struct thread_info {
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_SINGLESTEP 4 /* reenable singlestep on user return*/
#define TIF_SSBD 5 /* Speculative store bypass disable */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SPEC_IB 9 /* Indirect branch speculation mitigation */
#define TIF_SPEC_L1D_FLUSH 10 /* Flush L1D on mm switches (processes) */
#define TIF_USER_RETURN_NOTIFY 11 /* notify kernel of userspace return */
@@ -106,6 +107,7 @@ struct thread_info {
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_SSBD (1 << TIF_SSBD)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_SPEC_IB (1 << TIF_SPEC_IB)
#define _TIF_SPEC_L1D_FLUSH (1 << TIF_SPEC_L1D_FLUSH)
#define _TIF_USER_RETURN_NOTIFY (1 << TIF_USER_RETURN_NOTIFY)
--
2.34.1


2024-02-20 17:18:25

by K Prateek Nayak

Subject: [RFC PATCH 05/14] arm/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on ARM. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

IPI throughput measured using a modified version of Anton Blanchard's
ipistorm benchmark [1], configured to measure time taken to perform a
fixed number of smp_call_function_single() calls (with wait set to 1),
improves significantly with TIF_NOTIFY_IPI on a dual socket Ampere Server
(2 x 64C) with the benchmark time reducing to less than half for
100000 IPIs between two CPUs. (Note: Only WFI idle mode was left enabled
during testing to reduce variance)

cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1

==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel: time [pct imp]
tip:sched/core 1.00 [0.00]
tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]

tip:sched/core was at tag "sched-core-2024-01-08" at the time of
testing.

Cc: Russell King <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Link: https://github.com/antonblanchard/ipistorm [1]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/arm/include/asm/thread_info.h | 3 +++
1 file changed, 3 insertions(+)

diff --git a/arch/arm/include/asm/thread_info.h b/arch/arm/include/asm/thread_info.h
index 943ffcf069d2..324248d87c9e 100644
--- a/arch/arm/include/asm/thread_info.h
+++ b/arch/arm/include/asm/thread_info.h
@@ -136,6 +136,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
* thread information flags:
* TIF_USEDFPU - FPU was used by this task this quantum (SMP)
* TIF_POLLING_NRFLAG - true if poll_idle() is polling TIF_NEED_RESCHED
+ * or TIF_NOTIFY_IPI
*
* Any bit in the range of 0..15 will cause do_work_pending() to be invoked.
*/
@@ -144,6 +145,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
#define TIF_NOTIFY_RESUME 2 /* callback before returning to user */
#define TIF_UPROBE 3 /* breakpointed or singlestepping */
#define TIF_NOTIFY_SIGNAL 4 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 5 /* pending IPI on TIF_POLLING idle CPU */

#define TIF_USING_IWMMXT 17
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
@@ -164,6 +166,7 @@ extern int vfp_restore_user_hwstate(struct user_vfp *,
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
#define _TIF_USING_IWMMXT (1 << TIF_USING_IWMMXT)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)

/* Checks for any syscall work in entry-common.S */
#define _TIF_SYSCALL_WORK (_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT | \
--
2.34.1


2024-02-20 17:18:56

by K Prateek Nayak

Subject: [RFC PATCH 06/14] alpha/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on Alpha. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Richard Henderson <[email protected]>
Cc: Ivan Kokshaysky <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Al Viro <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/alpha/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/alpha/include/asm/thread_info.h b/arch/alpha/include/asm/thread_info.h
index 4a4d00b37986..8c17855c85c7 100644
--- a/arch/alpha/include/asm/thread_info.h
+++ b/arch/alpha/include/asm/thread_info.h
@@ -64,6 +64,7 @@ register unsigned long *current_stack_pointer __asm__ ("$30");
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_SYSCALL_AUDIT 4 /* syscall audit active */
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_DIE_IF_KERNEL 9 /* dik recursion lock */
#define TIF_MEMDIE 13 /* is terminating due to OOM killer */
#define TIF_POLLING_NRFLAG 14 /* idle is polling for TIF_NEED_RESCHED */
@@ -74,6 +75,7 @@ register unsigned long *current_stack_pointer __asm__ ("$30");
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_NOTIFY_SIGNAL (1<<TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)

/* Work to do on interrupt/exception return. */
--
2.34.1


2024-02-20 17:19:35

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 07/14] openrisc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on OpenRISC. With TIF_NOTIFY_IPI, a
sender sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Jonas Bonn <[email protected]>
Cc: Stefan Kristiansson <[email protected]>
Cc: Stafford Horne <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/openrisc/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/openrisc/include/asm/thread_info.h b/arch/openrisc/include/asm/thread_info.h
index 4af3049c34c2..6a386703bc43 100644
--- a/arch/openrisc/include/asm/thread_info.h
+++ b/arch/openrisc/include/asm/thread_info.h
@@ -92,6 +92,7 @@ register struct thread_info *current_thread_info_reg asm("r10");
* mode
*/
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SYSCALL_TRACEPOINT 8 /* for ftrace syscall instrumentation */
#define TIF_RESTORE_SIGMASK 9
#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling TIF_NEED_RESCHED */
@@ -104,6 +105,7 @@ register struct thread_info *current_thread_info_reg asm("r10");
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
#define _TIF_NOTIFY_SIGNAL (1<<TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)


--
2.34.1


2024-02-20 17:20:24

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 08/14] powerpc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on PowerPC. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Michael Ellerman <[email protected]>
Cc: Nicholas Piggin <[email protected]>
Cc: Christophe Leroy <[email protected]>
Cc: "Aneesh Kumar K.V" <[email protected]>
Cc: "Naveen N. Rao" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Donnellan <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: Nicholas Miehlbradt <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/powerpc/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index bf5dde1a4114..b48db55192e0 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -103,6 +103,7 @@ void arch_setup_new_exec(void);
#define TIF_PATCH_PENDING 6 /* pending live patching update */
#define TIF_SYSCALL_AUDIT 7 /* syscall auditing active */
#define TIF_SINGLESTEP 8 /* singlestepping active */
+#define TIF_NOTIFY_IPI 9 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SECCOMP 10 /* secure computing */
#define TIF_RESTOREALL 11 /* Restore all regs (implies NOERROR) */
#define TIF_NOERROR 12 /* Force successful syscall return */
@@ -129,6 +130,7 @@ void arch_setup_new_exec(void);
#define _TIF_PATCH_PENDING (1<<TIF_PATCH_PENDING)
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_SINGLESTEP (1<<TIF_SINGLESTEP)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_SECCOMP (1<<TIF_SECCOMP)
#define _TIF_RESTOREALL (1<<TIF_RESTOREALL)
#define _TIF_NOERROR (1<<TIF_NOERROR)
--
2.34.1


2024-02-20 17:20:33

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 09/14] sh/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on SuperH. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Yoshinori Sato <[email protected]>
Cc: Rich Felker <[email protected]>
Cc: John Paul Adrian Glaubitz <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: Andrew Morton <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/sh/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/sh/include/asm/thread_info.h b/arch/sh/include/asm/thread_info.h
index 9f19a682d315..8cd9d2a5361b 100644
--- a/arch/sh/include/asm/thread_info.h
+++ b/arch/sh/include/asm/thread_info.h
@@ -106,6 +106,7 @@ extern void init_thread_xstate(void);
#define TIF_SECCOMP 6 /* secure computing */
#define TIF_NOTIFY_RESUME 7 /* callback before returning to user */
#define TIF_SYSCALL_TRACEPOINT 8 /* for ftrace syscall instrumentation */
+#define TIF_NOTIFY_IPI 9 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_POLLING_NRFLAG 17 /* true if poll_idle() is polling TIF_NEED_RESCHED */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */

@@ -118,6 +119,7 @@ extern void init_thread_xstate(void);
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_NOTIFY_RESUME (1 << TIF_NOTIFY_RESUME)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)

/* work to do in syscall trace */
--
2.34.1


2024-02-20 17:20:56

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 10/14] sparc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on SPARC. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: "David S. Miller" <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/sparc/include/asm/thread_info_32.h | 2 ++
arch/sparc/include/asm/thread_info_64.h | 2 ++
2 files changed, 4 insertions(+)

diff --git a/arch/sparc/include/asm/thread_info_32.h b/arch/sparc/include/asm/thread_info_32.h
index 45b4955b253f..f538ede526d1 100644
--- a/arch/sparc/include/asm/thread_info_32.h
+++ b/arch/sparc/include/asm/thread_info_32.h
@@ -105,6 +105,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
#define TIF_NEED_RESCHED 3 /* rescheduling necessary */
#define TIF_RESTORE_SIGMASK 4 /* restore signal mask in do_signal() */
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 6 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_USEDFPU 8 /* FPU was used by this task
* this quantum (SMP) */
#define TIF_POLLING_NRFLAG 9 /* true if poll_idle() is polling
@@ -117,6 +118,7 @@ register struct thread_info *current_thread_info_reg asm("g6");
#define _TIF_SIGPENDING (1<<TIF_SIGPENDING)
#define _TIF_NEED_RESCHED (1<<TIF_NEED_RESCHED)
#define _TIF_NOTIFY_SIGNAL (1<<TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)
#define _TIF_USEDFPU (1<<TIF_USEDFPU)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)

diff --git a/arch/sparc/include/asm/thread_info_64.h b/arch/sparc/include/asm/thread_info_64.h
index 1a44372e2bc0..3558101ccdd1 100644
--- a/arch/sparc/include/asm/thread_info_64.h
+++ b/arch/sparc/include/asm/thread_info_64.h
@@ -194,6 +194,7 @@ extern struct thread_info *current_thread_info(void);
#define TIF_MCDPER 12 /* Precise MCD exception */
#define TIF_MEMDIE 13 /* is terminating due to OOM killer */
#define TIF_POLLING_NRFLAG 14
+#define TIF_NOTIFY_IPI 15 /* Pending IPI on TIF_POLLING idle CPU */

#define _TIF_SYSCALL_TRACE (1<<TIF_SYSCALL_TRACE)
#define _TIF_NOTIFY_RESUME (1<<TIF_NOTIFY_RESUME)
@@ -208,6 +209,7 @@ extern struct thread_info *current_thread_info(void);
#define _TIF_SYSCALL_AUDIT (1<<TIF_SYSCALL_AUDIT)
#define _TIF_SYSCALL_TRACEPOINT (1<<TIF_SYSCALL_TRACEPOINT)
#define _TIF_POLLING_NRFLAG (1<<TIF_POLLING_NRFLAG)
+#define _TIF_NOTIFY_IPI (1<<TIF_NOTIFY_IPI)

#define _TIF_USER_WORK_MASK ((0xff << TI_FLAG_WSAVED_SHIFT) | \
_TIF_DO_NOTIFY_RESUME_MASK | \
--
2.34.1


2024-02-20 17:23:58

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 11/14] csky/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on C-SKY. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Guo Ren <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/csky/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/csky/include/asm/thread_info.h b/arch/csky/include/asm/thread_info.h
index b5ed788f0c68..9bc7a037c476 100644
--- a/arch/csky/include/asm/thread_info.h
+++ b/arch/csky/include/asm/thread_info.h
@@ -61,6 +61,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SYSCALL_TRACEPOINT 5 /* syscall tracepoint instrumentation */
#define TIF_SYSCALL_AUDIT 6 /* syscall auditing */
#define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
+#define TIF_NOTIFY_IPI 8 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_POLLING_NRFLAG 16 /* poll_idle() is TIF_NEED_RESCHED */
#define TIF_MEMDIE 18 /* is terminating due to OOM killer */
#define TIF_RESTORE_SIGMASK 20 /* restore signal mask in do_signal() */
@@ -73,6 +74,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_UPROBE (1 << TIF_UPROBE)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_MEMDIE (1 << TIF_MEMDIE)
--
2.34.1


2024-02-20 17:24:21

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 12/14] parisc/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on PA-RISC. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: "James E.J. Bottomley" <[email protected]>
Cc: Helge Deller <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/parisc/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/parisc/include/asm/thread_info.h b/arch/parisc/include/asm/thread_info.h
index 1a58795f785c..35f1deeb8f36 100644
--- a/arch/parisc/include/asm/thread_info.h
+++ b/arch/parisc/include/asm/thread_info.h
@@ -52,6 +52,7 @@ struct thread_info {
#define TIF_SECCOMP 11 /* secure computing */
#define TIF_SYSCALL_TRACEPOINT 12 /* syscall tracepoint instrumentation */
#define TIF_NONBLOCK_WARNING 13 /* warned about wrong O_NONBLOCK usage */
+#define TIF_NOTIFY_IPI 14 /* Pending IPI on TIF_POLLING idle CPU */

#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
@@ -65,6 +66,7 @@ struct thread_info {
#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)

#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
_TIF_NEED_RESCHED | _TIF_NOTIFY_SIGNAL)
--
2.34.1


2024-02-20 17:24:43

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 13/14] nios2/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on Nios II. With TIF_NOTIFY_IPI, a sender
sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Dinh Nguyen <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: Al Viro <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/nios2/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/nios2/include/asm/thread_info.h b/arch/nios2/include/asm/thread_info.h
index 5abac9893b32..24882fd5ad11 100644
--- a/arch/nios2/include/asm/thread_info.h
+++ b/arch/nios2/include/asm/thread_info.h
@@ -79,6 +79,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SYSCALL_AUDIT 6 /* syscall auditing active */
#define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
#define TIF_RESTORE_SIGMASK 9 /* restore signal mask in do_signal() */
+#define TIF_NOTIFY_IPI 10 /* Pending IPI on TIF_POLLING idle CPU */

#define TIF_POLLING_NRFLAG 16 /* true if poll_idle() is polling
TIF_NEED_RESCHED */
@@ -91,6 +92,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
#define _TIF_RESTORE_SIGMASK (1 << TIF_RESTORE_SIGMASK)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)

/* work to do on interrupt/exception return */
--
2.34.1


2024-02-20 17:25:07

by K Prateek Nayak

[permalink] [raw]
Subject: [RFC PATCH 14/14] microblaze/thread_info: Introduce TIF_NOTIFY_IPI flag

Add support for TIF_NOTIFY_IPI on MicroBlaze. With TIF_NOTIFY_IPI, a
sender sending an IPI to an idle CPU in TIF_POLLING mode will set the
TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
avoids spurious calls to schedule_idle() in cases where an IPI does not
necessarily wake up a task on the idle CPU.

Cc: Michal Simek <[email protected]>
Cc: "Rafael J. Wysocki" <[email protected]>
Cc: Daniel Lezcano <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Peter Zijlstra <[email protected]>
Cc: Juri Lelli <[email protected]>
Cc: Vincent Guittot <[email protected]>
Cc: Dietmar Eggemann <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Ben Segall <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Daniel Bristot de Oliveira <[email protected]>
Cc: Valentin Schneider <[email protected]>
Cc: K Prateek Nayak <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: K Prateek Nayak <[email protected]>
---
arch/microblaze/include/asm/thread_info.h | 2 ++
1 file changed, 2 insertions(+)

diff --git a/arch/microblaze/include/asm/thread_info.h b/arch/microblaze/include/asm/thread_info.h
index a0ddd2a36fb9..953a334bb4fe 100644
--- a/arch/microblaze/include/asm/thread_info.h
+++ b/arch/microblaze/include/asm/thread_info.h
@@ -103,6 +103,7 @@ static inline struct thread_info *current_thread_info(void)
#define TIF_SINGLESTEP 4
#define TIF_NOTIFY_SIGNAL 5 /* signal notifications exist */
#define TIF_MEMDIE 6 /* is terminating due to OOM killer */
+#define TIF_NOTIFY_IPI 7 /* Pending IPI on TIF_POLLING idle CPU */
#define TIF_SYSCALL_AUDIT 9 /* syscall auditing active */
#define TIF_SECCOMP 10 /* secure computing */

@@ -115,6 +116,7 @@ static inline struct thread_info *current_thread_info(void)
#define _TIF_NEED_RESCHED (1 << TIF_NEED_RESCHED)
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
+#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
#define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
#define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
--
2.34.1
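
With the last of the per-arch definitions in place, the consumer side
of the handshake (added by the first three patches of the series,
which are not part of this excerpt) can be sketched as below. The name
need_resched_or_ipi() comes from the cover letter; the bodies, the
helper idle_exit_sketch(), and the exact ordering are illustrative
assumptions rather than the series' actual code:

/* Break out of the polling loop for either a reschedule or an IPI. */
static __always_inline bool need_resched_or_ipi(void)
{
	return unlikely(tif_need_resched() ||
			test_thread_flag(TIF_NOTIFY_IPI));
}

/*
 * Simplified shape of the do_idle() tail, loosely following the CPU1
 * column of the cover letter's visualization: clear the notification,
 * run any pending call-function handlers, and only reschedule if one
 * of them (e.g. sched_ttwu_pending()) actually set TIF_NEED_RESCHED.
 */
static void idle_exit_sketch(void)
{
	clear_thread_flag(TIF_NOTIFY_IPI);
	flush_smp_call_function_queue();
	if (need_resched())
		schedule_idle();
}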


2024-02-23 04:37:54

by Guo Ren

[permalink] [raw]
Subject: Re: [RFC PATCH 11/14] csky/thread_info: Introduce TIF_NOTIFY_IPI flag

On Wed, Feb 21, 2024 at 1:20 AM K Prateek Nayak <kprateek.nayak@amd.com> wrote:
>
> Add support for TIF_NOTIFY_IPI on C-SKY. With TIF_NOTIFY_IPI, a sender
> sending an IPI to an idle CPU in TIF_POLLING mode will set the
> TIF_NOTIFY_IPI flag in the target's idle task's thread_info to pull the
> CPU out of idle, as opposed to setting TIF_NEED_RESCHED previously. This
> avoids spurious calls to schedule_idle() in cases where an IPI does not
> necessarily wake up a task on the idle CPU.
>
> Cc: Guo Ren <[email protected]>
> Cc: "Rafael J. Wysocki" <[email protected]>
> Cc: Daniel Lezcano <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Juri Lelli <[email protected]>
> Cc: Vincent Guittot <[email protected]>
> Cc: Dietmar Eggemann <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Ben Segall <[email protected]>
> Cc: Mel Gorman <[email protected]>
> Cc: Daniel Bristot de Oliveira <[email protected]>
> Cc: Valentin Schneider <[email protected]>
> Cc: K Prateek Nayak <[email protected]>
> Cc: [email protected]
> Cc: [email protected]
> Cc: [email protected]
> Signed-off-by: K Prateek Nayak <[email protected]>
> ---
> arch/csky/include/asm/thread_info.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/csky/include/asm/thread_info.h b/arch/csky/include/asm/thread_info.h
> index b5ed788f0c68..9bc7a037c476 100644
> --- a/arch/csky/include/asm/thread_info.h
> +++ b/arch/csky/include/asm/thread_info.h
> @@ -61,6 +61,7 @@ static inline struct thread_info *current_thread_info(void)
> #define TIF_SYSCALL_TRACEPOINT 5 /* syscall tracepoint instrumentation */
> #define TIF_SYSCALL_AUDIT 6 /* syscall auditing */
> #define TIF_NOTIFY_SIGNAL 7 /* signal notifications exist */
> +#define TIF_NOTIFY_IPI 8 /* Pending IPI on TIF_POLLING idle CPU */
> #define TIF_POLLING_NRFLAG 16 /* poll_idle() is TIF_NEED_RESCHED */
> #define TIF_MEMDIE 18 /* is terminating due to OOM killer */
> #define TIF_RESTORE_SIGMASK 20 /* restore signal mask in do_signal() */
> @@ -73,6 +74,7 @@ static inline struct thread_info *current_thread_info(void)
> #define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
> #define _TIF_SYSCALL_AUDIT (1 << TIF_SYSCALL_AUDIT)
> #define _TIF_NOTIFY_SIGNAL (1 << TIF_NOTIFY_SIGNAL)
> +#define _TIF_NOTIFY_IPI (1 << TIF_NOTIFY_IPI)
> #define _TIF_UPROBE (1 << TIF_UPROBE)
> #define _TIF_POLLING_NRFLAG (1 << TIF_POLLING_NRFLAG)
> #define _TIF_MEMDIE (1 << TIF_MEMDIE)
> --
> 2.34.1
>
LGTM

Acked-by: Guo Ren <[email protected]>

--
Best Regards
Guo Ren

2024-03-06 09:47:51

by Linus Walleij

[permalink] [raw]
Subject: Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

Hi K Prateek,

I trimmed down the recipient list so we don't bounce.

On Tue, Feb 20, 2024 at 6:15 PM K Prateek Nayak <kprateek.nayak@amd.com> wrote:

> Same experiment was repeated on a dual socket ARM server (2 x 64C)
> which too saw a significant improvement in the ipistorm performance:
>
> ==================================================================
> Test : ipistorm (modified)
> Units : Normalized runtime
> Interpretation: Lower is better
> Statistic : AMean
> ==================================================================
> kernel: time [pct imp]
> tip:sched/core 1.00 [0.00]
> tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]

Is that a 64-bit ARM64 system or really an ARM 32-bit 64-core system?

I'm confused because:

> K Prateek Nayak (10):
> arm/thread_info: Introduce TIF_NOTIFY_IPI flag

There is no arm64 patch in the patch series.

I can perhaps test the patches on an ARM32 system, but all I have is a
dual-core, I think.

Yours,
Linus Walleij

2024-03-06 10:00:33

by Vincent Guittot

[permalink] [raw]
Subject: Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

Hi Prateek,

Adding Julia, who could be interested in this patchset. Your patchset
should now trigger the idle load balance instead of the newly-idle load
balance when polling is used. This was one reason for not migrating
tasks to an idle CPU.

On Tue, 20 Feb 2024 at 18:15, K Prateek Nayak <[email protected]> wrote:
>
> Hello everyone,
>
> [..snip..]
>
> Skipping newidle_balance()
> ==========================
>
> In an earlier attempt to solve the challenge of the long IRQ disabled
> section, newidle_balance() was skipped when a CPU waking up from idle
> was found to have no runnable tasks, and was transitioning back to
> idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
> may be viable for CPUs that are idling with tick enabled, where the
> newidle_balance() has the opportunity to pull tasks onto the idle CPU.
>
> Vincent [5] pointed out a case where the idle load kick will fail to
> run on an idle CPU since the IPI handler launching the ILB will check
> for need_resched(). In such cases, the idle CPU relies on
> newidle_balance() to pull tasks towards itself.

Calling newidle_balance() instead of the normal idle load balance
prevents the CPU from pulling tasks from other groups.

>
> Using an alternate flag instead of NEED_RESCHED to indicate a pending
> IPI was suggested as the correct approach to solve this problem on the
> same thread.
>
>
> Proposed solution: TIF_NOTIFY_IPI
> =================================
>
> Instead of reusing TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU out
> of idle, TIF_NOTIFY_IPI is a newly introduced flag that
> call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
> indicate a pending IPI, which the idle CPU promises to process soon.
>
> On architectures that do not support the TIF_NOTIFY_IPI flag (this
> series only adds support for x86 and ARM processors for now),

I'm surprised that you are mentioning ARM processors because they
don't use TIF_POLLING.

> call_function_single_prep_ipi() will fallback to setting
> TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.
>
> Since the pending IPI handlers are processed before the call to
> schedule_idle() in do_idle(), schedule_idle() will only be called if the
> IPI handler have woken / migrated a new task on the idle CPU and has set
> TIF_NEED_RESCHED bit to indicate the same. This avoids running into the
> long IRQ disabled section in schedule_idle() unnecessarily, and any
> need_resched() check within a call function will accurately notify if a
> task is waiting for CPU time on the CPU handling the IPI.
>
> Following is the crude visualization of how the situation changes with
> the newly introduced TIF_NOTIFY_IPI flag:
> --
> CPU0 CPU1
> ==== ====
> do_idle() {
> __current_set_polling();
> ...
> monitor(addr);
> if (!need_resched_or_ipi())
> mwait() {
> /* Waiting */
> smp_call_function_single(CPU1, func, wait = 1) { ...
> ... ...
> set_nr_if_polling(CPU1) { ...
> /* Realizes CPU1 is polling */ ...
> try_cmpxchg(addr, ...
> &val, ...
> val | _TIF_NOTIFY_IPI); ...
> } /* Does not send an IPI */ ...
> ... } /* mwait exit due to write at addr */
> csd_lock_wait() { ...
> /* Waiting */ preempt_fold_need_resched(); /* fold if NEED_RESCHED */
> ... __current_clr_polling();
> ... flush_smp_call_function_queue() {
> ... func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
> } /* End of wait */ }
> } if (need_resched()) {
> schedule_idle();
> smp_call_function_single(CPU1, func, wait = 1) { }
> ... ... /* IRQs remain enabled */
> arch_send_call_function_single_ipi(CPU1); -----------> /* Processes the IPI */
> --
>
> Results
> =======
>
> With the TIF_NOTIFY_IPI, the time taken to complete a fixed set of IPIs
> using ipistorm improves drastically. Following are the numbers from the
> same dual socket 3rd Generation EPYC system (2 x 64C/128T) (boost on,
> C2 disabled) running ipistorm between CPU8 and CPU16:
>
> cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1
>
> ==================================================================
> Test : ipistorm (modified)
> Units : Normalized runtime
> Interpretation: Lower is better
> Statistic : AMean
> ==================================================================
> kernel: time [pct imp]
> tip:sched/core 1.00 [0.00]
> tip:sched/core + revert 0.81 [19.36]
> tip:sched/core + TIF_NOTIFY_IPI 0.20 [80.99]
>
> Same experiment was repeated on a dual socket ARM server (2 x 64C)
> which too saw a significant improvement in the ipistorm performance:

Could you share more details about this ARM server? Could it be an Arm64 one?
I was not expecting any change for arm/arm64, which do not use TIF_POLLING.


>
> ==================================================================
> Test : ipistorm (modified)
> Units : Normalized runtime
> Interpretation: Lower is better
> Statistic : AMean
> ==================================================================
> kernel: time [pct imp]
> tip:sched/core 1.00 [0.00]
> tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]
>
> netperf and tbench results with the patch match the results on tip on
> the dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
> hackbench, stream, and schbench too were tested, with results from the
> patched kernel matching that of the tip.
>
>
> Future Work
> ===========
>
> Evaluate impact of newidle_balance() when scheduler tick hits an idle
> CPU. The call to newidle_balance() will be skipped with the

But it should call the normal idle load balance instead

> TIF_NOTIFY_IPI solution similar to [2]. Counter argument for the case is
> that if the idle state did not set the TIF_POLLING bit, the idle CPU
> would not have called schedule_idle() unless the IPI handler set the
> NEED_RESCHED bit.
>
>
> Links
> =====
>
> [1] https://github.com/antonblanchard/ipistorm
> [2] https://lore.kernel.org/lkml/[email protected]/
> [3] https://lore.kernel.org/lkml/[email protected]/
> [4] https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/
> [5] https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/
>
> This series is based on tip:sched/core at tag "sched-core-2024-01-08".
> [..snip..]
>

2024-03-06 10:06:10

by K Prateek Nayak

[permalink] [raw]
Subject: Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

Hello Linus,

Thank you for taking a look at the patch.

On 3/6/2024 3:14 PM, Linus Walleij wrote:
> Hi K Prateek,
>
> I trimmed down the recipient list so we don't bounce.
>
> On Tue, Feb 20, 2024 at 6:15 PM K Prateek Nayak <[email protected]> wrote:
>
>> Same experiment was repeated on a dual socket ARM server (2 x 64C)
>> which too saw a significant improvement in the ipistorm performance:
>>
>> ==================================================================
>> Test : ipistorm (modified)
>> Units : Normalized runtime
>> Interpretation: Lower is better
>> Statistic : AMean
>> ==================================================================
>> kernel: time [pct imp]
>> tip:sched/core 1.00 [0.00]
>> tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]
>
> Is that a 64-bit ARM64 system or really an ARM 32-bit 64-core system?
>
> I'm confused because:
>
>> K Prateek Nayak (10):
>> arm/thread_info: Introduce TIF_NOTIFY_IPI flag
>
> There is no arm64 patch in the patch series.

When I started out, I assumed arm32 and arm64 shared the same
thread_info file. I basically ran:

$ grep -r "TIF_POLLING_NRFLAG" arch/arm*
arch/arm/include/asm/thread_info.h: * TIF_POLLING_NRFLAG - true if poll_idle() is polling TIF_NEED_RESCHED

and reached here, but now I see "arch/arm64/include/asm/thread_info.h".
The machine I tested on was "aarch64", with "Neoverse-N1" listed as the
"Model name" when running lscpu. This series changes some behavior
around IPI delivery to an idle thread, but without "TIF_POLLING_NRFLAG"
defined, the behavior should remain the same. I have limited access to
the server I tested on. Let me see if I can get some cycles to test
this once again.

>
> I can perhaps test the patches on an ARM32 system, but all I have is a
> dual-core, I think.
>
> Yours,
> Linus Walleij

--
Thanks and Regards,
Prateek

2024-03-06 10:27:13

by K Prateek Nayak

[permalink] [raw]
Subject: Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

Hello Vincent,

Thank you for taking a look at the series.

On 3/6/2024 3:29 PM, Vincent Guittot wrote:
> Hi Prateek,
>
> Adding Julia, who could be interested in this patchset. Your patchset
> should now trigger the idle load balance instead of the newly-idle load
> balance when polling is used. This was one reason for not migrating
> tasks to an idle CPU.

Thank you.

>
> On Tue, 20 Feb 2024 at 18:15, K Prateek Nayak <[email protected]> wrote:
>>
>> Hello everyone,
>>
>> [..snip..]
>>
>>
>> Skipping newidle_balance()
>> ==========================
>>
>> In an earlier attempt to solve the challenge of the long IRQ disabled
>> section, newidle_balance() was skipped when a CPU waking up from idle
>> was found to have no runnable tasks, and was transitioning back to
>> idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
>> may be viable for CPUs that are idling with tick enabled, where the
>> newidle_balance() has the opportunity to pull tasks onto the idle CPU.
>>
>> Vincent [5] pointed out a case where the idle load kick will fail to
>> run on an idle CPU since the IPI handler launching the ILB will check
>> for need_resched(). In such cases, the idle CPU relies on
>> newidle_balance() to pull tasks towards itself.
>
> Calling newidle_balance() instead of the normal idle load balance
> prevents the CPU from pulling tasks from other groups.

Thank you for the correction.

>
>>
>> Using an alternate flag instead of NEED_RESCHED to indicate a pending
>> IPI was suggested as the correct approach to solve this problem on the
>> same thread.
>>
>>
>> Proposed solution: TIF_NOTIFY_IPI
>> =================================
>>
>> Instead of reusing TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU out
>> of idle, TIF_NOTIFY_IPI is a newly introduced flag that
>> call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
>> indicate a pending IPI, which the idle CPU promises to process soon.
>>
>> On architectures that do not support the TIF_NOTIFY_IPI flag (this
>> series only adds support for x86 and ARM processors for now),
>
> I'm surprised that you are mentioning ARM processors because they
> don't use TIF_POLLING.

Yup, I just realised that after Linus Walleij pointed it out on the
thread.

>
>> [..snip..]
>>
>> Same experiment was repeated on a dual socket ARM server (2 x 64C)
>> which too saw a significant improvement in the ipistorm performance:
>
> Could you share more details about this ARM server? Could it be an Arm64 one?
> I was not expecting any change for arm/arm64, which do not use TIF_POLLING.

I looked at the lscpu output and it said it was an "aarch64" server with
model name "Neoverse-N1". Let me go back and test it once again just to
be sure I did not catch a one-off behavior (might be a while since I
have limited access to this machine). I'll also add a debug
WARN_ON_ONCE() to see if "TIF_NOTIFY_IPI" is being set.

>
>
>>
>> ==================================================================
>> Test : ipistorm (modified)
>> Units : Normalized runtime
>> Interpretation: Lower is better
>> Statistic : AMean
>> ==================================================================
>> kernel: time [pct imp]
>> tip:sched/core 1.00 [0.00]
>> tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]
>>
>> netperf and tbench results with the patch match the results on tip on
>> the dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
>> hackbench, stream, and schbench too were tested, with results from the
>> patched kernel matching that of the tip.
>>
>>
>> Future Work
>> ===========
>>
>> Evaluate impact of newidle_balance() when scheduler tick hits an idle
>> CPU. The call to newidle_balance() will be skipped with the
>
> But it should call the normal idle load balance instead

Yup, but the frequency of the normal idle balance will be lower than
the frequency at which a newidle balance is currently being triggered
if the tick is not disabled, right? Please correct me if I'm wrong.

>
>> TIF_NOTIFY_IPI solution similar to [2]. Counter argument for the case is
>> that if the idle state did not set the TIF_POLLING bit, the idle CPU
>> would not have called schedule_idle() unless the IPI handler set the
>> NEED_RESCHED bit.
>>
>>
>> Links
>> =====
>>
>> [1] https://github.com/antonblanchard/ipistorm
>> [2] https://lore.kernel.org/lkml/[email protected]/
>> [3] https://lore.kernel.org/lkml/[email protected]/
>> [4] https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/
>> [5] https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/
>>
>> This series is based on tip:sched/core at tag "sched-core-2024-01-08".
>> [..snip..]
>>

--
Thanks and Regards,
Prateek
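
For reference, the debug check mentioned above could take a shape like
the following. This is purely illustrative (the function name is made
up, and a real sender-side helper would need the atomic cmpxchg dance):
the warning fires the first time the notify-IPI fast path is taken, so
on arm64, which never sets TIF_POLLING_NRFLAG, it should never trigger.

#ifdef TIF_POLLING_NRFLAG
/* Simplified, non-atomic sketch of the sender-side fast path. */
static bool set_notify_ipi_if_polling_debug(struct task_struct *idle)
{
	if (test_tsk_thread_flag(idle, TIF_POLLING_NRFLAG)) {
		WARN_ON_ONCE(1);	/* should never fire on arm64 */
		set_tsk_thread_flag(idle, TIF_NOTIFY_IPI);
		return true;
	}
	return false;
}
#endif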

2024-03-06 10:31:20

by Vincent Guittot

[permalink] [raw]
Subject: Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

On Wed, 6 Mar 2024 at 11:18, K Prateek Nayak <[email protected]> wrote:
>
> Hello Vincent,
>
> Thank you for taking a look at the series.
>
> On 3/6/2024 3:29 PM, Vincent Guittot wrote:
> > Hi Prateek,
> >
> > Adding Julia, who could be interested in this patchset. Your patchset
> > should now trigger the idle load balance instead of the newly-idle load
> > balance when polling is used. This was one reason for not migrating
> > tasks to an idle CPU.
>
> Thank you.
>
> >
> > On Tue, 20 Feb 2024 at 18:15, K Prateek Nayak <[email protected]> wrote:
> >>
> >> Hello everyone,
> >>
> >> [..snip..]
> >>
> >>
> >> Skipping newidle_balance()
> >> ==========================
> >>
> >> In an earlier attempt to solve the challenge of the long IRQ disabled
> >> section, newidle_balance() was skipped when a CPU waking up from idle
> >> was found to have no runnable tasks, and was transitioning back to
> >> idle [2]. Tim [3] and David [4] had pointed out that newidle_balance()
> >> may be viable for CPUs that are idling with tick enabled, where the
> >> newidle_balance() has the opportunity to pull tasks onto the idle CPU.
> >>
> >> Vincent [5] pointed out a case where the idle load kick will fail to
> >> run on an idle CPU since the IPI handler launching the ILB will check
> >> for need_resched(). In such cases, the idle CPU relies on
> >> newidle_balance() to pull tasks towards itself.
> >
> > Calling newidle_balance() instead of the normal idle load balance
> > prevents the CPU from pulling tasks from other groups.
>
> Thank you for the correction.
>
> >
> >>
> >> Using an alternate flag instead of NEED_RESCHED to indicate a pending
> >> IPI was suggested as the correct approach to solve this problem on the
> >> same thread.
> >>
> >>
> >> Proposed solution: TIF_NOTIFY_IPI
> >> =================================
> >>
> >> Instead of reusing TIF_NEED_RESCHED bit to pull an TIF_POLLING CPU out
> >> of idle, TIF_NOTIFY_IPI is a newly introduced flag that
> >> call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
> >> indicate a pending IPI, which the idle CPU promises to process soon.
> >>
> >> On architectures that do not support the TIF_NOTIFY_IPI flag (this
> >> series only adds support for x86 and ARM processors for now),
> >
> > I'm surprised that you are mentioning ARM processors because they
> > don't use TIF_POLLING.
>
> Yup I just realised that after Linus Walleij pointed it out on the
> thread.
>
> >
> >> call_function_single_prep_ipi() will fallback to setting
> >> TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.
> >>
> >> Since the pending IPI handlers are processed before the call to
> >> schedule_idle() in do_idle(), schedule_idle() will only be called if the
> >> IPI handler have woken / migrated a new task on the idle CPU and has set
> >> TIF_NEED_RESCHED bit to indicate the same. This avoids running into the
> >> long IRQ disabled section in schedule_idle() unnecessarily, and any
> >> need_resched() check within a call function will accurately notify if a
> >> task is waiting for CPU time on the CPU handling the IPI.
> >>
> >> Following is the crude visualization of how the situation changes with
> >> the newly introduced TIF_NOTIFY_IPI flag:
> >> --
> >> CPU0 CPU1
> >> ==== ====
> >> do_idle() {
> >> __current_set_polling();
> >> ...
> >> monitor(addr);
> >> if (!need_resched_or_ipi())
> >> mwait() {
> >> /* Waiting */
> >> smp_call_function_single(CPU1, func, wait = 1) { ...
> >> ... ...
> >> set_nr_if_polling(CPU1) { ...
> >> /* Realizes CPU1 is polling */ ...
> >> try_cmpxchg(addr, ...
> >> &val, ...
> >> val | _TIF_NOTIFY_IPI); ...
> >> } /* Does not send an IPI */ ...
> >> ... } /* mwait exit due to write at addr */
> >> csd_lock_wait() { ...
> >> /* Waiting */ preempt_fold_need_resched(); /* fold if NEED_RESCHED */
> >> ... __current_clr_polling();
> >> ... flush_smp_call_function_queue() {
> >> ... func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
> >> } /* End of wait */ }
> >> } if (need_resched()) {
> >> schedule_idle();
> >> smp_call_function_single(CPU1, func, wait = 1) { }
> >> ... ... /* IRQs remain enabled */
> >> arch_send_call_function_single_ipi(CPU1); -----------> /* Processes the IPI */
> >> --
> >>
> >> Results
> >> =======
> >>
> >> With the TIF_NOTIFY_IPI, the time taken to complete a fixed set of IPIs
> >> using ipistorm improves drastically. Following are the numbers from the
> >> same dual socket 3rd Generation EPYC system (2 x 64C/128T) (boost on,
> >> C2 disabled) running ipistorm between CPU8 and CPU16:
> >>
> >> cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1
> >>
> >> ==================================================================
> >> Test : ipistorm (modified)
> >> Units : Normalized runtime
> >> Interpretation: Lower is better
> >> Statistic : AMean
> >> ==================================================================
> >> kernel: time [pct imp]
> >> tip:sched/core 1.00 [0.00]
> >> tip:sched/core + revert 0.81 [19.36]
> >> tip:sched/core + TIF_NOTIFY_IPI 0.20 [80.99]
> >>
> >> The same experiment was repeated on a dual socket ARM server (2 x 64C),
> >> which also saw a significant improvement in ipistorm performance:
> >
> > Could you share more details about this ARM server ? Could it be an Arm64 one ?
> > I was not expecting any change for arm/arm64 which are not using TIF_POLLING
>
> I looked at the lscpu output and it said it was an "aarch64" server with
> model name "Neoverse-N1". Let me go back and test it once again just to
> be sure I did not catch a one-off behavior (might be a while since I
> have limited access to this machine). I'll also add a debug
> WARN_ON_ONCE() to see if TIF_NOTIFY_IPI is being set.
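> Something minimal like the following is what I have in mind (only a
> sketch; the exact placement in the idle exit path is TBD):
>
> --
> /*
>  * Debug: arm64 is not expected to use TIF_POLLING based idle, so
>  * TIF_NOTIFY_IPI should never be observed set here.
>  */
> WARN_ON_ONCE(IS_ENABLED(CONFIG_ARM64) &&
>              test_thread_flag(TIF_NOTIFY_IPI));
> --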
>
> >
> >
> >>
> >> ==================================================================
> >> Test : ipistorm (modified)
> >> Units : Normalized runtime
> >> Interpretation: Lower is better
> >> Statistic : AMean
> >> ==================================================================
> >> kernel: time [pct imp]
> >> tip:sched/core 1.00 [0.00]
> >> tip:sched/core + TIF_NOTIFY_IPI 0.41 [59.29]
> >>
> >> netperf and tbench results with the patch match the results on tip on
> >> the dual socket 3rd Generation AMD system (2 x 64C/128T). Additionally,
> >> hackbench, stream, and schbench were also tested, with results from the
> >> patched kernel matching those of tip.
> >>
> >>
> >> Future Work
> >> ===========
> >>
> >> Evaluate the impact of newidle_balance() when the scheduler tick hits an
> >> idle CPU. The call to newidle_balance() will be skipped with the
> >
> > But it should call the normal idle load balance instead
>
> Yup, but the frequency of the normal idle balance will be lower than the
> frequency at which a newidle balance is currently being triggered if the
> tick is not disabled, right? Please correct me if I'm wrong.

No, it should be the same. When a CPU is idle, we do some periodic idle
load balance, either directly on the CPU if it has not stopped its tick,
or we wake up one idle CPU to run the idle load balance on behalf of all
the idle CPUs that have stopped their tick.

The newidle balance happens when the CPU becomes idle, i.e. when the
current thread is going to sleep and before the idle thread becomes
current.

The newidle balance has some restrictions compared to the idle load balance.
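
To illustrate, the two entry points look roughly like this (a much
simplified sketch of the upstream flow, not the exact code):

--
/* Periodic idle load balance: tick driven, via SCHED_SOFTIRQ. */
void scheduler_tick(void)
{
	...
	/*
	 * Raises SCHED_SOFTIRQ locally, or kicks one nohz idle CPU to
	 * balance on behalf of all the tickless idle CPUs.
	 */
	trigger_load_balance(rq);
}

/* Newidle balance: runs once, while the CPU is becoming idle. */
struct task_struct *pick_next_task_fair(struct rq *rq, ...)
{
	if (!rq->cfs.nr_running)
		/*
		 * Restricted: bails out early based on avg_idle and
		 * migration cost, and cannot pull from other groups the
		 * way the periodic idle load balance can.
		 */
		newidle_balance(rq, rf);
	...
}
--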

>
> >
> >> TIF_NOTIFY_IPI solution similar to [2]. A counter argument for this case
> >> is that if the idle state did not set the TIF_POLLING bit, the idle CPU
> >> would not have called schedule_idle() unless the IPI handler set the
> >> NEED_RESCHED bit.
> >>
> >>
> >> Links
> >> =====
> >>
> >> [1] https://github.com/antonblanchard/ipistorm
> >> [2] https://lore.kernel.org/lkml/[email protected]/
> >> [3] https://lore.kernel.org/lkml/[email protected]/
> >> [4] https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/
> >> [5] https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/
> >>
> >> This series is based on tip:sched/core at tag "sched-core-2024-01-08".
> >> [..snip..]
> >>
>
> --
> Thanks and Regards,
> Prateek

2024-03-07 19:58:26

by Julia Lawall

[permalink] [raw]
Subject: Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag



On Wed, 6 Mar 2024, Vincent Guittot wrote:

> Hi Prateek,
>
> Adding Julia who could be interested in this patchset. Your patchset
> should trigger idle load balance instead of newly idle load balance
> now when the polling is used. This was one reason for not migrating
> task in idle CPU

My situation is roughly as follows:

The machine is an Intel 6130 with two sockets and 32 hardware threads
(subsequently referred to as cores) per socket. The test is bt.B of the
OpenMP version of the NAS benchmark suite. Initially there is one
thread per core. NUMA balancing occurs, resulting in a move, and thus 31
threads on one socket and 33 on the other.

Load balancing should result in the idle core pulling one of the threads
from the other socket. But that doesn't happen in normal load balancing,
because all 33 threads on the overloaded socket are considered to have a
preference for that socket. Active balancing could pull a thread, but it
is not triggered because the idle core is seen as being newly idle.

The question is then why a core that has been idle for up to multiple
seconds is continually seen as newly idle. Every 4ms, a scheduler tick
submits some work to try to load balance. This submission process
previously broke out of the idle loop due to a need_resched, hence the
same issue as involved in this patch series. The need_resched caused
invocation of schedule, which would then see that there was no task to
pick, making the core be considered to be newly idle. The classification
as newly idle doesn't take into account whether any task was running prior
to the call to schedule.

The load balancing work that was submitted every 4ms is also a NOP due to
a test for need_resched.
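
(The test in question, roughly; simplified from kernel/sched/core.c and
kernel/sched/fair.c, not the exact code:)

--
static void nohz_csd_func(void *info)      /* handler for the ILB kick */
{
	...
	/* The kick is dropped when need_resched() is set. */
	if (rq->idle_balance && !need_resched())
		raise_softirq_irqoff(SCHED_SOFTIRQ);
}

static void _nohz_idle_balance(...)        /* the deferred balance work */
{
	for_each_cpu_wrap(balance_cpu, nohz.idle_cpus_mask, this_cpu + 1) {
		/* Stop balancing for others if this CPU got work to do. */
		if (need_resched())
			goto abort;
		...
	}
}
--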

This patch series no longer makes need_resched the only way out of the
idle loop. Without the need_resched, the load balancing work that is
submitted every 4ms can actually try to do load balancing. The core is
not newly idle, so active balancing could in principle occur. But now
nothing happens because the work is run by ksoftirqd. The presence of
ksoftirqd on the idle core means that the core is no longer idle. Thus
there is no more need for load balancing.

So this patch series in itself doesn't solve the problem. I did 500 runs
with this patch series and 500 runs with the Linux kernel that this patch
series builds on, and there is essentially no difference in the
performance.

julia


>
> On Tue, 20 Feb 2024 at 18:15, K Prateek Nayak <[email protected]> wrote:
> >
> > [..snip..]

2024-03-15 06:38:53

by K Prateek Nayak

[permalink] [raw]
Subject: Re: [RFC PATCH 00/14] Introducing TIF_NOTIFY_IPI flag

(Trimming the cc list to only include scheduler folks)

Hello Julia,

On 3/8/2024 1:26 AM, Julia Lawall wrote:
>
>
> On Wed, 6 Mar 2024, Vincent Guittot wrote:
>
>> Hi Prateek,
>>
>> Adding Julia who could be interested in this patchset. Your patchset
>> should trigger idle load balance instead of newly idle load balance
>> now when the polling is used. This was one reason for not migrating
>> task in idle CPU
>
> My situation is roughly as follows:
>
> The machine is an Intel 6130 with two sockets and 32 hardware threads
> (subsequently referred to as cores) per socket. The test is bt.B of the
> OpenMP version of the NAS benchmark suite. Initially there is one
> thread per core. NUMA balancing occurs, resulting in a move, and thus 31
> threads on one socket and 33 on the other.
>
> Load balancing should result in the idle core pulling one of the threads
> from the other socket. But that doesn't happen in normal load balancing,
> because all 33 threads on the overloaded socket are considered to have a
> preference for that socket. Active balancing could pull a thread, but it
> is not triggered because the idle core is seen as being newly idle.
>
> The question is then why a core that has been idle for up to multiple
> seconds is continually seen as newly idle. Every 4ms, a scheduler tick
> submits some work to try to load balance. This submission process
> previously broke out of the idle loop due to a need_resched, hence the
> same issue as involved in this patch series. The need_resched caused
> invocation of schedule, which would then see that there was no task to
> pick, making the core be considered to be newly idle. The classification
> as newly idle doesn't take into account whether any task was running prior
> to the call to schedule.
>
> The load balancing work that was submitted every 4ms is also a NOP due to
> a test for need_resched.
>
> This patch series no longer makes need_resched the only way out of the
> idle loop. Without the need_resched, the load balancing work that is
> submitted every 4ms can actually try to do load balancing. The core is
> not newly idle, so active balancing could in principle occur. But now
> nothing happens because the work is run by ksoftirqd. The presence of
> ksoftirqd on the idle core means that the core is no longer idle. Thus
> there is no more need for load balancing.

Thinking slightly ahead, assuming that the idle balancer realizes that
ksoftirqd is running for the sake of load balancing itself and discounts
it from consideration, won't the NUMA_IMBALANCE_MIN considered by
adjust_numa_imbalance() continue to keep the 33-31 distribution?

In both task_numa_find_cpu() [1] and calculate_imbalance() [2], even
though the scheduler classifies the local group as "group_has_spare", a
migration across the NUMA domains is still restricted when the imbalance
is <= NUMA_IMBALANCE_MIN (which is 2), I believe.
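
For reference, the restriction boils down to roughly the following check
(simplified from kernel/sched/fair.c at v6.8; see [1] and [2] below for
the exact code):

--
#define NUMA_IMBALANCE_MIN	2

static inline long
adjust_numa_imbalance(int imbalance, int dst_running, int imb_numa_nr)
{
	/* Allow a NUMA imbalance while busy CPUs are below the threshold. */
	if (dst_running > imb_numa_nr)
		return imbalance;

	/*
	 * Allow a small imbalance so a pair of communicating tasks can
	 * stay local: the 33 vs 31 split above falls under this threshold
	 * and is reported as no imbalance at all.
	 */
	if (imbalance <= NUMA_IMBALANCE_MIN)
		return 0;

	return imbalance;
}
--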

Can you try setting NUMA_IMBALANCE_MIN to 0 and checking if the
situation changes with the upstream kernel? I'm hoping the newidle
balance that is triggered on the way to idle without this series is good
enough to pull the task towards the idle CPU.

Please ignore if you've already tried this. I might have missed it
when going through the original thread.

[1] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/sched/fair.c?h=v6.8&id=e8f897f4afef0031fe618a8e94127a0934896aba#n2368
[2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/kernel/sched/fair.c?h=v6.8&id=e8f897f4afef0031fe618a8e94127a0934896aba#n10743

>
> [snip..]
>

--
Thanks and Regards,
Prateek