From: jun qian <[email protected]>
Once __do_softirq() has picked up the pending softirqs, it processes
all of them in the while loop. If handling the pending softirqs takes
more than 2 ms in total, or a single softirq action runs for a long
time, the original code still processes every pending softirq before
it considers waking ksoftirqd. This can cause a relatively large
scheduling delay on the corresponding CPU, which we do not want.

Track the total time spent processing pending softirqs and, if it
exceeds 2 ms, break out of the loop and wake up ksoftirqd to avoid a
large scheduling delay.
Signed-off-by: jun qian <[email protected]>
---
kernel/softirq.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/kernel/softirq.c b/kernel/softirq.c
index c4201b7f..8f47554 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -200,17 +200,15 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
/*
* We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
* but break the loop if need_resched() is set or after 2 ms.
- * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in
- * certain cases, such as stop_machine(), jiffies may cease to
- * increment and so we need the MAX_SOFTIRQ_RESTART limit as
- * well to make sure we eventually return from this method.
+ * In the loop, if the time spent processing softirqs has exceeded 2
+ * milliseconds, we also break out of the loop to wake up ksoftirqd.
*
* These limits have been established via experimentation.
* The two things to balance is latency against fairness -
* we want to handle softirqs as soon as possible, but they
* should not be able to lock up the box.
*/
-#define MAX_SOFTIRQ_TIME msecs_to_jiffies(2)
+#define MAX_SOFTIRQ_TIME_NS 2000000
#define MAX_SOFTIRQ_RESTART 10
#ifdef CONFIG_TRACE_IRQFLAGS
@@ -248,7 +246,7 @@ static inline void lockdep_softirq_end(bool in_hardirq) { }
asmlinkage __visible void __softirq_entry __do_softirq(void)
{
- unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
+ ktime_t end = ktime_get() + MAX_SOFTIRQ_TIME_NS;
unsigned long old_flags = current->flags;
int max_restart = MAX_SOFTIRQ_RESTART;
struct softirq_action *h;
@@ -299,6 +297,13 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
}
h++;
pending >>= softirq_bit;
+
+ /*
+ * the softirq's action has been running for too much time
+ * so it may need to wakeup the ksoftirqd
+ */
+ if (need_resched() && ktime_get() > end)
+ break;
}
if (__this_cpu_read(ksoftirqd) == current)
@@ -307,8 +312,8 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
pending = local_softirq_pending();
if (pending) {
- if (time_before(jiffies, end) && !need_resched() &&
- --max_restart)
+ if (!need_resched() && --max_restart &&
+ ktime_get() <= end)
goto restart;
wakeup_softirqd();
--
1.8.3.1
[email protected] writes:
> From: jun qian <[email protected]>
> + /*
> + * the softirq's action has been running for too much time
> + * so it may need to wakeup the ksoftirqd
> + */
> + if (need_resched() && ktime_get() > end)
> + break;
As per my reply on V2 this is leaking non handled pending bits. If you
do a V4, can you please use sched_clock() instead of ktime_get()?
Thanks,
tglx
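(Illustrative sketch only, not part of the thread: one way the check
could look with sched_clock(), which returns a plain u64 nanosecond
count and can therefore be compared as a delta; it reuses the
MAX_SOFTIRQ_TIME_NS bound from the v3 patch above.)

	u64 start = sched_clock();	/* local CPU clock, in nanoseconds */

	/* later, in the vector loop, after one softirq action has run: */
	if (need_resched() && sched_clock() - start >= MAX_SOFTIRQ_TIME_NS)
		break;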
On Thu, Jul 23, 2020 at 9:41 PM Thomas Gleixner <[email protected]> wrote:
>
> [email protected] writes:
> > From: jun qian <[email protected]>
> > + /*
> > + * the softirq's action has been running for too much time
> > + * so it may need to wakeup the ksoftirqd
> > + */
> > + if (need_resched() && ktime_get() > end)
> > + break;
>
> As per my reply on V2 this is leaking non handled pending bits. If you
> do a V4, can you please use sched_clock() instead of ktime_get()?
>
The unhandled pending bits leak because set_softirq_pending(0) is
called at the start: the per-CPU pending word is cleared after being
copied into a local variable, so if the loop is broken early, the
bits that have not been handled yet are lost. This is my
understanding; I am not sure whether it is correct.
Looking forward to your reply.
Thank you so much.
> Thanks,
>
> tglx
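To make the leak described above concrete, here is a small
stand-alone C illustration (userspace code added for this write-up,
not kernel code): it mimics the ffs()/shift loop of __do_softirq(),
breaks out early, and shows how the leftover bits have to be shifted
back to their original positions and OR-ed into the pending word
(in the kernel that would be something like or_softirq_pending()).

	#include <stdio.h>
	#include <strings.h>		/* ffs() */

	int main(void)
	{
		unsigned int hw_pending = 0x2a;		/* "per-CPU" pending word: bits 1, 3, 5 */
		unsigned int pending = hw_pending;	/* local copy, as in __do_softirq() */
		unsigned int vec_nr = 0, handled = 0;
		int softirq_bit;

		hw_pending = 0;				/* set_softirq_pending(0) */

		while ((softirq_bit = ffs(pending))) {
			vec_nr += softirq_bit - 1;	/* index of the vector being run */
			handled |= 1u << vec_nr;	/* stands in for h->action(h) */

			vec_nr++;			/* total number of bits consumed so far */
			pending >>= softirq_bit;

			if (handled & (1u << 3))	/* pretend the time budget ran out here */
				break;
		}

		/* Without this, bit 5 would be silently dropped on the early break. */
		hw_pending |= pending << vec_nr;

		printf("handled          = %#x\n", handled);	/* 0xa: bits 1 and 3 */
		printf("restored pending = %#x\n", hw_pending);	/* 0x20: bit 5 put back */
		return 0;
	}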