2024-02-08 12:59:21

by Bitao Hu

Subject: [PATCHv6 0/2] *** Detect interrupt storm in softlockup ***

Hi, guys.
I have implemented a low-overhead method for detecting interrupt
storms in softlockups. Please review it; all comments are welcome.

Changes from v5 to v6:

- Use "./scripts/checkpatch.pl --strict" to get a few extra
style nits and fix them.

- Squash patch #3 into patch #1, and wrap the help text to
80 columns.

- Sort the existing headers alphabetically in watchdog.c.

- Drop "softlockup_hardirq_cpus", just read "hardirq_counts"
and see if it's non-NULL.

- Store "nr_irqs" in a local variable.

- Simplify the calculation of "cpu_diff".

Changes from v4 to v5:

- Rearrange variable placement to make the code look neater.

Changes from v3 to v4:

- Rename some variables and functions to make the code logic
more readable.

- Move code around to avoid forward declarations.

- Use a single swap pass rather than a double loop in
tabulate_irq_count().

- Add bounds-check logic, since nr_irqs can grow at runtime.

- Add SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob.

Changes from v2 to v3:

- From Liu Song: use an enum instead of macros for cpu_stats, shorten
the name 'idx_to_stat' to 'stats', add 'get_16bit_precision' instead
of open-coded right shifts, and use 'struct irq_counts'.

- From the kernel test robot: use '__this_cpu_read' and '__this_cpu_write'
instead of accessing the per-cpu array directly, to avoid the warning
'sparse: incorrect type in initializer (different modifiers)'.

Changes from v1 to v2:

- From Douglas, optimize the memory usage of the cpustat arrays. With
the maximum number of CPUs, it is now:
2 * 8192 * 4 + 1 * 8192 * 5 * 4 + 1 * 8192 = 237,568 bytes.

- From Liu Song, refactor the code format and add necessary comments.

- From Douglas, use interrupt counts instead of interrupt time to
determine the cause of softlockup.

- Remove the cmdline parameter added in PATCHv1.

Bitao Hu (2):
watchdog/softlockup: low-overhead detection of interrupt
watchdog/softlockup: report the most frequent interrupts

kernel/watchdog.c | 244 +++++++++++++++++++++++++++++++++++++++++++++-
lib/Kconfig.debug | 13 +++
2 files changed, 253 insertions(+), 4 deletions(-)

--
2.37.1 (Apple Git-137.1)



2024-02-08 13:15:18

by Bitao Hu

Subject: [PATCHv6 1/2] watchdog/softlockup: low-overhead detection of interrupt

The following softlockup is caused by an interrupt storm, but it cannot be
identified from the call tree, because the call tree is just a snapshot
and doesn't fully capture the behavior of the CPU during the soft lockup.
watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
...
Call trace:
__do_softirq+0xa0/0x37c
__irq_exit_rcu+0x108/0x140
irq_exit+0x14/0x20
__handle_domain_irq+0x84/0xe0
gic_handle_irq+0x80/0x108
el0_irq_naked+0x50/0x58

Therefore, I think it is necessary to report CPU utilization during the
softlockup_thresh period (reported once every sample_period, for a total
of 5 reports), like this:
watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
CPU#28 Utilization every 4s during lockup:
#1: 0% system, 0% softirq, 100% hardirq, 0% idle
#2: 0% system, 0% softirq, 100% hardirq, 0% idle
#3: 0% system, 0% softirq, 100% hardirq, 0% idle
#4: 0% system, 0% softirq, 100% hardirq, 0% idle
#5: 0% system, 0% softirq, 100% hardirq, 0% idle
...

This would be helpful in determining whether an interrupt storm has
occurred or in identifying the cause of the softlockup. The criteria for
determination are as follows:
a. If the hardirq utilization is high, then an interrupt storm should be
considered, and the root cause cannot be determined from the call tree.
b. If the softirq utilization is high, then we could analyze the call
tree, but it may not reflect the root cause.
c. If the system utilization is high, then we could analyze the root
cause from the call tree.

The mechanism requires a considerable amount of global storage space
when configured for the maximum number of CPUs. Therefore, add a
SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob that defaults to "yes"
if the max number of CPUs is <= 128.
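
For reference, the per-CPU arrays added here cost 4 * sizeof(u16) +
5 * 4 * sizeof(u8) + sizeof(u8) = 29 bytes per CPU, i.e.
29 * 8192 = 237,568 bytes at the maximum NR_CPUS of 8192.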

Signed-off-by: Bitao Hu <[email protected]>
---
kernel/watchdog.c | 91 +++++++++++++++++++++++++++++++++++++++++++++++
lib/Kconfig.debug | 13 +++++++
2 files changed, 104 insertions(+)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 81a8862295d6..380b60074f1d 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -16,6 +16,8 @@
#include <linux/cpu.h>
#include <linux/nmi.h>
#include <linux/init.h>
+#include <linux/kernel_stat.h>
+#include <linux/math64.h>
#include <linux/module.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
@@ -333,6 +335,92 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);

static void __lockup_detector_cleanup(void);

+#ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
+#define NUM_STATS_GROUPS 5
+enum stats_per_group {
+ STATS_SYSTEM,
+ STATS_SOFTIRQ,
+ STATS_HARDIRQ,
+ STATS_IDLE,
+ NUM_STATS_PER_GROUP,
+};
+
+static const enum cpu_usage_stat tracked_stats[NUM_STATS_PER_GROUP] = {
+ CPUTIME_SYSTEM,
+ CPUTIME_SOFTIRQ,
+ CPUTIME_IRQ,
+ CPUTIME_IDLE,
+};
+
+static DEFINE_PER_CPU(u16, cpustat_old[NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_util[NUM_STATS_GROUPS][NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_tail);
+
+/*
+ * We don't need nanosecond resolution. A granularity of 16ms is
+ * sufficient for our precision, allowing us to use u16 to store
+ * cpustats, which will roll over roughly every ~1000 seconds.
+ * 2^24 ~= 16 * 10^6
+ */
+static u16 get_16bit_precision(u64 data_ns)
+{
+ return data_ns >> 24LL; /* 2^24ns ~= 16.8ms */
+}
+
+static void update_cpustat(void)
+{
+ int i;
+ u8 util;
+ u16 old_stat, new_stat;
+ struct kernel_cpustat kcpustat;
+ u64 *cpustat = kcpustat.cpustat;
+ u8 tail = __this_cpu_read(cpustat_tail);
+ u16 sample_period_16 = get_16bit_precision(sample_period);
+
+ kcpustat_cpu_fetch(&kcpustat, smp_processor_id());
+ for (i = 0; i < NUM_STATS_PER_GROUP; i++) {
+ old_stat = __this_cpu_read(cpustat_old[i]);
+ new_stat = get_16bit_precision(cpustat[tracked_stats[i]]);
+ util = DIV_ROUND_UP(100 * (new_stat - old_stat), sample_period_16);
+ __this_cpu_write(cpustat_util[tail][i], util);
+ __this_cpu_write(cpustat_old[i], new_stat);
+ }
+ __this_cpu_write(cpustat_tail, (tail + 1) % NUM_STATS_GROUPS);
+}
+
+static void print_cpustat(void)
+{
+ int i, group;
+ u8 tail = __this_cpu_read(cpustat_tail);
+ u64 sample_period_second = sample_period;
+
+ do_div(sample_period_second, NSEC_PER_SEC);
+ /*
+ * We do not want the "watchdog: " prefix on every line,
+ * hence we use "printk" instead of "pr_crit".
+ */
+ printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
+ smp_processor_id(), sample_period_second);
+ for (i = 0; i < NUM_STATS_GROUPS; i++) {
+ group = (tail + i) % NUM_STATS_GROUPS;
+ printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
+ "%3u%% hardirq,\t%3u%% idle\n", i + 1,
+ __this_cpu_read(cpustat_util[group][STATS_SYSTEM]),
+ __this_cpu_read(cpustat_util[group][STATS_SOFTIRQ]),
+ __this_cpu_read(cpustat_util[group][STATS_HARDIRQ]),
+ __this_cpu_read(cpustat_util[group][STATS_IDLE]));
+ }
+}
+
+static void report_cpu_status(void)
+{
+ print_cpustat();
+}
+#else
+static inline void update_cpustat(void) { }
+static inline void report_cpu_status(void) { }
+#endif
+
/*
* Hard-lockup warnings should be triggered after just a few seconds. Soft-
* lockups can have false positives under extreme conditions. So we generally
@@ -504,6 +592,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
*/
period_ts = READ_ONCE(*this_cpu_ptr(&watchdog_report_ts));

+ update_cpustat();
+
/* Reset the interval when touched by known problematic code. */
if (period_ts == SOFTLOCKUP_DELAY_REPORT) {
if (unlikely(__this_cpu_read(softlockup_touch_sync))) {
@@ -539,6 +629,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
smp_processor_id(), duration,
current->comm, task_pid_nr(current));
+ report_cpu_status();
print_modules();
print_irqtrace_events(current);
if (regs)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 975a07f9f1cc..49f652674bd8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1029,6 +1029,19 @@ config SOFTLOCKUP_DETECTOR
chance to run. The current stack trace is displayed upon
detection and the system will stay locked up.

+config SOFTLOCKUP_DETECTOR_INTR_STORM
+ bool "Detect Interrupt Storm in Soft Lockups"
+ depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
+ default y if NR_CPUS <= 128
+ help
+ Say Y here to enable the kernel to detect interrupt storm
+ during "soft lockups".
+
+ "soft lockups" can be caused by a variety of reasons. If one is
+ caused by an interrupt storm, then the storming interrupts will not
+ be on the callstack. To detect this case, it is necessary to report
+ the CPU stats and the interrupt counts during the "soft lockups".
+
config BOOTPARAM_SOFTLOCKUP_PANIC
bool "Panic (Reboot) On Soft Lockups"
depends on SOFTLOCKUP_DETECTOR
--
2.37.1 (Apple Git-137.1)


2024-02-08 13:15:31

by Bitao Hu

Subject: [PATCHv6 2/2] watchdog/softlockup: report the most frequent interrupts

When the watchdog determines that the current soft lockup is due
to an interrupt storm based on CPU utilization, reporting the
most frequent interrupts could be good enough for further
troubleshooting.

Below is an example of an interrupt storm. The call tree does not
provide useful information, but we can analyze which interrupt
caused the soft lockup by comparing the counts of interrupts.

[ 2987.488075] watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [kworker/9:1:214]
[ 2987.488607] CPU#9 Utilization every 4s during lockup:
[ 2987.488941] #1: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.489357] #2: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.489771] #3: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.490186] #4: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.490601] #5: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 2987.491034] CPU#9 Detect HardIRQ Time exceeds 50%. Most frequent HardIRQs:
[ 2987.491493] #1: 330985 irq#7(IPI)
[ 2987.491743] #2: 5000 irq#10(arch_timer)
[ 2987.492039] #3: 9 irq#91(nvme0q2)
[ 2987.492318] #4: 3 irq#118(virtio1-output.12)
...
[ 2987.492728] Call trace:
[ 2987.492729] __do_softirq+0xa8/0x364

Signed-off-by: Bitao Hu <[email protected]>
---
kernel/watchdog.c | 153 ++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 149 insertions(+), 4 deletions(-)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 380b60074f1d..e9e98ce5ff40 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -12,22 +12,25 @@

#define pr_fmt(fmt) "watchdog: " fmt

-#include <linux/mm.h>
#include <linux/cpu.h>
-#include <linux/nmi.h>
#include <linux/init.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
#include <linux/kernel_stat.h>
+#include <linux/kvm_para.h>
#include <linux/math64.h>
+#include <linux/mm.h>
#include <linux/module.h>
+#include <linux/nmi.h>
+#include <linux/stop_machine.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
+
#include <linux/sched/clock.h>
#include <linux/sched/debug.h>
#include <linux/sched/isolation.h>
-#include <linux/stop_machine.h>

#include <asm/irq_regs.h>
-#include <linux/kvm_para.h>

static DEFINE_MUTEX(watchdog_mutex);

@@ -412,13 +415,142 @@ static void print_cpustat(void)
}
}

+#define HARDIRQ_PERCENT_THRESH 50
+#define NUM_HARDIRQ_REPORT 5
+static DEFINE_PER_CPU(u32 *, hardirq_counts);
+static DEFINE_PER_CPU(int, actual_nr_irqs);
+struct irq_counts {
+ int irq;
+ u32 counts;
+};
+
+/* Tabulate the most frequent interrupts. */
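+/*
+ * A single pass keeps the array sorted in descending order: new_count
+ * swaps into the first slot whose count it beats, and every displaced
+ * entry then shifts one slot further down, with the smallest falling
+ * off the end.
+ */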
+static void tabulate_irq_count(struct irq_counts *irq_counts, int irq, u32 counts, int rank)
+{
+ int i;
+ struct irq_counts new_count = {irq, counts};
+
+ for (i = 0; i < rank; i++) {
+ if (counts > irq_counts[i].counts)
+ swap(new_count, irq_counts[i]);
+ }
+}
+
+/*
+ * If the hardirq time exceeds HARDIRQ_PERCENT_THRESH% of the sample_period,
+ * then the cause of the softlockup might be an interrupt storm. In this case, it
+ * would be useful to start interrupt counting.
+ */
+static bool need_counting_irqs(void)
+{
+ u8 util;
+ int tail = __this_cpu_read(cpustat_tail);
+
+ tail = (tail + NUM_STATS_GROUPS - 1) % NUM_STATS_GROUPS;
+ util = __this_cpu_read(cpustat_util[tail][STATS_HARDIRQ]);
+ return util > HARDIRQ_PERCENT_THRESH;
+}
+
+static void start_counting_irqs(void)
+{
+ int i;
+ int local_nr_irqs;
+ struct irq_desc *desc;
+ u32 *counts = __this_cpu_read(hardirq_counts);
+
+ if (!counts) {
+ /*
+ * nr_irqs has the potential to grow at runtime. We should read
+ * it and store locally to avoid array out-of-bounds access.
+ */
+ local_nr_irqs = READ_ONCE(nr_irqs);
+ counts = kcalloc(local_nr_irqs, sizeof(u32), GFP_ATOMIC);
+ if (!counts)
+ return;
+ for (i = 0; i < local_nr_irqs; i++) {
+ desc = irq_to_desc(i);
+ if (!desc)
+ continue;
+ counts[i] = desc->kstat_irqs ?
+ *this_cpu_ptr(desc->kstat_irqs) : 0;
+ }
+ __this_cpu_write(actual_nr_irqs, local_nr_irqs);
+ __this_cpu_write(hardirq_counts, counts);
+ }
+}
+
+static void stop_counting_irqs(void)
+{
+ kfree(__this_cpu_read(hardirq_counts));
+ __this_cpu_write(hardirq_counts, NULL);
+}
+
+static void print_irq_counts(void)
+{
+ int i;
+ struct irq_desc *desc;
+ u32 counts_diff;
+ int local_nr_irqs = __this_cpu_read(actual_nr_irqs);
+ u32 *counts = __this_cpu_read(hardirq_counts);
+ struct irq_counts irq_counts_sorted[NUM_HARDIRQ_REPORT] = {
+ {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0},
+ };
+
+ if (counts) {
+ for_each_irq_desc(i, desc) {
+ /*
+ * We need to bounds-check in case someone on a different CPU
+ * expanded nr_irqs.
+ */
+ if (desc->kstat_irqs) {
+ counts_diff = *this_cpu_ptr(desc->kstat_irqs);
+ if (i < local_nr_irqs)
+ counts_diff -= counts[i];
+ } else {
+ counts_diff = 0;
+ }
+ tabulate_irq_count(irq_counts_sorted, i, counts_diff,
+ NUM_HARDIRQ_REPORT);
+ }
+ /*
+ * We do not want the "watchdog: " prefix on every line,
+ * hence we use "printk" instead of "pr_crit".
+ */
+ printk(KERN_CRIT "CPU#%d Detect HardIRQ Time exceeds %d%%. Most frequent HardIRQs:\n",
+ smp_processor_id(), HARDIRQ_PERCENT_THRESH);
+ for (i = 0; i < NUM_HARDIRQ_REPORT; i++) {
+ if (irq_counts_sorted[i].irq == -1)
+ break;
+ desc = irq_to_desc(irq_counts_sorted[i].irq);
+ if (desc && desc->action)
+ printk(KERN_CRIT "\t#%u: %-10u\tirq#%d(%s)\n",
+ i + 1, irq_counts_sorted[i].counts,
+ irq_counts_sorted[i].irq, desc->action->name);
+ else
+ printk(KERN_CRIT "\t#%u: %-10u\tirq#%d\n",
+ i + 1, irq_counts_sorted[i].counts,
+ irq_counts_sorted[i].irq);
+ }
+ /*
+ * If the hardirq time is less than HARDIRQ_PERCENT_THRESH% in the last
+ * sample_period, then we suspect the interrupt storm might be subsiding.
+ */
+ if (!need_counting_irqs())
+ stop_counting_irqs();
+ }
+}
+
static void report_cpu_status(void)
{
print_cpustat();
+ print_irq_counts();
}
#else
static inline void update_cpustat(void) { }
static inline void report_cpu_status(void) { }
+static inline bool need_counting_irqs(void) { return false; }
+static inline void start_counting_irqs(void) { }
+static inline void stop_counting_irqs(void) { }
#endif

/*
@@ -522,6 +654,18 @@ static int is_softlockup(unsigned long touch_ts,
unsigned long now)
{
if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
+ /*
+ * If period_ts has not been updated during a sample_period, then
+ * in the subsequent few sample_periods, period_ts might also not
+ * be updated, which could indicate a potential softlockup. In
+ * this case, if we suspect the cause of the potential softlockup
+ * might be interrupt storm, then we need to count the interrupts
+ * to find which interrupt is storming.
+ */
+ if (time_after_eq(now, period_ts + get_softlockup_thresh() / 5) &&
+ need_counting_irqs())
+ start_counting_irqs();
+
/* Warn about unreasonable delays. */
if (time_after(now, period_ts + get_softlockup_thresh()))
return now - touch_ts;
@@ -544,6 +688,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
static int softlockup_fn(void *data)
{
update_touch_ts();
+ stop_counting_irqs();
complete(this_cpu_ptr(&softlockup_completion));

return 0;
--
2.37.1 (Apple Git-137.1)


2024-02-08 16:06:39

by Doug Anderson

Subject: Re: [PATCHv6 1/2] watchdog/softlockup: low-overhead detection of interrupt

Hi,

On Thu, Feb 8, 2024 at 4:54 AM Bitao Hu <[email protected]> wrote:
>
> [snip]

Thanks, this looks great now!

Reviewed-by: Douglas Anderson <[email protected]>

2024-02-08 16:06:51

by Doug Anderson

Subject: Re: [PATCHv6 2/2] watchdog/softlockup: report the most frequent interrupts

Hi,

On Thu, Feb 8, 2024 at 4:54 AM Bitao Hu <[email protected]> wrote:
>
> +static void start_counting_irqs(void)
> +{
> + int i;
> + int local_nr_irqs;
> + struct irq_desc *desc;
> + u32 *counts = __this_cpu_read(hardirq_counts);
> +
> + if (!counts) {
> + /*
> + * nr_irqs has the potential to grow at runtime. We should read
> + * it and store locally to avoid array out-of-bounds access.
> + */
> + local_nr_irqs = READ_ONCE(nr_irqs);

nit: I don't think the READ_ONCE() is actually needed above. All that
matters is that you're consistently using the same local variable
("local_nr_irqs") for allocating the array, looping, and then storing.
No matter what optimizations might be happening and what else might be
happening on other CPUs, once you put it in a local variable the
compiler _must_ keep it consistent.

That being said, I don't think it really matters, so I'm not sure it's
worth spinning your series just for that.
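
In other words, the pattern that matters is just (a sketch using the
patch's names):

	local_nr_irqs = nr_irqs;	/* latched once into a local */
	counts = kcalloc(local_nr_irqs, sizeof(u32), GFP_ATOMIC);
	...
	for (i = 0; i < local_nr_irqs; i++)	/* same latched bound */
		...
	__this_cpu_write(actual_nr_irqs, local_nr_irqs); /* same latched size */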

In any case, this patch looks good to me now. Thanks!

Reviewed-by: Douglas Anderson <[email protected]>

2024-02-09 13:41:44

by Petr Mladek

Subject: Re: [PATCHv6 1/2] watchdog/softlockup: low-overhead detection of interrupt

Hi,

I am sorry for joining this game so late. But honestly, it went
forward too quickly. A good practice is to wait a week before
sending a new version so that more people get a chance to provide
some feedback.

The only exception might be when you know exactly who could
review it because the area is not interesting to anyone else.
But this is typically not the case for kernel core code.


On Thu 2024-02-08 20:54:25, Bitao Hu wrote:
> The following softlockup is caused by an interrupt storm, but it cannot be
> identified from the call tree, because the call tree is just a snapshot
> and doesn't fully capture the behavior of the CPU during the soft lockup.
> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
> ...
> Call trace:
> __do_softirq+0xa0/0x37c
> __irq_exit_rcu+0x108/0x140
> irq_exit+0x14/0x20
> __handle_domain_irq+0x84/0xe0
> gic_handle_irq+0x80/0x108
> el0_irq_naked+0x50/0x58
>
> Therefore, I think it is necessary to report CPU utilization during the
> softlockup_thresh period (reported once every sample_period, for a total
> of 5 reports), like this:
> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
> CPU#28 Utilization every 4s during lockup:
> #1: 0% system, 0% softirq, 100% hardirq, 0% idle
> #2: 0% system, 0% softirq, 100% hardirq, 0% idle
> #3: 0% system, 0% softirq, 100% hardirq, 0% idle
> #4: 0% system, 0% softirq, 100% hardirq, 0% idle
> #5: 0% system, 0% softirq, 100% hardirq, 0% idle

I like this. IMHO, it might be really useful.

> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -333,6 +335,92 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
>
> static void __lockup_detector_cleanup(void);
>
> +#ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
> +#define NUM_STATS_GROUPS 5

It would be nice to synchronize this with the hardcoded 5 in:

static void set_sample_period(void)
{
/*
* convert watchdog_thresh from seconds to ns
* the divide by 5 is to give hrtimer several chances (two
* or three with the current relation between the soft
* and hard thresholds) to increment before the
* hardlockup detector generates a warning
*/
sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5);

For example, define and use the following in both situations:

#define NUM_SAMPLE_PERIODS 5
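
i.e. something like (a sketch, untested):

	sample_period = get_softlockup_thresh() *
			((u64)NSEC_PER_SEC / NUM_SAMPLE_PERIODS);

and

	static DEFINE_PER_CPU(u8, cpustat_util[NUM_SAMPLE_PERIODS][NUM_STATS_PER_GROUP]);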

> +enum stats_per_group {
> + STATS_SYSTEM,
> + STATS_SOFTIRQ,
> + STATS_HARDIRQ,
> + STATS_IDLE,
> + NUM_STATS_PER_GROUP,
> +};
> +
> +static const enum cpu_usage_stat tracked_stats[NUM_STATS_PER_GROUP] = {
> + CPUTIME_SYSTEM,
> + CPUTIME_SOFTIRQ,
> + CPUTIME_IRQ,
> + CPUTIME_IDLE,
> +};
> +
> +static DEFINE_PER_CPU(u16, cpustat_old[NUM_STATS_PER_GROUP]);
> +static DEFINE_PER_CPU(u8, cpustat_util[NUM_STATS_GROUPS][NUM_STATS_PER_GROUP]);
> +static DEFINE_PER_CPU(u8, cpustat_tail);
> +
> +/*
> + * We don't need nanosecond resolution. A granularity of 16ms is
> + * sufficient for our precision, allowing us to use u16 to store
> + * cpustats, which will roll over roughly every ~1000 seconds.
> + * 2^24 ~= 16 * 10^6
> + */
> +static u16 get_16bit_precision(u64 data_ns)
> +{
> + return data_ns >> 24LL; /* 2^24ns ~= 16.8ms */

I would personally use

delta_ns >> 20 /* 2^20ns ~= 1ms */

to make it easier for a human to debug. It would support
sample periods up to 65s, which might be enough.

But I do not insist on it. ">> 24" provides less granularity,
but it supports longer sample periods.
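
For reference, the arithmetic: a u16 counter spans 2^16 units, so
with ">> 24" (unit 2^24 ns ~= 16.8 ms) it rolls over at
2^40 ns ~= 1100 s, while with ">> 20" (unit 2^20 ns ~= 1.05 ms) it
rolls over at 2^36 ns ~= 68.7 s.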

> +static void print_cpustat(void)
> +{
> + int i, group;
> + u8 tail = __this_cpu_read(cpustat_tail);
> + u64 sample_period_second = sample_period;
> +
> + do_div(sample_period_second, NSEC_PER_SEC);
> + /*
> + * We do not want the "watchdog: " prefix on every line,
> + * hence we use "printk" instead of "pr_crit".
> + */
> + printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
> + smp_processor_id(), sample_period_second);
> + for (i = 0; i < NUM_STATS_GROUPS; i++) {

This starts with the 1st group in the array. Is it the oldest one?
It should take into account cpustat_tail.


> + group = (tail + i) % NUM_STATS_GROUPS;
> + printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
> + "%3u%% hardirq,\t%3u%% idle\n", i + 1,
> + __this_cpu_read(cpustat_util[group][STATS_SYSTEM]),
> + __this_cpu_read(cpustat_util[group][STATS_SOFTIRQ]),
> + __this_cpu_read(cpustat_util[group][STATS_HARDIRQ]),
> + __this_cpu_read(cpustat_util[group][STATS_IDLE]));
> + }
> +}
> +

Best Regards,
Petr

2024-02-09 14:58:14

by Petr Mladek

Subject: Re: [PATCHv6 2/2] watchdog/softlockup: report the most frequent interrupts

On Thu 2024-02-08 20:54:26, Bitao Hu wrote:
> When the watchdog determines that the current soft lockup is due
> to an interrupt storm based on CPU utilization, reporting the
> most frequent interrupts could be good enough for further
> troubleshooting.
>
> Below is an example of interrupt storm. The call tree does not
> provide useful information, but we can analyze which interrupt
> caused the soft lockup by comparing the counts of interrupts.
>
> [ 2987.488075] watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [kworker/9:1:214]
> [ 2987.488607] CPU#9 Utilization every 4s during lockup:
> [ 2987.488941] #1: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 2987.489357] #2: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 2987.489771] #3: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 2987.490186] #4: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 2987.490601] #5: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 2987.491034] CPU#9 Detect HardIRQ Time exceeds 50%. Most frequent HardIRQs:
> [ 2987.491493] #1: 330985 irq#7(IPI)
> [ 2987.491743] #2: 5000 irq#10(arch_timer)
> [ 2987.492039] #3: 9 irq#91(nvme0q2)
> [ 2987.492318] #4: 3 irq#118(virtio1-output.12)

Nit: It might look slightly better if it printed the last 5 HardIRQs ;-)
Maybe this version already does.

> --- a/kernel/watchdog.c
> +++ b/kernel/watchdog.c
> @@ -412,13 +415,142 @@ static void print_cpustat(void)
> }
> }
>
> +#define HARDIRQ_PERCENT_THRESH 50
> +#define NUM_HARDIRQ_REPORT 5

It actually creates an array of 5 IRQ entries.

> +static DEFINE_PER_CPU(u32 *, hardirq_counts);
> +static DEFINE_PER_CPU(int, actual_nr_irqs);
> +struct irq_counts {
> + int irq;
> + u32 counts;
> +};
> +
> +static void print_irq_counts(void)
> +{
> + int i;
> + struct irq_desc *desc;
> + u32 counts_diff;
> + int local_nr_irqs = __this_cpu_read(actual_nr_irqs);
> + u32 *counts = __this_cpu_read(hardirq_counts);
> + struct irq_counts irq_counts_sorted[NUM_HARDIRQ_REPORT] = {
> + {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0},
> + };
> +
> + if (counts) {
> + for_each_irq_desc(i, desc) {

I would use:

for (i = 0; i < local_nr_irqs; i++) {

It does not make sense to process IRQs where "counts_diff = 0;"

> +

> + /*
> + * We need to bounds-check in case someone on a different CPU
> + * expanded nr_irqs.
> + */
> + if (desc->kstat_irqs) {
> + counts_diff = *this_cpu_ptr(desc->kstat_irqs);
> + if (i < local_nr_irqs)
> + counts_diff -= counts[i];
> + } else {
> + counts_diff = 0;

And it would allow to remove this branch.
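
I.e. the loop could become something like (untested):

	for (i = 0; i < local_nr_irqs; i++) {
		desc = irq_to_desc(i);
		if (!desc || !desc->kstat_irqs)
			continue;
		counts_diff = *this_cpu_ptr(desc->kstat_irqs) - counts[i];
		tabulate_irq_count(irq_counts_sorted, i, counts_diff,
				   NUM_HARDIRQ_REPORT);
	}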

> + }
> + tabulate_irq_count(irq_counts_sorted, i, counts_diff,
> + NUM_HARDIRQ_REPORT);
> + }

Please, add an empty line here.

Empty lines help to read the code. For example, they help to make
clear that a top-level comment describes a particular block of code.
Or they help to see where { } blocks end.

Long blobs of code are hard to read for me. Maybe I suffer from some
level of dyslexia, but I know many more people who prefer this.

Heh, I would personally add empty lines in several other locations.

> + /*
> + * We do not want the "watchdog: " prefix on every line,
> + * hence we use "printk" instead of "pr_crit".
> + */
> + printk(KERN_CRIT "CPU#%d Detect HardIRQ Time exceeds %d%%. Most frequent HardIRQs:\n",
> + smp_processor_id(), HARDIRQ_PERCENT_THRESH);

for example here

> + for (i = 0; i < NUM_HARDIRQ_REPORT; i++) {
> + if (irq_counts_sorted[i].irq == -1)
> + break;

here

> + desc = irq_to_desc(irq_counts_sorted[i].irq);
> + if (desc && desc->action)
> + printk(KERN_CRIT "\t#%u: %-10u\tirq#%d(%s)\n",
> + i + 1, irq_counts_sorted[i].counts,
> + irq_counts_sorted[i].irq, desc->action->name);
> + else
> + printk(KERN_CRIT "\t#%u: %-10u\tirq#%d\n",
> + i + 1, irq_counts_sorted[i].counts,
> + irq_counts_sorted[i].irq);
> + }

end here ;-)

> + /*
> + * If the hardirq time is less than HARDIRQ_PERCENT_THRESH% in the last
> + * sample_period, then we suspect the interrupt storm might be subsiding.
> + */
> + if (!need_counting_irqs())
> + stop_counting_irqs();
> + }
> +}
> +
> @@ -522,6 +654,18 @@ static int is_softlockup(unsigned long touch_ts,
> unsigned long now)
> {
> if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
> + /*
> + * If period_ts has not been updated during a sample_period, then
> + * in the subsequent few sample_periods, period_ts might also not
> + * be updated, which could indicate a potential softlockup. In
> + * this case, if we suspect the cause of the potential softlockup
> + * might be interrupt storm, then we need to count the interrupts
> + * to find which interrupt is storming.
> + */
> + if (time_after_eq(now, period_ts + get_softlockup_thresh() / 5) &&

(get_softlockup_thresh() / 5) might be replaced by sample_period.

Also it looks too strict. I would allow some small delay, e.g. 1 ms.

> + need_counting_irqs())
> + start_counting_irqs();
> +
> /* Warn about unreasonable delays. */
> if (time_after(now, period_ts + get_softlockup_thresh()))
> return now - touch_ts;

Great work!

Best Regards,
Petr

2024-02-09 15:04:41

by Petr Mladek

Subject: Re: [PATCHv6 0/2] *** Detect interrupt storm in softlockup ***

On Thu 2024-02-08 20:54:24, Bitao Hu wrote:
> Hi, guys.
> I have implemented a low-overhead method for detecting interrupt
> storms in softlockups. Please review it; all comments are welcome.

I like this work.

I wonder if you might also be interested in reporting problems
when soft IRQs are offloaded to the "ksoftirqd/X" kthreads
for too long.

The kthreads are processes with normal priority. As a result,
offloading soft IRQs to kthreads might make a huge difference
on loaded systems.

I have seen several problems where a flood of softIRQs triggered
offloading, and it caused several-second delays on networking
interfaces.

Best Regards,
Petr

2024-02-10 13:33:15

by Liu Song

Subject: Re: [PATCHv6 1/2] watchdog/softlockup: low-overhead detection of interrupt

Looks good!

Reviewed-by: Liu Song <[email protected]>

On 2024/2/8 20:54, Bitao Hu wrote:
> [snip]

2024-02-10 13:33:53

by Liu Song

Subject: Re: [PATCHv6 2/2] watchdog/softlockup: report the most frequent interrupts

Looks good!

Reviewed-by: Liu Song <[email protected]>

On 2024/2/8 20:54, Bitao Hu wrote:
> [snip]

2024-02-11 15:36:07

by Bitao Hu

Subject: Re: [PATCHv6 1/2] watchdog/softlockup: low-overhead detection of interrupt

Hi,

On 2024/2/9 21:35, Petr Mladek wrote:
> Hi,
>
> I am sorry for joining this game so late. But honestly, it went
> forward too quickly. A good practice is to wait a week before
> sending a new version so that more people get a chance to provide
> some feedback.
>
> The only exception might be when you know exactly who could
> review it because the area is not interesting to anyone else.
> But this is typically not the case for kernel core code.
Thanks for the reminder; I will be mindful of the pace.
>
>
> On Thu 2024-02-08 20:54:25, Bitao Hu wrote:
>> The following softlockup is caused by an interrupt storm, but it cannot be
>> identified from the call tree, because the call tree is just a snapshot
>> and doesn't fully capture the behavior of the CPU during the soft lockup.
>> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
>> ...
>> Call trace:
>> __do_softirq+0xa0/0x37c
>> __irq_exit_rcu+0x108/0x140
>> irq_exit+0x14/0x20
>> __handle_domain_irq+0x84/0xe0
>> gic_handle_irq+0x80/0x108
>> el0_irq_naked+0x50/0x58
>>
>> Therefore, I think it is necessary to report CPU utilization during the
>> softlockup_thresh period (reported once every sample_period, for a total
>> of 5 reports), like this:
>> watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
>> CPU#28 Utilization every 4s during lockup:
>> #1: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #2: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #3: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #4: 0% system, 0% softirq, 100% hardirq, 0% idle
>> #5: 0% system, 0% softirq, 100% hardirq, 0% idle
>
> I like this. IMHO, it might be really useful.
>
>> --- a/kernel/watchdog.c
>> +++ b/kernel/watchdog.c
>> @@ -333,6 +335,92 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);
>>
>> static void __lockup_detector_cleanup(void);
>>
>> +#ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
>> +#define NUM_STATS_GROUPS 5
>
> It would be nice to synchronize this with the hardcoded 5 in:
>
> static void set_sample_period(void)
> {
> /*
> * convert watchdog_thresh from seconds to ns
> * the divide by 5 is to give hrtimer several chances (two
> * or three with the current relation between the soft
> * and hard thresholds) to increment before the
> * hardlockup detector generates a warning
> */
> sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5);
OK, I've had the same thought.
>
> For example, define and use the following in both situations:
>
> #define NUM_SAMPLE_PERIODS 5
>
>> +enum stats_per_group {
>> + STATS_SYSTEM,
>> + STATS_SOFTIRQ,
>> + STATS_HARDIRQ,
>> + STATS_IDLE,
>> + NUM_STATS_PER_GROUP,
>> +};
>> +
>> +static const enum cpu_usage_stat tracked_stats[NUM_STATS_PER_GROUP] = {
>> + CPUTIME_SYSTEM,
>> + CPUTIME_SOFTIRQ,
>> + CPUTIME_IRQ,
>> + CPUTIME_IDLE,
>> +};
>> +
>> +static DEFINE_PER_CPU(u16, cpustat_old[NUM_STATS_PER_GROUP]);
>> +static DEFINE_PER_CPU(u8, cpustat_util[NUM_STATS_GROUPS][NUM_STATS_PER_GROUP]);
>> +static DEFINE_PER_CPU(u8, cpustat_tail);
>> +
>> +/*
>> + * We don't need nanosecond resolution. A granularity of 16ms is
>> + * sufficient for our precision, allowing us to use u16 to store
>> + * cpustats, which will roll over roughly every ~1000 seconds.
>> + * 2^24 ~= 16 * 10^6
>> + */
>> +static u16 get_16bit_precision(u64 data_ns)
>> +{
>> + return data_ns >> 24LL; /* 2^24ns ~= 16.8ms */
>
> I would personally use
>
> delta_ns >> 20 /* 2^20ns ~= 1ms */
>
> to make it easier for a human to debug. It would support
> sample periods up to 65s, which might be enough.
>
> But I do not insist on it. ">> 24" provides less granularity,
> but it supports longer sample periods.
I considered using ">>20" as it provides more intuitive granularity,
but I wanted to support longer sample periods. After weighing the
options, I chose ">>24".
>
>> +static void print_cpustat(void)
>> +{
>> + int i, group;
>> + u8 tail = __this_cpu_read(cpustat_tail);
>> + u64 sample_period_second = sample_period;
>> +
>> + do_div(sample_period_second, NSEC_PER_SEC);
>> + /*
>> + * We do not want the "watchdog: " prefix on every line,
>> + * hence we use "printk" instead of "pr_crit".
>> + */
>> + printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
>> + smp_processor_id(), sample_period_second);
>> + for (i = 0; i < NUM_STATS_GROUPS; i++) {
>
> This starts with the 1st group in the array. Is it the oldest one?
> It should take into account cpustat_tail.
Yes, it starts with the oldest one. After "update_cpustat" completes,
"cpustat_tail" points to the oldest group, so printing starts from the
entry that "cpustat_tail" points to.
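For example, with NUM_STATS_GROUPS = 5: if the latest update wrote
group 1 and advanced cpustat_tail to 2, print_cpustat() walks groups
2, 3, 4, 0, 1, i.e. oldest to newest.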
>
>
>> + group = (tail + i) % NUM_STATS_GROUPS;
>> + printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
>> + "%3u%% hardirq,\t%3u%% idle\n", i + 1,
>> + __this_cpu_read(cpustat_util[group][STATS_SYSTEM]),
>> + __this_cpu_read(cpustat_util[group][STATS_SOFTIRQ]),
>> + __this_cpu_read(cpustat_util[group][STATS_HARDIRQ]),
>> + __this_cpu_read(cpustat_util[group][STATS_IDLE]));
>> + }
>> +}
>> +
>

Best Regards,
Bitao

2024-02-11 15:36:42

by Bitao Hu

Subject: Re: [PATCHv6 2/2] watchdog/softlockup: report the most frequent interrupts

Hi,

On 2024/2/9 22:39, Petr Mladek wrote:
> On Thu 2024-02-08 20:54:26, Bitao Hu wrote:
>> When the watchdog determines that the current soft lockup is due
>> to an interrupt storm based on CPU utilization, reporting the
>> most frequent interrupts could be good enough for further
>> troubleshooting.
>>
>> Below is an example of interrupt storm. The call tree does not
>> provide useful information, but we can analyze which interrupt
>> caused the soft lockup by comparing the counts of interrupts.
>>
>> [ 2987.488075] watchdog: BUG: soft lockup - CPU#9 stuck for 23s! [kworker/9:1:214]
>> [ 2987.488607] CPU#9 Utilization every 4s during lockup:
>> [ 2987.488941] #1: 0% system, 0% softirq, 100% hardirq, 0% idle
>> [ 2987.489357] #2: 0% system, 0% softirq, 100% hardirq, 0% idle
>> [ 2987.489771] #3: 0% system, 0% softirq, 100% hardirq, 0% idle
>> [ 2987.490186] #4: 0% system, 0% softirq, 100% hardirq, 0% idle
>> [ 2987.490601] #5: 0% system, 0% softirq, 100% hardirq, 0% idle
>> [ 2987.491034] CPU#9 Detect HardIRQ Time exceeds 50%. Most frequent HardIRQs:
>> [ 2987.491493] #1: 330985 irq#7(IPI)
>> [ 2987.491743] #2: 5000 irq#10(arch_timer)
>> [ 2987.492039] #3: 9 irq#91(nvme0q2)
>> [ 2987.492318] #4: 3 irq#118(virtio1-output.12)
>
> Nit: It might look slightly better if it printed the last 5 HardIRQs ;-)
> Maybe this version already does.
Yes, it can print the last 5 HardIRQs. And I ignore HardIRQs with
a count of zero, so it prints between 1 and 5 HardIRQs.
>
>> --- a/kernel/watchdog.c
>> +++ b/kernel/watchdog.c
>> @@ -412,13 +415,142 @@ static void print_cpustat(void)
>> }
>> }
>>
>> +#define HARDIRQ_PERCENT_THRESH 50
>> +#define NUM_HARDIRQ_REPORT 5
>
> It actually creates an array of 5 IRQ entries.
>
>> +static DEFINE_PER_CPU(u32 *, hardirq_counts);
>> +static DEFINE_PER_CPU(int, actual_nr_irqs);
>> +struct irq_counts {
>> + int irq;
>> + u32 counts;
>> +};
>> +
>> +static void print_irq_counts(void)
>> +{
>> + int i;
>> + struct irq_desc *desc;
>> + u32 counts_diff;
>> + int local_nr_irqs = __this_cpu_read(actual_nr_irqs);
>> + u32 *counts = __this_cpu_read(hardirq_counts);
>> + struct irq_counts irq_counts_sorted[NUM_HARDIRQ_REPORT] = {
>> + {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0},
>> + };
>> +
>> + if (counts) {
>> + for_each_irq_desc(i, desc) {
>
> I would use:
>
> for (i = 0; i < local_nr_irqs; i++) {
The number of HardIRQs can grow at runtime, and I want to count
those newly added HardIRQs as well; therefore, I use
"for_each_irq_desc" here. (An IRQ allocated after the snapshot has
no baseline in counts[], but all of its count accrued after counting
started, so using the raw count as the diff is still correct.)

> It does not make sense to process IRQs where "counts_diff = 0;"
>
>> +
>
>> + /*
>> + * We need to bounds-check in case someone on a different CPU
>> + * expanded nr_irqs.
>> + */
>> + if (desc->kstat_irqs) {
>> + counts_diff = *this_cpu_ptr(desc->kstat_irqs);
>> + if (i < local_nr_irqs)
>> + counts_diff -= counts[i];
>> + } else {
>> + counts_diff = 0;
>
> And it would allow to remove this branch.
Agree.
>
>> + }
>> + tabulate_irq_count(irq_counts_sorted, i, counts_diff,
>> + NUM_HARDIRQ_REPORT);
>> + }
>
> Please, add an empty line here.
>
> Empty lines help to read the code. For example, they help to make
> clear that a top-level comment describes a particular block of code.
> Or they help to see where { } blocks end.
>
> Long blobs of code are hard to read for me. Maybe I suffer from some
> level of dyslexia, but I know many more people who prefer this.
>
> Heh, I would personally add empty lines in several other locations.
>
>> + /*
>> + * We do not want the "watchdog: " prefix on every line,
>> + * hence we use "printk" instead of "pr_crit".
>> + */
>> + printk(KERN_CRIT "CPU#%d Detect HardIRQ Time exceeds %d%%. Most frequent HardIRQs:\n",
>> + smp_processor_id(), HARDIRQ_PERCENT_THRESH);
>
> for example here
>
>> + for (i = 0; i < NUM_HARDIRQ_REPORT; i++) {
>> + if (irq_counts_sorted[i].irq == -1)
>> + break;
>
> here
>
>> + desc = irq_to_desc(irq_counts_sorted[i].irq);
>> + if (desc && desc->action)
>> + printk(KERN_CRIT "\t#%u: %-10u\tirq#%d(%s)\n",
>> + i + 1, irq_counts_sorted[i].counts,
>> + irq_counts_sorted[i].irq, desc->action->name);
>> + else
>> + printk(KERN_CRIT "\t#%u: %-10u\tirq#%d\n",
>> + i + 1, irq_counts_sorted[i].counts,
>> + irq_counts_sorted[i].irq);
>> + }
>
> end here ;-)
>
>> + /*
>> + * If the hardirq time is less than HARDIRQ_PERCENT_THRESH% in the last
>> + * sample_period, then we suspect the interrupt storm might be subsiding.
>> + */
>> + if (!need_counting_irqs())
>> + stop_counting_irqs();
>> + }
>> +}
OK, I will add empty lines for easier readability.
>> +
>> @@ -522,6 +654,18 @@ static int is_softlockup(unsigned long touch_ts,
>> unsigned long now)
>> {
>> if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
>> + /*
>> + * If period_ts has not been updated during a sample_period, then
>> + * in the subsequent few sample_periods, period_ts might also not
>> + * be updated, which could indicate a potential softlockup. In
>> + * this case, if we suspect the cause of the potential softlockup
>> + * might be interrupt storm, then we need to count the interrupts
>> + * to find which interrupt is storming.
>> + */
>> + if (time_after_eq(now, period_ts + get_softlockup_thresh() / 5) &&
>
> (get_softlockup_thresh() / 5) might be replaced by sample_period.
>
The "sample_period" is measured in nanoseconds and is represented
by a "u64" type. However, the "time_after_eq" here expects seconds
as a "u32" type, hence I refrained from using "sample_period" in
this instance.

> Also it looks too strict. I would allow some small delay, e.g. 1 ms.
This comparison is at second-level precision, and "now" is obtained by
"running_clock() >> 30LL", so it is not too strict here.
>
>> + need_counting_irqs())
>> + start_counting_irqs();
>> +
>> /* Warn about unreasonable delays. */
>> if (time_after(now, period_ts + get_softlockup_thresh()))
>> return now - touch_ts;
>
> Great work!
Thanks.

Best Regards,
Bitao

2024-02-11 15:37:22

by Bitao Hu

Subject: Re: [PATCHv6 0/2] *** Detect interrupt storm in softlockup ***



On 2024/2/9 22:48, Petr Mladek wrote:
> On Thu 2024-02-08 20:54:24, Bitao Hu wrote:
>> Hi, guys.
>> I have implemented a low-overhead method for detecting interrupt
>> storms in softlockups. Please review it; all comments are welcome.
>
> I like this work.
>
> I wonder if you might also be interested in reporting problems
> when soft IRQs are offloaded to the "ksoftirqd/X" kthreads
> for too long.
>
> The kthreads are processes with normal priority. As a result,
> offloading soft IRQs to kthreads might make a huge difference
> on loaded systems.
>
> I have seen several problems where a flood of softIRQs triggered
> offloading, and it caused several-second delays on networking
> interfaces.
>
This is an interesting issue! I had considered the matter of softirqs
while working on this, but since there was no actual issue at hand,
I didn't analyze it. Your mention of this problem has opened my eyes.

Best Regards,
Bitao

2024-02-11 15:41:44

by Bitao Hu

Subject: Re: [PATCHv6 2/2] watchdog/softlockup: report the most frequent interrupts



On 2024/2/9 00:03, Doug Anderson wrote:
> Hi,
>
> On Thu, Feb 8, 2024 at 4:54 AM Bitao Hu <[email protected]> wrote:
>>
>> +static void start_counting_irqs(void)
>> +{
>> + int i;
>> + int local_nr_irqs;
>> + struct irq_desc *desc;
>> + u32 *counts = __this_cpu_read(hardirq_counts);
>> +
>> + if (!counts) {
>> + /*
>> + * nr_irqs has the potential to grow at runtime. We should read
>> + * it and store locally to avoid array out-of-bounds access.
>> + */
>> + local_nr_irqs = READ_ONCE(nr_irqs);
>
> nit: I don't think the READ_ONCE() is actually needed above. All that
> matters is that you're consistently using the same local variable
> ("local_nr_irqs") for allocating the array, looping, and then storing.
> No matter what optimizations might be happening and what else might be
> happening on other CPUs, once you put it in a local variable the
> compiler _must_ keep it consistent.
Oh, yes, READ_ONCE() is not necessary here.
>
> That being said, I don't think it really matters, so I'm not sure it's
> worth spinning your series just for that.
>
> In any case, this patch looks good to me now. Thanks!
>
> Reviewed-by: Douglas Anderson <[email protected]>

2024-02-11 23:47:05

by Doug Anderson

Subject: Re: [PATCHv6 1/2] watchdog/softlockup: low-overhead detection of interrupt

Hi,

On Fri, Feb 9, 2024 at 5:35 AM Petr Mladek <[email protected]> wrote:
>
> Hi,
>
> I am sorry for joining this game so late. But honestly, it went
> forward too quickly. A good practice is to wait a week before
> sending a new version so that more people get a chance to provide
> some feedback.
>
> The only exception might be when you know exactly who could
> review it because the area is not interesting to anyone else.
> But this is typically not the case for kernel core code.

Just for the record, I am not personally a fan of the advice that you
need to unconditionally wait a week between spins.

FWIW, I _am_ totally sold on the idea of waiting a while if there is
still ongoing discussion about how to move forward. You don't want to
fragment the conversation with some replies against the old version
and some against the new. However, in this case there was no ongoing
discussion and I don't see any particular harm that was done with
Bitao spinning as often as he did. I actually find it quite nice not
to need to wait a week (or more) between versions because it means
that patches are still fresh in my mind when I review the next
version.

Is your concern that some of my advice to Bitao took the series in the
wrong direction and you wished you could have put a stop to it sooner?
...or is your concern that Andrew has already landed the current
patches in his "unstable" tree? ...or is there some other problem that
was caused by Bitao's quick spins of this series?

In any case, I'm happy that you've found time to jump in and review
the code! My current understanding of Andrew's process is that since
things are only in his "unstable" branch that Bitao can still send new
versions of the series and Andrew can update the patches.

-Doug