2024-02-28 07:22:41

by Bitao Hu

Subject: [PATCHv11 0/4] *** Detect interrupt storm in softlockup ***

Hi, guys.
I have implemented a low-overhead method for detecting interrupt
storms in softlockups. Please review it; all comments are welcome.

Changes from v10 to v11:

- Only patch #2 and patch #3 have been changed.

- Add comments to explain each field of 'struct irqstat' in patch #2.

- Split the inner summation logic out of kstat_irqs() and encapsulate
it into kstat_irqs_desc() in patch #3.

- Adopt Thomas's change log for patch #3.

- Add the 'Reviewed-by' tag of Liu Song.

Changes from v9 to v10:

- The two patches related to 'watchdog/softlockup' remain unchanged.

- The majority of the work related to 'genirq' was contributed by
Thomas, as indicated by the 'Originally-by' tag. I'd like to
express my gratitude for Thomas's contributions and guidance here.

- Adopt Thomas's change log for the snapshot mechanism for interrupt
statistics.

- Split unrelated change in patch #2 into a separate patch #3.

Changes from v8 to v9:

- Patch #1 remains unchanged.

- From Thomas Gleixner, split patch #2 into two patches. Interrupt
infrastructure first and then the actual usage site in the
watchdog code.

Changes from v7 to v8:

- From Thomas Gleixner, implement statistics within the interrupt
core code and provide sensible interfaces for the watchdog code.

- Patch #1 remains unchanged. Patch #2 has significant changes
based on Thomas's suggestions, which is why I have removed
Liu Song's and Douglas's Reviewed-by tags from patch #2. Please
review it again; all comments are welcome.

Changes from v6 to v7:

- Remove "READ_ONCE" in "start_counting_irqs"

- Replace the hard-coded 5 with "NUM_SAMPLE_PERIODS" macro in
"set_sample_period".

- Add empty lines to help with reading the code.

- Remove the branch that processes IRQs where "counts_diff = 0".

- Add the Reviewed-by of Liu Song and Douglas.

Changes from v5 to v6:

- Use "./scripts/checkpatch.pl --strict" to get a few extra
style nits and fix them.

- Squash patch #3 into patch #1, and wrap the help text to
80 columns.

- Sort existing headers alphabetically in watchdog.c

- Drop "softlockup_hardirq_cpus", just read "hardirq_counts"
and see if it's non-NULL.

- Store "nr_irqs" in a local variable.

- Simplify the calculation of "cpu_diff".

Changes from v4 to v5:

- Rearrange variable placement to make the code look neater.

Changes from v3 to v4:

- Rename some variables and functions to make the code logic
more readable.

- Move the code to avoid forward declarations.

- Just swap rather than using a double loop in tabulate_irq_count.

- Since nr_irqs can grow at runtime, add bounds-check logic.

- Add SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob.

Changes from v2 to v3:

- From Liu Song: use an enum instead of macros for cpu_stats, shorten
the name 'idx_to_stat' to 'stats', add 'get_16bit_precision' instead
of using right-shift operations directly, and use 'struct irq_counts'.

- From the kernel test robot: use '__this_cpu_read' and '__this_cpu_write'
instead of accessing the per-cpu array directly, in order to avoid
this warning:
'sparse: incorrect type in initializer (different modifiers)'

Changes from v1 to v2:

- From Douglas, optimize the memory usage of the cpustat arrays. With the
maximum number of CPUs (8192), the footprint is now:
2 * 8192 * 4 + 1 * 8192 * 5 * 4 + 1 * 8192 = 237,568 bytes.
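(Mapping the terms onto the per-CPU data in the current patch, as a
rough check: 2 bytes (u16) * 4 stats * 8192 CPUs for the old-count
array, 1 byte * 5 sample periods * 4 stats * 8192 CPUs for the
utilization ring, and 1 byte * 8192 CPUs for the tail index.)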

- From Liu Song, refactor the code format and add necessary comments.

- From Douglas, use interrupt counts instead of interrupt time to
determine the cause of softlockup.

- Remove the cmdline parameter added in PATCHv1.

Bitao Hu (4):
watchdog/softlockup: low-overhead detection of interrupt storm
genirq: Provide a snapshot mechanism for interrupt statistics
genirq: Avoid summation loops for /proc/interrupts
watchdog/softlockup: report the most frequent interrupts

arch/mips/dec/setup.c | 2 +-
arch/parisc/kernel/smp.c | 2 +-
arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
include/linux/irqdesc.h | 14 +-
include/linux/kernel_stat.h | 3 +
kernel/irq/internals.h | 4 +-
kernel/irq/irqdesc.c | 50 +++++--
kernel/irq/proc.c | 9 +-
kernel/watchdog.c | 213 ++++++++++++++++++++++++++-
lib/Kconfig.debug | 13 ++
scripts/gdb/linux/interrupts.py | 6 +-
11 files changed, 286 insertions(+), 32 deletions(-)

--
2.37.1 (Apple Git-137.1)



2024-02-28 07:22:56

by Bitao Hu

Subject: [PATCHv11 1/4] watchdog/softlockup: low-overhead detection of interrupt storm

The following softlockup is caused by an interrupt storm, but it cannot be
identified from the call tree, because the call tree is just a snapshot
and doesn't fully capture the behavior of the CPU during the soft lockup.
watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
...
Call trace:
__do_softirq+0xa0/0x37c
__irq_exit_rcu+0x108/0x140
irq_exit+0x14/0x20
__handle_domain_irq+0x84/0xe0
gic_handle_irq+0x80/0x108
el0_irq_naked+0x50/0x58

Therefore, I think it is necessary to report CPU utilization during the
softlockup_thresh period (reported once every sample_period, for a total
of 5 reports), like this:
watchdog: BUG: soft lockup - CPU#28 stuck for 23s! [fio:83921]
CPU#28 Utilization every 4s during lockup:
#1: 0% system, 0% softirq, 100% hardirq, 0% idle
#2: 0% system, 0% softirq, 100% hardirq, 0% idle
#3: 0% system, 0% softirq, 100% hardirq, 0% idle
#4: 0% system, 0% softirq, 100% hardirq, 0% idle
#5: 0% system, 0% softirq, 100% hardirq, 0% idle
...

This would be helpful in determining whether an interrupt storm has
occurred or in identifying the cause of the softlockup. The criteria for
determination are as follows:
a. If the hardirq utilization is high, an interrupt storm should be
considered, and the root cause cannot be determined from the call tree.
b. If the softirq utilization is high, we can analyze the call
tree, but it may not reflect the root cause.
c. If the system utilization is high, we can analyze the root
cause from the call tree.

The mechanism requires a considerable amount of global storage space
when configured for the maximum number of CPUs. Therefore, add a
SOFTLOCKUP_DETECTOR_INTR_STORM Kconfig knob that defaults to "y"
if the maximum number of CPUs is <= 128.

Signed-off-by: Bitao Hu <[email protected]>
Reviewed-by: Douglas Anderson <[email protected]>
Reviewed-by: Liu Song <[email protected]>
---
kernel/watchdog.c | 98 ++++++++++++++++++++++++++++++++++++++++++++++-
lib/Kconfig.debug | 13 +++++++
2 files changed, 110 insertions(+), 1 deletion(-)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 81a8862295d6..69e72d7e461d 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -16,6 +16,8 @@
#include <linux/cpu.h>
#include <linux/nmi.h>
#include <linux/init.h>
+#include <linux/kernel_stat.h>
+#include <linux/math64.h>
#include <linux/module.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
@@ -35,6 +37,8 @@ static DEFINE_MUTEX(watchdog_mutex);
# define WATCHDOG_HARDLOCKUP_DEFAULT 0
#endif

+#define NUM_SAMPLE_PERIODS 5
+
unsigned long __read_mostly watchdog_enabled;
int __read_mostly watchdog_user_enabled = 1;
static int __read_mostly watchdog_hardlockup_user_enabled = WATCHDOG_HARDLOCKUP_DEFAULT;
@@ -333,6 +337,95 @@ __setup("watchdog_thresh=", watchdog_thresh_setup);

static void __lockup_detector_cleanup(void);

+#ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
+enum stats_per_group {
+ STATS_SYSTEM,
+ STATS_SOFTIRQ,
+ STATS_HARDIRQ,
+ STATS_IDLE,
+ NUM_STATS_PER_GROUP,
+};
+
+static const enum cpu_usage_stat tracked_stats[NUM_STATS_PER_GROUP] = {
+ CPUTIME_SYSTEM,
+ CPUTIME_SOFTIRQ,
+ CPUTIME_IRQ,
+ CPUTIME_IDLE,
+};
+
+static DEFINE_PER_CPU(u16, cpustat_old[NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_util[NUM_SAMPLE_PERIODS][NUM_STATS_PER_GROUP]);
+static DEFINE_PER_CPU(u8, cpustat_tail);
+
+/*
+ * We don't need nanosecond resolution. A granularity of 16ms is
+ * sufficient for our precision, allowing us to use u16 to store
+ * cpustats, which will roll over roughly every ~1000 seconds.
+ * 2^24 ~= 16 * 10^6
+ */
+static u16 get_16bit_precision(u64 data_ns)
+{
+ return data_ns >> 24LL; /* 2^24ns ~= 16.8ms */
+}
+
+static void update_cpustat(void)
+{
+ int i;
+ u8 util;
+ u16 old_stat, new_stat;
+ struct kernel_cpustat kcpustat;
+ u64 *cpustat = kcpustat.cpustat;
+ u8 tail = __this_cpu_read(cpustat_tail);
+ u16 sample_period_16 = get_16bit_precision(sample_period);
+
+ kcpustat_cpu_fetch(&kcpustat, smp_processor_id());
+
+ for (i = 0; i < NUM_STATS_PER_GROUP; i++) {
+ old_stat = __this_cpu_read(cpustat_old[i]);
+ new_stat = get_16bit_precision(cpustat[tracked_stats[i]]);
+ util = DIV_ROUND_UP(100 * (new_stat - old_stat), sample_period_16);
+ __this_cpu_write(cpustat_util[tail][i], util);
+ __this_cpu_write(cpustat_old[i], new_stat);
+ }
+
+ __this_cpu_write(cpustat_tail, (tail + 1) % NUM_SAMPLE_PERIODS);
+}
+
+static void print_cpustat(void)
+{
+ int i, group;
+ u8 tail = __this_cpu_read(cpustat_tail);
+ u64 sample_period_second = sample_period;
+
+ do_div(sample_period_second, NSEC_PER_SEC);
+
+ /*
+ * We do not want the "watchdog: " prefix on every line,
+ * hence we use "printk" instead of "pr_crit".
+ */
+ printk(KERN_CRIT "CPU#%d Utilization every %llus during lockup:\n",
+ smp_processor_id(), sample_period_second);
+
+ for (i = 0; i < NUM_SAMPLE_PERIODS; i++) {
+ group = (tail + i) % NUM_SAMPLE_PERIODS;
+ printk(KERN_CRIT "\t#%d: %3u%% system,\t%3u%% softirq,\t"
+ "%3u%% hardirq,\t%3u%% idle\n", i + 1,
+ __this_cpu_read(cpustat_util[group][STATS_SYSTEM]),
+ __this_cpu_read(cpustat_util[group][STATS_SOFTIRQ]),
+ __this_cpu_read(cpustat_util[group][STATS_HARDIRQ]),
+ __this_cpu_read(cpustat_util[group][STATS_IDLE]));
+ }
+}
+
+static void report_cpu_status(void)
+{
+ print_cpustat();
+}
+#else
+static inline void update_cpustat(void) { }
+static inline void report_cpu_status(void) { }
+#endif
+
/*
* Hard-lockup warnings should be triggered after just a few seconds. Soft-
* lockups can have false positives under extreme conditions. So we generally
@@ -364,7 +457,7 @@ static void set_sample_period(void)
* and hard thresholds) to increment before the
* hardlockup detector generates a warning
*/
- sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / 5);
+ sample_period = get_softlockup_thresh() * ((u64)NSEC_PER_SEC / NUM_SAMPLE_PERIODS);
watchdog_update_hrtimer_threshold(sample_period);
}

@@ -504,6 +597,8 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
*/
period_ts = READ_ONCE(*this_cpu_ptr(&watchdog_report_ts));

+ update_cpustat();
+
/* Reset the interval when touched by known problematic code. */
if (period_ts == SOFTLOCKUP_DELAY_REPORT) {
if (unlikely(__this_cpu_read(softlockup_touch_sync))) {
@@ -539,6 +634,7 @@ static enum hrtimer_restart watchdog_timer_fn(struct hrtimer *hrtimer)
pr_emerg("BUG: soft lockup - CPU#%d stuck for %us! [%s:%d]\n",
smp_processor_id(), duration,
current->comm, task_pid_nr(current));
+ report_cpu_status();
print_modules();
print_irqtrace_events(current);
if (regs)
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 975a07f9f1cc..49f652674bd8 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1029,6 +1029,19 @@ config SOFTLOCKUP_DETECTOR
chance to run. The current stack trace is displayed upon
detection and the system will stay locked up.

+config SOFTLOCKUP_DETECTOR_INTR_STORM
+ bool "Detect Interrupt Storm in Soft Lockups"
+ depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
+ default y if NR_CPUS <= 128
+ help
+ Say Y here to enable the kernel to detect interrupt storm
+ during "soft lockups".
+
+ "soft lockups" can be caused by a variety of reasons. If one is
+ caused by an interrupt storm, then the storming interrupts will not
+ be on the callstack. To detect this case, it is necessary to report
+ the CPU stats and the interrupt counts during the "soft lockups".
+
config BOOTPARAM_SOFTLOCKUP_PANIC
bool "Panic (Reboot) On Soft Lockups"
depends on SOFTLOCKUP_DETECTOR
--
2.37.1 (Apple Git-137.1)


2024-02-28 07:23:08

by Bitao Hu

Subject: [PATCHv11 3/4] genirq: Avoid summation loops for /proc/interrupts

show_interrupts() unconditionally accumulates the per CPU interrupt
statistics to determine whether an interrupt was ever raised.

This can be avoided for all interrupts which are not strictly per CPU
and not of type NMI because those interrupts already provide an
accumulated counter. The required logic is already implemented in
kstat_irqs().

Split the inner access logic out of kstat_irqs() and use it for
kstat_irqs() and show_interrupts() to avoid the accumulation loop
when possible.

Originally-by: Thomas Gleixner <[email protected]>
Signed-off-by: Bitao Hu <[email protected]>
Reviewed-by: Liu Song <[email protected]>
---
kernel/irq/internals.h | 2 ++
kernel/irq/irqdesc.c | 16 +++++++++++-----
kernel/irq/proc.c | 6 ++----
3 files changed, 15 insertions(+), 9 deletions(-)

diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 1d92532c2aae..6c43ef3e7308 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -98,6 +98,8 @@ extern void mask_irq(struct irq_desc *desc);
extern void unmask_irq(struct irq_desc *desc);
extern void unmask_threaded_irq(struct irq_desc *desc);

+extern unsigned int kstat_irqs_desc(struct irq_desc *desc, const struct cpumask *cpumask);
+
#ifdef CONFIG_SPARSE_IRQ
static inline void irq_mark_irq(unsigned int irq) { }
#else
diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 9cd17080b2d8..65a7f2dcd17b 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -960,24 +960,30 @@ static bool irq_is_nmi(struct irq_desc *desc)
return desc->istate & IRQS_NMI;
}

-static unsigned int kstat_irqs(unsigned int irq)
+unsigned int kstat_irqs_desc(struct irq_desc *desc, const struct cpumask *cpumask)
{
- struct irq_desc *desc = irq_to_desc(irq);
unsigned int sum = 0;
int cpu;

- if (!desc || !desc->kstat_irqs)
- return 0;
if (!irq_settings_is_per_cpu_devid(desc) &&
!irq_settings_is_per_cpu(desc) &&
!irq_is_nmi(desc))
return data_race(desc->tot_count);

- for_each_possible_cpu(cpu)
+ for_each_cpu(cpu, cpumask)
sum += data_race(per_cpu(desc->kstat_irqs->cnt, cpu));
return sum;
}

+static unsigned int kstat_irqs(unsigned int irq)
+{
+ struct irq_desc *desc = irq_to_desc(irq);
+
+ if (!desc || !desc->kstat_irqs)
+ return 0;
+ return kstat_irqs_desc(desc, cpu_possible_mask);
+}
+
void kstat_snapshot_irqs(void)
{
struct irq_desc *desc;
diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
index 6954e0a02047..5c320c3f10a7 100644
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -488,10 +488,8 @@ int show_interrupts(struct seq_file *p, void *v)
if (!desc || irq_settings_is_hidden(desc))
goto outsparse;

- if (desc->kstat_irqs) {
- for_each_online_cpu(j)
- any_count |= data_race(per_cpu(desc->kstat_irqs->cnt, j));
- }
+ if (desc->kstat_irqs)
+ any_count = kstat_irqs_desc(desc, cpu_online_mask);

if ((!desc->action || irq_desc_is_chained(desc)) && !any_count)
goto outsparse;
--
2.37.1 (Apple Git-137.1)


2024-02-28 07:23:29

by Bitao Hu

Subject: [PATCHv11 4/4] watchdog/softlockup: report the most frequent interrupts

When the watchdog determines that the current soft lockup is due
to an interrupt storm based on CPU utilization, reporting the
most frequent interrupts could be good enough for further
troubleshooting.

Below is an example of an interrupt storm. The call tree does not
provide useful information, but we can analyze which interrupt
caused the soft lockup by comparing the interrupt counts.

[ 638.870231] watchdog: BUG: soft lockup - CPU#9 stuck for 26s! [swapper/9:0]
[ 638.870825] CPU#9 Utilization every 4s during lockup:
[ 638.871194] #1: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 638.871652] #2: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 638.872107] #3: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 638.872563] #4: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 638.873018] #5: 0% system, 0% softirq, 100% hardirq, 0% idle
[ 638.873494] CPU#9 Detect HardIRQ Time exceeds 50%. Most frequent HardIRQs:
[ 638.873994] #1: 330945 irq#7
[ 638.874236] #2: 31 irq#82
[ 638.874493] #3: 10 irq#10
[ 638.874744] #4: 2 irq#89
[ 638.874992] #5: 1 irq#102
...
[ 638.875313] Call trace:
[ 638.875315] __do_softirq+0xa8/0x364

Signed-off-by: Bitao Hu <[email protected]>
Reviewed-by: Liu Song <[email protected]>
---
kernel/watchdog.c | 115 ++++++++++++++++++++++++++++++++++++++++++++--
1 file changed, 111 insertions(+), 4 deletions(-)

diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 69e72d7e461d..c9d49ae8d045 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -12,22 +12,25 @@

#define pr_fmt(fmt) "watchdog: " fmt

-#include <linux/mm.h>
#include <linux/cpu.h>
-#include <linux/nmi.h>
#include <linux/init.h>
+#include <linux/irq.h>
+#include <linux/irqdesc.h>
#include <linux/kernel_stat.h>
+#include <linux/kvm_para.h>
#include <linux/math64.h>
+#include <linux/mm.h>
#include <linux/module.h>
+#include <linux/nmi.h>
+#include <linux/stop_machine.h>
#include <linux/sysctl.h>
#include <linux/tick.h>
+
#include <linux/sched/clock.h>
#include <linux/sched/debug.h>
#include <linux/sched/isolation.h>
-#include <linux/stop_machine.h>

#include <asm/irq_regs.h>
-#include <linux/kvm_para.h>

static DEFINE_MUTEX(watchdog_mutex);

@@ -417,13 +420,104 @@ static void print_cpustat(void)
}
}

+#define HARDIRQ_PERCENT_THRESH 50
+#define NUM_HARDIRQ_REPORT 5
+struct irq_counts {
+ int irq;
+ u32 counts;
+};
+
+static DEFINE_PER_CPU(bool, snapshot_taken);
+
+/* Tabulate the most frequent interrupts. */
+static void tabulate_irq_count(struct irq_counts *irq_counts, int irq, u32 counts, int rank)
+{
+ int i;
+ struct irq_counts new_count = {irq, counts};
+
+ for (i = 0; i < rank; i++) {
+ if (counts > irq_counts[i].counts)
+ swap(new_count, irq_counts[i]);
+ }
+}
+
+/*
+ * If the hardirq time exceeds HARDIRQ_PERCENT_THRESH% of the sample_period,
+ * then the cause of softlockup might be interrupt storm. In this case, it
+ * would be useful to start interrupt counting.
+ */
+static bool need_counting_irqs(void)
+{
+ u8 util;
+ int tail = __this_cpu_read(cpustat_tail);
+
+ tail = (tail + NUM_HARDIRQ_REPORT - 1) % NUM_HARDIRQ_REPORT;
+ util = __this_cpu_read(cpustat_util[tail][STATS_HARDIRQ]);
+ return util > HARDIRQ_PERCENT_THRESH;
+}
+
+static void start_counting_irqs(void)
+{
+ if (!__this_cpu_read(snapshot_taken)) {
+ kstat_snapshot_irqs();
+ __this_cpu_write(snapshot_taken, true);
+ }
+}
+
+static void stop_counting_irqs(void)
+{
+ __this_cpu_write(snapshot_taken, false);
+}
+
+static void print_irq_counts(void)
+{
+ unsigned int i, count;
+ struct irq_counts irq_counts_sorted[NUM_HARDIRQ_REPORT] = {
+ {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0}, {-1, 0}
+ };
+
+ if (__this_cpu_read(snapshot_taken)) {
+ for_each_active_irq(i) {
+ count = kstat_get_irq_since_snapshot(i);
+ tabulate_irq_count(irq_counts_sorted, i, count, NUM_HARDIRQ_REPORT);
+ }
+
+ /*
+ * We do not want the "watchdog: " prefix on every line,
+ * hence we use "printk" instead of "pr_crit".
+ */
+ printk(KERN_CRIT "CPU#%d Detect HardIRQ Time exceeds %d%%. Most frequent HardIRQs:\n",
+ smp_processor_id(), HARDIRQ_PERCENT_THRESH);
+
+ for (i = 0; i < NUM_HARDIRQ_REPORT; i++) {
+ if (irq_counts_sorted[i].irq == -1)
+ break;
+
+ printk(KERN_CRIT "\t#%u: %-10u\tirq#%d\n",
+ i + 1, irq_counts_sorted[i].counts,
+ irq_counts_sorted[i].irq);
+ }
+
+ /*
+ * If the hardirq time is less than HARDIRQ_PERCENT_THRESH% in the last
+ * sample_period, then we suspect the interrupt storm might be subsiding.
+ */
+ if (!need_counting_irqs())
+ stop_counting_irqs();
+ }
+}
+
static void report_cpu_status(void)
{
print_cpustat();
+ print_irq_counts();
}
#else
static inline void update_cpustat(void) { }
static inline void report_cpu_status(void) { }
+static inline bool need_counting_irqs(void) { return false; }
+static inline void start_counting_irqs(void) { }
+static inline void stop_counting_irqs(void) { }
#endif

/*
@@ -527,6 +621,18 @@ static int is_softlockup(unsigned long touch_ts,
unsigned long now)
{
if ((watchdog_enabled & WATCHDOG_SOFTOCKUP_ENABLED) && watchdog_thresh) {
+ /*
+ * If period_ts has not been updated during a sample_period, then
+ * in the subsequent few sample_periods, period_ts might also not
+ * be updated, which could indicate a potential softlockup. In
+ * this case, if we suspect the cause of the potential softlockup
+ * might be interrupt storm, then we need to count the interrupts
+ * to find which interrupt is storming.
+ */
+ if (time_after_eq(now, period_ts + get_softlockup_thresh() / NUM_SAMPLE_PERIODS) &&
+ need_counting_irqs())
+ start_counting_irqs();
+
/* Warn about unreasonable delays. */
if (time_after(now, period_ts + get_softlockup_thresh()))
return now - touch_ts;
@@ -549,6 +655,7 @@ static DEFINE_PER_CPU(struct cpu_stop_work, softlockup_stop_work);
static int softlockup_fn(void *data)
{
update_touch_ts();
+ stop_counting_irqs();
complete(this_cpu_ptr(&softlockup_completion));

return 0;
--
2.37.1 (Apple Git-137.1)


2024-02-28 07:25:33

by Bitao Hu

Subject: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

The soft lockup detector lacks a mechanism to identify interrupt storms
as root cause of a lockup. To enable this the detector needs a
mechanism to snapshot the interrupt count statistics on a CPU when the
detector observes a potential lockup scenario and compare that against
the interrupt count when it warns about the lockup later on. The number
of interrupts in that period gives a hint whether the lockup might be
caused by an interrupt storm.

Instead of having extra storage in the lockup detector and accessing
the internals of the interrupt descriptor directly, convert the per CPU
irq_desc::kstat_irq member to a data structure which contains the
counter plus a snapshot member and provide interfaces to take a
snapshot of all interrupts on the current CPU and to retrieve the delta
of a specific interrupt later on.
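
As a rough usage sketch (for illustration only; the actual call sites
are added by the softlockup detector patches in this series), the two
interfaces are meant to be used on the potentially locked-up CPU like
this:

	/* When a potential lockup is first suspected: */
	kstat_snapshot_irqs();

	/* Later, when the lockup is actually reported on that same CPU: */
	unsigned int irq;

	for_each_active_irq(irq) {
		unsigned int delta = kstat_get_irq_since_snapshot(irq);

		/* rank the IRQs with the largest deltas and print them */
	}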

Originally-by: Thomas Gleixner <[email protected]>
Signed-off-by: Bitao Hu <[email protected]>
Reviewed-by: Liu Song <[email protected]>
---
arch/mips/dec/setup.c | 2 +-
arch/parisc/kernel/smp.c | 2 +-
arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
include/linux/irqdesc.h | 14 ++++++++++--
include/linux/kernel_stat.h | 3 +++
kernel/irq/internals.h | 2 +-
kernel/irq/irqdesc.c | 34 ++++++++++++++++++++++------
kernel/irq/proc.c | 5 ++--
scripts/gdb/linux/interrupts.py | 6 ++---
9 files changed, 51 insertions(+), 19 deletions(-)

diff --git a/arch/mips/dec/setup.c b/arch/mips/dec/setup.c
index 6c3704f51d0d..87f0a1436bf9 100644
--- a/arch/mips/dec/setup.c
+++ b/arch/mips/dec/setup.c
@@ -756,7 +756,7 @@ void __init arch_init_irq(void)
NULL))
pr_err("Failed to register fpu interrupt\n");
desc_fpu = irq_to_desc(irq_fpu);
- fpu_kstat_irq = this_cpu_ptr(desc_fpu->kstat_irqs);
+ fpu_kstat_irq = this_cpu_ptr(&desc_fpu->kstat_irqs->cnt);
}
if (dec_interrupt[DEC_IRQ_CASCADE] >= 0) {
if (request_irq(dec_interrupt[DEC_IRQ_CASCADE], no_action,
diff --git a/arch/parisc/kernel/smp.c b/arch/parisc/kernel/smp.c
index 444154271f23..800eb64e91ad 100644
--- a/arch/parisc/kernel/smp.c
+++ b/arch/parisc/kernel/smp.c
@@ -344,7 +344,7 @@ static int smp_boot_one_cpu(int cpuid, struct task_struct *idle)
struct irq_desc *desc = irq_to_desc(i);

if (desc && desc->kstat_irqs)
- *per_cpu_ptr(desc->kstat_irqs, cpuid) = 0;
+ *per_cpu_ptr(desc->kstat_irqs, cpuid) = (struct irqstat) { };
}
#endif

diff --git a/arch/powerpc/kvm/book3s_hv_rm_xics.c b/arch/powerpc/kvm/book3s_hv_rm_xics.c
index e42984878503..f2636414d82a 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_xics.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_xics.c
@@ -837,7 +837,7 @@ static inline void this_cpu_inc_rm(unsigned int __percpu *addr)
*/
static void kvmppc_rm_handle_irq_desc(struct irq_desc *desc)
{
- this_cpu_inc_rm(desc->kstat_irqs);
+ this_cpu_inc_rm(&desc->kstat_irqs->cnt);
__this_cpu_inc(kstat.irqs_sum);
}

diff --git a/include/linux/irqdesc.h b/include/linux/irqdesc.h
index d9451d456a73..95d8def84471 100644
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -17,6 +17,16 @@ struct irq_desc;
struct irq_domain;
struct pt_regs;

+/**
+ * struct irqstat - interrupt statistics
+ * @cnt: real-time interrupt count
+ * @ref: snapshot of interrupt count
+ */
+struct irqstat {
+ unsigned int cnt;
+ unsigned int ref;
+};
+
/**
* struct irq_desc - interrupt descriptor
* @irq_common_data: per irq and chip data passed down to chip functions
@@ -55,7 +65,7 @@ struct pt_regs;
struct irq_desc {
struct irq_common_data irq_common_data;
struct irq_data irq_data;
- unsigned int __percpu *kstat_irqs;
+ struct irqstat __percpu *kstat_irqs;
irq_flow_handler_t handle_irq;
struct irqaction *action; /* IRQ action list */
unsigned int status_use_accessors;
@@ -119,7 +129,7 @@ extern struct irq_desc irq_desc[NR_IRQS];
static inline unsigned int irq_desc_kstat_cpu(struct irq_desc *desc,
unsigned int cpu)
{
- return desc->kstat_irqs ? *per_cpu_ptr(desc->kstat_irqs, cpu) : 0;
+ return desc->kstat_irqs ? per_cpu(desc->kstat_irqs->cnt, cpu) : 0;
}

static inline struct irq_desc *irq_data_to_desc(struct irq_data *data)
diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
index 9935f7ecbfb9..98b3043ea5e6 100644
--- a/include/linux/kernel_stat.h
+++ b/include/linux/kernel_stat.h
@@ -79,6 +79,9 @@ static inline unsigned int kstat_cpu_softirqs_sum(int cpu)
return sum;
}

+extern void kstat_snapshot_irqs(void);
+extern unsigned int kstat_get_irq_since_snapshot(unsigned int irq);
+
/*
* Number of interrupts per specific IRQ source, since bootup
*/
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index bcc7f21db9ee..1d92532c2aae 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -258,7 +258,7 @@ static inline void irq_state_set_masked(struct irq_desc *desc)

static inline void __kstat_incr_irqs_this_cpu(struct irq_desc *desc)
{
- __this_cpu_inc(*desc->kstat_irqs);
+ __this_cpu_inc(desc->kstat_irqs->cnt);
__this_cpu_inc(kstat.irqs_sum);
}

diff --git a/kernel/irq/irqdesc.c b/kernel/irq/irqdesc.c
index 27ca1c866f29..9cd17080b2d8 100644
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -122,7 +122,7 @@ static void desc_set_defaults(unsigned int irq, struct irq_desc *desc, int node,
desc->name = NULL;
desc->owner = owner;
for_each_possible_cpu(cpu)
- *per_cpu_ptr(desc->kstat_irqs, cpu) = 0;
+ *per_cpu_ptr(desc->kstat_irqs, cpu) = (struct irqstat) { };
desc_smp_init(desc, node, affinity);
}

@@ -418,8 +418,8 @@ static struct irq_desc *alloc_desc(int irq, int node, unsigned int flags,
desc = kzalloc_node(sizeof(*desc), GFP_KERNEL, node);
if (!desc)
return NULL;
- /* allocate based on nr_cpu_ids */
- desc->kstat_irqs = alloc_percpu(unsigned int);
+
+ desc->kstat_irqs = alloc_percpu(struct irqstat);
if (!desc->kstat_irqs)
goto err_desc;

@@ -593,7 +593,7 @@ int __init early_irq_init(void)
count = ARRAY_SIZE(irq_desc);

for (i = 0; i < count; i++) {
- desc[i].kstat_irqs = alloc_percpu(unsigned int);
+ desc[i].kstat_irqs = alloc_percpu(struct irqstat);
alloc_masks(&desc[i], node);
raw_spin_lock_init(&desc[i].lock);
lockdep_set_class(&desc[i].lock, &irq_desc_lock_class);
@@ -952,8 +952,7 @@ unsigned int kstat_irqs_cpu(unsigned int irq, int cpu)
{
struct irq_desc *desc = irq_to_desc(irq);

- return desc && desc->kstat_irqs ?
- *per_cpu_ptr(desc->kstat_irqs, cpu) : 0;
+ return desc && desc->kstat_irqs ? per_cpu(desc->kstat_irqs->cnt, cpu) : 0;
}

static bool irq_is_nmi(struct irq_desc *desc)
@@ -975,10 +974,31 @@ static unsigned int kstat_irqs(unsigned int irq)
return data_race(desc->tot_count);

for_each_possible_cpu(cpu)
- sum += data_race(*per_cpu_ptr(desc->kstat_irqs, cpu));
+ sum += data_race(per_cpu(desc->kstat_irqs->cnt, cpu));
return sum;
}

+void kstat_snapshot_irqs(void)
+{
+ struct irq_desc *desc;
+ unsigned int irq;
+
+ for_each_irq_desc(irq, desc) {
+ if (!desc->kstat_irqs)
+ continue;
+ this_cpu_write(desc->kstat_irqs->ref, this_cpu_read(desc->kstat_irqs->cnt));
+ }
+}
+
+unsigned int kstat_get_irq_since_snapshot(unsigned int irq)
+{
+ struct irq_desc *desc = irq_to_desc(irq);
+
+ if (!desc || !desc->kstat_irqs)
+ return 0;
+ return this_cpu_read(desc->kstat_irqs->cnt) - this_cpu_read(desc->kstat_irqs->ref);
+}
+
/**
* kstat_irqs_usr - Get the statistics for an interrupt from thread context
* @irq: The interrupt number
diff --git a/kernel/irq/proc.c b/kernel/irq/proc.c
index 623b8136e9af..6954e0a02047 100644
--- a/kernel/irq/proc.c
+++ b/kernel/irq/proc.c
@@ -490,7 +490,7 @@ int show_interrupts(struct seq_file *p, void *v)

if (desc->kstat_irqs) {
for_each_online_cpu(j)
- any_count |= data_race(*per_cpu_ptr(desc->kstat_irqs, j));
+ any_count |= data_race(per_cpu(desc->kstat_irqs->cnt, j));
}

if ((!desc->action || irq_desc_is_chained(desc)) && !any_count)
@@ -498,8 +498,7 @@ int show_interrupts(struct seq_file *p, void *v)

seq_printf(p, "%*d: ", prec, i);
for_each_online_cpu(j)
- seq_printf(p, "%10u ", desc->kstat_irqs ?
- *per_cpu_ptr(desc->kstat_irqs, j) : 0);
+ seq_printf(p, "%10u ", desc->kstat_irqs ? per_cpu(desc->kstat_irqs->cnt, j) : 0);

raw_spin_lock_irqsave(&desc->lock, flags);
if (desc->irq_data.chip) {
diff --git a/scripts/gdb/linux/interrupts.py b/scripts/gdb/linux/interrupts.py
index ef478e273791..7e50f3b9dfad 100644
--- a/scripts/gdb/linux/interrupts.py
+++ b/scripts/gdb/linux/interrupts.py
@@ -37,7 +37,7 @@ def show_irq_desc(prec, irq):
any_count = 0
if desc['kstat_irqs']:
for cpu in cpus.each_online_cpu():
- any_count += cpus.per_cpu(desc['kstat_irqs'], cpu)
+ any_count += cpus.per_cpu(desc['kstat_irqs'], cpu)['cnt']

if (desc['action'] == 0 or irq_desc_is_chained(desc)) and any_count == 0:
return text;
@@ -45,7 +45,7 @@ def show_irq_desc(prec, irq):
text += "%*d: " % (prec, irq)
for cpu in cpus.each_online_cpu():
if desc['kstat_irqs']:
- count = cpus.per_cpu(desc['kstat_irqs'], cpu)
+ count = cpus.per_cpu(desc['kstat_irqs'], cpu)['cnt']
else:
count = 0
text += "%10u" % (count)
@@ -177,7 +177,7 @@ def arm_common_show_interrupts(prec):
if desc == 0:
continue
for cpu in cpus.each_online_cpu():
- text += "%10u" % (cpus.per_cpu(desc['kstat_irqs'], cpu))
+ text += "%10u" % (cpus.per_cpu(desc['kstat_irqs'], cpu)['cnt'])
text += " %s" % (ipi_types[ipi].string())
text += "\n"
return text
--
2.37.1 (Apple Git-137.1)


2024-02-28 22:45:13

by Doug Anderson

Subject: Re: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

Hi,

On Tue, Feb 27, 2024 at 11:22 PM Bitao Hu <[email protected]> wrote:
>
> The soft lockup detector lacks a mechanism to identify interrupt storms
> as root cause of a lockup. To enable this the detector needs a
> mechanism to snapshot the interrupt count statistics on a CPU when the
> detector observes a potential lockup scenario and compare that against
> the interrupt count when it warns about the lockup later on. The number
> of interrupts in that period give a hint whether the lockup might be
> caused by an interrupt storm.
>
> Instead of having extra storage in the lockup detector and accessing
> the internals of the interrupt descriptor directly, convert the per CPU
> irq_desc::kstat_irq member to a data structure which contains the
> counter plus a snapshot member and provide interfaces to take a
> snapshot of all interrupts on the current CPU and to retrieve the delta
> of a specific interrupt later on.
>
> Originally-by: Thomas Gleixner <[email protected]>
> Signed-off-by: Bitao Hu <[email protected]>
> Reviewed-by: Liu Song <[email protected]>
> ---
> arch/mips/dec/setup.c | 2 +-
> arch/parisc/kernel/smp.c | 2 +-
> arch/powerpc/kvm/book3s_hv_rm_xics.c | 2 +-
> include/linux/irqdesc.h | 14 ++++++++++--
> include/linux/kernel_stat.h | 3 +++
> kernel/irq/internals.h | 2 +-
> kernel/irq/irqdesc.c | 34 ++++++++++++++++++++++------
> kernel/irq/proc.c | 5 ++--
> scripts/gdb/linux/interrupts.py | 6 ++---
> 9 files changed, 51 insertions(+), 19 deletions(-)

I won't insist on it, but I continue to worry about memory
implications with large numbers of CPUs. With a 4-byte int, 8192 max
CPUs, and 100 IRQs the extra "ref" value takes up over 3MB of memory
(8192 * 4 bytes * 100).
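(That works out to 8192 * 4 * 100 = 3,276,800 bytes, a bit over 3 MiB.)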

Technically, you could add a new symbol like "config
NEED_IRQ_SNAPSHOTS". This wouldn't be a symbol selectable by the end
user but would automatically be selected by "config
SOFTLOCKUP_DETECTOR_INTR_STORM". If the config wasn't defined then the
struct wouldn't contain "ref" and the snapshot routines would just be
static inline stubs.
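
For illustration, a rough sketch of what those stubs could look like in
kernel_stat.h, using the hypothetical NEED_IRQ_SNAPSHOTS name from above
(this is not code from the posted series):

#ifdef CONFIG_NEED_IRQ_SNAPSHOTS
void kstat_snapshot_irqs(void);
unsigned int kstat_get_irq_since_snapshot(unsigned int irq);
#else
static inline void kstat_snapshot_irqs(void) { }
static inline unsigned int kstat_get_irq_since_snapshot(unsigned int irq) { return 0; }
#endif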

Maybe Thomas has an opinion about whether this is something to worry
about. Worst case it wouldn't be hard to do in a follow-up patch.

Everything else looks good to me. Given that I'm not insisting on
adding the extra CONFIG, I'm OK w/:

Reviewed-by: Douglas Anderson <[email protected]>

2024-02-28 22:45:27

by Doug Anderson

Subject: Re: [PATCHv11 3/4] genirq: Avoid summation loops for /proc/interrupts

Hi,

On Tue, Feb 27, 2024 at 11:22 PM Bitao Hu <[email protected]> wrote:
>
> show_interrupts() unconditionally accumulates the per CPU interrupt
> statistics to determine whether an interrupt was ever raised.
>
> This can be avoided for all interrupts which are not strictly per CPU
> and not of type NMI because those interrupts provide already an
> accumulated counter. The required logic is already implemented in
> kstat_irqs().
>
> Split the inner access logic out of kstat_irqs() and use it for
> kstat_irqs() and show_interrupts() to avoid the accumulation loop
> when possible.
>
> Originally-by: Thomas Gleixner <[email protected]>
> Signed-off-by: Bitao Hu <[email protected]>
> Reviewed-by: Liu Song <[email protected]>
> ---
> kernel/irq/internals.h | 2 ++
> kernel/irq/irqdesc.c | 16 +++++++++++-----
> kernel/irq/proc.c | 6 ++----
> 3 files changed, 15 insertions(+), 9 deletions(-)

Reviewed-by: Douglas Anderson <[email protected]>

2024-02-28 22:45:41

by Doug Anderson

Subject: Re: [PATCHv11 4/4] watchdog/softlockup: report the most frequent interrupts

Hi,

On Tue, Feb 27, 2024 at 11:22 PM Bitao Hu <[email protected]> wrote:
>
> When the watchdog determines that the current soft lockup is due
> to an interrupt storm based on CPU utilization, reporting the
> most frequent interrupts could be good enough for further
> troubleshooting.
>
> Below is an example of interrupt storm. The call tree does not
> provide useful information, but we can analyze which interrupt
> caused the soft lockup by comparing the counts of interrupts.
>
> [ 638.870231] watchdog: BUG: soft lockup - CPU#9 stuck for 26s! [swapper/9:0]
> [ 638.870825] CPU#9 Utilization every 4s during lockup:
> [ 638.871194] #1: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 638.871652] #2: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 638.872107] #3: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 638.872563] #4: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 638.873018] #5: 0% system, 0% softirq, 100% hardirq, 0% idle
> [ 638.873494] CPU#9 Detect HardIRQ Time exceeds 50%. Most frequent HardIRQs:
> [ 638.873994] #1: 330945 irq#7
> [ 638.874236] #2: 31 irq#82
> [ 638.874493] #3: 10 irq#10
> [ 638.874744] #4: 2 irq#89
> [ 638.874992] #5: 1 irq#102
> ...
> [ 638.875313] Call trace:
> [ 638.875315] __do_softirq+0xa8/0x364
>
> Signed-off-by: Bitao Hu <[email protected]>
> Reviewed-by: Liu Song <[email protected]>
> ---
> kernel/watchdog.c | 115 ++++++++++++++++++++++++++++++++++++++++++++--
> 1 file changed, 111 insertions(+), 4 deletions(-)

Reviewed-by: Douglas Anderson <[email protected]>

2024-03-01 19:23:06

by Thomas Gleixner

Subject: Re: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

Doug!

On Wed, Feb 28 2024 at 14:44, Doug Anderson wrote:
> I won't insist on it, but I continue to worry about memory
> implications with large numbers of CPUs. With a 4-byte int, 8192 max
> CPUs, and 100 IRQs the extra "ref" value takes up over 3MB of memory
> (8192 * 4 bytes * 100).
>
> Technically, you could add a new symbol like "config
> NEED_IRQ_SNAPSHOTS". This wouldn't be a symbol selectable by the end
> user but would automatically be selected by "config
> SOFTLOCKUP_DETECTOR_INTR_STORM". If the config wasn't defined then the
> struct wouldn't contain "ref" and the snapshot routines would just be
> static inline stubs.
>
> Maybe Thomas has an opinion about whether this is something to worry
> about. Worst case it wouldn't be hard to do in a follow-up patch.

I'd say it makes sense to give people a choice to save memory, especially
when the softlockup detector code is not enabled.

It's rather straight forward to do.

Thanks,

tglx
---
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -24,7 +24,9 @@ struct pt_regs;
*/
struct irqstat {
unsigned int cnt;
+#ifdef CONFIG_GENIRQ_STAT_SNAPSHOT
unsigned int ref;
+#endif
};

/**
--- a/kernel/irq/irqdesc.c
+++ b/kernel/irq/irqdesc.c
@@ -978,6 +978,7 @@ static unsigned int kstat_irqs(unsigned
return sum;
}

+#ifdef CONFIG_GENIRQ_STAT_SNAPSHOT
void kstat_snapshot_irqs(void)
{
struct irq_desc *desc;
@@ -998,6 +999,7 @@ unsigned int kstat_get_irq_since_snapsho
return 0;
return this_cpu_read(desc->kstat_irqs->cnt) - this_cpu_read(desc->kstat_irqs->ref);
}
+#endif

/**
* kstat_irqs_usr - Get the statistics for an interrupt from thread context
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -108,6 +108,10 @@ config GENERIC_IRQ_MATRIX_ALLOCATOR
config GENERIC_IRQ_RESERVATION_MODE
bool

+# Snapshot for interrupt statistics
+config GENERIC_IRQ_STAT_SNAPSHOT
+ bool
+
# Support forced irq threading
config IRQ_FORCED_THREADING
bool

2024-03-04 12:01:08

by Bitao Hu

Subject: Re: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

Hi,

On 2024/3/2 03:22, Thomas Gleixner wrote:
> Doug!
>
> On Wed, Feb 28 2024 at 14:44, Doug Anderson wrote:
>> I won't insist on it, but I continue to worry about memory
>> implications with large numbers of CPUs. With a 4-byte int, 8192 max
>> CPUs, and 100 IRQs the extra "ref" value takes up over 3MB of memory
>> (8192 * 4 bytes * 100).
>>
>> Technically, you could add a new symbol like "config
>> NEED_IRQ_SNAPSHOTS". This wouldn't be a symbol selectable by the end
>> user but would automatically be selected by "config
>> SOFTLOCKUP_DETECTOR_INTR_STORM". If the config wasn't defined then the
>> struct wouldn't contain "ref" and the snapshot routines would just be
>> static inline stubs.
>>
>> Maybe Thomas has an opinion about whether this is something to worry
>> about. Worst case it wouldn't be hard to do in a follow-up patch.
>
> I'd say it makes sense to give people a choice to save memory, especially
> when the softlockup detector code is not enabled.
>
> It's rather straight forward to do.
>
> Thanks,
>
> tglx
> ---
> --- a/include/linux/irqdesc.h
> +++ b/include/linux/irqdesc.h
> @@ -24,7 +24,9 @@ struct pt_regs;
> */
> struct irqstat {
> unsigned int cnt;
> +#ifdef CONFIG_GENIRQ_STAT_SNAPSHOT
> unsigned int ref;
> +#endif
> };
>
> /**
> --- a/kernel/irq/irqdesc.c
> +++ b/kernel/irq/irqdesc.c
> @@ -978,6 +978,7 @@ static unsigned int kstat_irqs(unsigned
> return sum;
> }
>
> +#ifdef CONFIG_GENIRQ_STAT_SNAPSHOT
> void kstat_snapshot_irqs(void)
> {
> struct irq_desc *desc;
> @@ -998,6 +999,7 @@ unsigned int kstat_get_irq_since_snapsho
> return 0;
> return this_cpu_read(desc->kstat_irqs->cnt) - this_cpu_read(desc->kstat_irqs->ref);
> }
> +#endif
>
> /**
> * kstat_irqs_usr - Get the statistics for an interrupt from thread context
> --- a/kernel/irq/Kconfig
> +++ b/kernel/irq/Kconfig
> @@ -108,6 +108,10 @@ config GENERIC_IRQ_MATRIX_ALLOCATOR
> config GENERIC_IRQ_RESERVATION_MODE
> bool
>
> +# Snapshot for interrupt statistics
> +config GENERIC_IRQ_STAT_SNAPSHOT
> + bool
> +
> # Support forced irq threading
> config IRQ_FORCED_THREADING
> bool

I think we should follow Douglas's suggestion by making
"config GENERIC_IRQ_STAT_SNAPSHOT" automatically selectable by
"config SOFTLOCKUP_DETECTOR_INTR_STORM". This can prevent users
from inadvertently disabling "config GENERIC_IRQ_STAT_SNAPSHOT"
while enabling "config SOFTLOCKUP_DETECTOR_INTR_STORM".

Best Regards,
Bitao Hu

diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 2531f3496ab6..9cf3b2d4c2a8 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -108,6 +108,15 @@ config GENERIC_IRQ_MATRIX_ALLOCATOR
config GENERIC_IRQ_RESERVATION_MODE
bool

+# Snapshot for interrupt statistics
+config GENERIC_IRQ_STAT_SNAPSHOT
+ bool
+ help
+
+ Say Y here to enable the kernel to provide a snapshot mechanism
+ for interrupt statistics.
+
+
# Support forced irq threading
config IRQ_FORCED_THREADING
bool
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 49f652674bd8..899b69fcb598 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1032,6 +1032,7 @@ config SOFTLOCKUP_DETECTOR
config SOFTLOCKUP_DETECTOR_INTR_STORM
bool "Detect Interrupt Storm in Soft Lockups"
depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
+ select GENERIC_IRQ_STAT_SNAPSHOT
default y if NR_CPUS <= 128
help
Say Y here to enable the kernel to detect interrupt storm

2024-03-04 14:26:23

by Thomas Gleixner

Subject: Re: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

On Mon, Mar 04 2024 at 20:00, Bitao Hu wrote:
>> +# Snapshot for interrupt statistics
>> +config GENERIC_IRQ_STAT_SNAPSHOT
>> + bool
>> +
>> # Support forced irq threading
>> config IRQ_FORCED_THREADING
>> bool
>
> I think we should follow Douglas's suggestion by making
> "config GENERIC_IRQ_STAT_SNAPSHOT" automatically selectable by
> "config SOFTLOCKUP_DETECTOR_INTR_STORM". This can prevent users
> from inadvertently disabling "config GENERIC_IRQ_STAT_SNAPSHOT"
> while enabling "config SOFTLOCKUP_DETECTOR_INTR_STORM".

The above is not even configurable by the user. It's only selectable by
some other config option.

> +# Snapshot for interrupt statistics
> +config GENERIC_IRQ_STAT_SNAPSHOT
> + bool
> + help
> +
> + Say Y here to enable the kernel to provide a snapshot mechanism
> + for interrupt statistics.

That makes it visible, which is pointless because it's only relevant when
there is an actual user.

Thanks,

tglx

2024-03-05 11:11:21

by Bitao Hu

Subject: Re: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

Hi,

On 2024/3/4 22:24, Thomas Gleixner wrote:
> The above is not even configurable by the user. It's only selectable by
> some other config option.
>
>> +# Snapshot for interrupt statistics
>> +config GENERIC_IRQ_STAT_SNAPSHOT
>> + bool
>> + help
>> +
>> + Say Y here to enable the kernel to provide a snapshot mechanism
>> + for interrupt statistics.
>
> That makes it visible, which is pointless because it's only relevant when
> there is an actual user.
I guess I may have misunderstood your intentions earlier. Initially, I
thought you were suggesting that when "SOFTLOCKUP_DETECTOR_INTR_STORM"
is not enabled, people should be able to choose
"GENERIC_IRQ_STAT_SNAPSHOT" through menuconfig, so I attempted to make
"GENERIC_IRQ_STAT_SNAPSHOT" visible to the user. However, after
analyzing the previous emails, it seems that what you were actually
proposing was to directly disable "GENERIC_IRQ_STAT_SNAPSHOT" when
"SOFTLOCKUP_DETECTOR_INTR_STORM" is not enabled, as a way to save
memory. If my current understanding is correct, then the code for that
part would look something like the following.

Does this align with your expectations?

Best Regards,
Bitao Hu

diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
index 2531f3496ab6..a28e5ac5fc79 100644
--- a/kernel/irq/Kconfig
+++ b/kernel/irq/Kconfig
@@ -108,6 +108,10 @@ config GENERIC_IRQ_MATRIX_ALLOCATOR
config GENERIC_IRQ_RESERVATION_MODE
bool

+# Snapshot for interrupt statistics
+config GENERIC_IRQ_STAT_SNAPSHOT
+ bool
+
# Support forced irq threading
config IRQ_FORCED_THREADING
bool
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 49f652674bd8..899b69fcb598 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1032,6 +1032,7 @@ config SOFTLOCKUP_DETECTOR
config SOFTLOCKUP_DETECTOR_INTR_STORM
bool "Detect Interrupt Storm in Soft Lockups"
depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
+ select GENERIC_IRQ_STAT_SNAPSHOT
default y if NR_CPUS <= 128
help
Say Y here to enable the kernel to detect interrupt storm







2024-03-05 17:06:41

by Thomas Gleixner

Subject: Re: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

On Tue, Mar 05 2024 at 18:57, Bitao Hu wrote:
> On 2024/3/4 22:24, Thomas Gleixner wrote:
> "GENERIC_IRQ_STAT_SNAPSHOT" visible to the user. However, after
> analyzing the previous emails, it seems that what you were actually
> proposing was to directly disable "GENERIC_IRQ_STAT_SNAPSHOT" when
> "SOFTLOCKUP_DETECTOR_INTR_STORM" is not enabled, as a way to save
> memory. If my current understanding is correct, then the code for that
> part would look something like the following.

Correct.

> diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
> index 2531f3496ab6..a28e5ac5fc79 100644
> --- a/kernel/irq/Kconfig
> +++ b/kernel/irq/Kconfig
> @@ -108,6 +108,10 @@ config GENERIC_IRQ_MATRIX_ALLOCATOR
> config GENERIC_IRQ_RESERVATION_MODE
> bool
>
> +# Snapshot for interrupt statistics
> +config GENERIC_IRQ_STAT_SNAPSHOT
> + bool
> +
> # Support forced irq threading
> config IRQ_FORCED_THREADING
> bool
> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> index 49f652674bd8..899b69fcb598 100644
> --- a/lib/Kconfig.debug
> +++ b/lib/Kconfig.debug
> @@ -1032,6 +1032,7 @@ config SOFTLOCKUP_DETECTOR
> config SOFTLOCKUP_DETECTOR_INTR_STORM
> bool "Detect Interrupt Storm in Soft Lockups"
> depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
> + select GENERIC_IRQ_STAT_SNAPSHOT

This goes into the patch which adds the lockup detector parts.

Thanks,

tglx

2024-03-06 11:09:44

by Bitao Hu

Subject: Re: [PATCHv11 2/4] genirq: Provide a snapshot mechanism for interrupt statistics

Hi,

On 2024/3/6 00:57, Thomas Gleixner wrote:
>
>> diff --git a/kernel/irq/Kconfig b/kernel/irq/Kconfig
>> index 2531f3496ab6..a28e5ac5fc79 100644
>> --- a/kernel/irq/Kconfig
>> +++ b/kernel/irq/Kconfig
>> @@ -108,6 +108,10 @@ config GENERIC_IRQ_MATRIX_ALLOCATOR
>> config GENERIC_IRQ_RESERVATION_MODE
>> bool
>>
>> +# Snapshot for interrupt statistics
>> +config GENERIC_IRQ_STAT_SNAPSHOT
>> + bool
>> +
>> # Support forced irq threading
>> config IRQ_FORCED_THREADING
>> bool
>> diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
>> index 49f652674bd8..899b69fcb598 100644
>> --- a/lib/Kconfig.debug
>> +++ b/lib/Kconfig.debug
>> @@ -1032,6 +1032,7 @@ config SOFTLOCKUP_DETECTOR
>> config SOFTLOCKUP_DETECTOR_INTR_STORM
>> bool "Detect Interrupt Storm in Soft Lockups"
>> depends on SOFTLOCKUP_DETECTOR && IRQ_TIME_ACCOUNTING
>> + select GENERIC_IRQ_STAT_SNAPSHOT
>
> This goes into the patch which adds the lockup detector parts.
>

OK, I will implement this in the next version.

Best Regards,
Bitao Hu