2022-09-03 10:05:06

by Lecopzer Chen

Subject: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for

We have used the hld (hard lockup detector) internally for arm64 since
2020, but there is still no proper commit upstream and we badly need
one.

This series is a rework, on top of 5.17, of [1]; the original authors
are
Pingfan Liu <[email protected]>
Sumit Garg <[email protected]>

Quote from [1]:

> Hard lockup detector is helpful to diagnose unpaired irq
> enable/disable.
> But the current watchdog framework can not cope with arm64 hw perf
> event
> easily.

> On arm64, when lockup_detector_init()->watchdog_nmi_probe(), PMU is
> not
> ready until device_initcall(armv8_pmu_driver_init). And it is deeply
> integrated with the driver model and cpuhp. Hence it is hard to push
> the
> initialization of armv8_pmu_driver_init() before smp_init().

> But it is easy to take an opposite approach by enabling watchdog_hld
> to
> get the capability of PMU async.
> The async model is achieved by expanding watchdog_nmi_probe() with
> -EBUSY, and a re-initializing work_struct which waits on a
> wait_queue_head.

Provide an API, retry_lockup_detector_init(), for anyone who needs
to delay init of the lockup detector.

The original assumption is: nobody should use the delayed probe after
lockup_detector_check() (which has the __init attribute).
That is, anyone using this API must call it between
lockup_detector_init() and lockup_detector_check(), and the caller
must have the __init attribute.

The delayed init flow is:
1. lockup_detector_init() -> watchdog_nmi_probe() gets a non-zero
return, then sets allow_lockup_detector_init_retry to true, which
means a delayed probe can be done later.

2. Once PMU arch code init is done, it calls
retry_lockup_detector_init().

3. retry_lockup_detector_init() queues the work only when
allow_lockup_detector_init_retry is true, which means nobody should
call this before lockup_detector_init().

4. The work item lockup_detector_delay_init() runs without a wait
event; if the probe succeeds, it sets
allow_lockup_detector_init_retry to false.

5. At late_initcall_sync(), lockup_detector_check() first sets
allow_lockup_detector_init_retry to false to prevent any later retry,
and then calls flush_work() to make sure the __init section won't be
freed before the work is done.

[1]
https://lore.kernel.org/lkml/[email protected]/

v7:
rebase on v6.0-rc3

v6:
fix build failure reported by kernel test robot <[email protected]>
https://lore.kernel.org/lkml/[email protected]/

v5:
1. rebase on v5.19-rc2
2. change to the proper schedule API
3. check the return value before calling retry_lockup_detector_init()
https://lore.kernel.org/lkml/[email protected]/

v4:
1. remove the -EBUSY protocol; let any non-zero value from
watchdog_nmi_probe() allow a retry.
2. separate arm64 part patch into hw_nmi_get_sample_period and retry
delayed init
3. tweak commit msg that we don't have to limit to -EBUSY
4. rebase on v5.18-rc4
https://lore.kernel.org/lkml/[email protected]/

v3:
1. Tweak commit message in patch 04
2. Remove wait event
3. s/lockup_detector_pending_init/allow_lockup_detector_init_retry/
4. provide api retry_lockup_detector_init()
https://lore.kernel.org/lkml/[email protected]/

v2:
1. Tweak commit message in patch 01/02/04/05
2. Remove verbose WARN in patch 04 within the watchdog core.
3. Change from a three-state variable, detector_delay_init_state, to
a two-state variable, allow_lockup_detector_init_retry

Thanks Petr Mladek <[email protected]> for the idea.
> 1. lockup_detector_work() called before lockup_detector_check().
> In this case, wait_event() will wait until
> lockup_detector_check()
> clears detector_delay_pending_init and calls wake_up().

> 2. lockup_detector_check() called before lockup_detector_work().
> In this case, wait_even() will immediately continue because
> it will see cleared detector_delay_pending_init.
4. Add comments in code in patches 04/05 for the two-state variable
changes.
https://lore.kernel.org/lkml/[email protected]/


Lecopzer Chen (5):
kernel/watchdog: remove WATCHDOG_DEFAULT
kernel/watchdog: change watchdog_nmi_enable() to void
kernel/watchdog: Adapt the watchdog_hld interface for async model
arm64: add hw_nmi_get_sample_period for preparation of lockup detector
arm64: Enable perf events based hard lockup detector

Pingfan Liu (1):
kernel/watchdog_hld: Ensure CPU-bound context when creating hardlockup
detector event

arch/arm64/Kconfig | 2 +
arch/arm64/kernel/Makefile | 1 +
arch/arm64/kernel/perf_event.c | 12 +++++-
arch/arm64/kernel/watchdog_hld.c | 39 +++++++++++++++++
arch/sparc/kernel/nmi.c | 8 ++--
drivers/perf/arm_pmu.c | 5 +++
include/linux/nmi.h | 4 +-
include/linux/perf/arm_pmu.h | 2 +
kernel/watchdog.c | 72 +++++++++++++++++++++++++++++---
kernel/watchdog_hld.c | 8 +++-
10 files changed, 139 insertions(+), 14 deletions(-)
create mode 100644 arch/arm64/kernel/watchdog_hld.c

--
2.25.1


2022-09-03 10:28:52

by Lecopzer Chen

Subject: [PATCH v7 2/6] kernel/watchdog: change watchdog_nmi_enable() to void

No caller uses the return value of watchdog_nmi_enable(), so change
its prototype to void.

Signed-off-by: Pingfan Liu <[email protected]>
Signed-off-by: Lecopzer Chen <[email protected]>
Reviewed-by: Petr Mladek <[email protected]>
---
arch/sparc/kernel/nmi.c | 8 +++-----
include/linux/nmi.h | 2 +-
kernel/watchdog.c | 3 +--
3 files changed, 5 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/kernel/nmi.c b/arch/sparc/kernel/nmi.c
index 060fff95a305..5dcf31f7e81f 100644
--- a/arch/sparc/kernel/nmi.c
+++ b/arch/sparc/kernel/nmi.c
@@ -282,11 +282,11 @@ __setup("nmi_watchdog=", setup_nmi_watchdog);
* sparc specific NMI watchdog enable function.
* Enables watchdog if it is not enabled already.
*/
-int watchdog_nmi_enable(unsigned int cpu)
+void watchdog_nmi_enable(unsigned int cpu)
{
if (atomic_read(&nmi_active) == -1) {
pr_warn("NMI watchdog cannot be enabled or disabled\n");
- return -1;
+ return;
}

/*
@@ -295,11 +295,9 @@ int watchdog_nmi_enable(unsigned int cpu)
* process first.
*/
if (!nmi_init_done)
- return 0;
+ return;

smp_call_function_single(cpu, start_nmi_watchdog, NULL, 1);
-
- return 0;
}
/*
* sparc specific NMI watchdog disable function.
diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index f700ff2df074..81217ebbc4bd 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -119,7 +119,7 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }
void watchdog_nmi_stop(void);
void watchdog_nmi_start(void);
int watchdog_nmi_probe(void);
-int watchdog_nmi_enable(unsigned int cpu);
+void watchdog_nmi_enable(unsigned int cpu);
void watchdog_nmi_disable(unsigned int cpu);

void lockup_detector_reconfigure(void);
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index 582d572e1379..c705a18b26bf 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -93,10 +93,9 @@ __setup("nmi_watchdog=", hardlockup_panic_setup);
* softlockup watchdog start and stop. The arch must select the
* SOFTLOCKUP_DETECTOR Kconfig.
*/
-int __weak watchdog_nmi_enable(unsigned int cpu)
+void __weak watchdog_nmi_enable(unsigned int cpu)
{
hardlockup_detector_perf_enable();
- return 0;
}

void __weak watchdog_nmi_disable(unsigned int cpu)
--
2.34.1

2022-09-03 10:32:20

by Lecopzer Chen

Subject: [PATCH v7 4/6] kernel/watchdog: Adapt the watchdog_hld interface for async model

When lockup_detector_init()->watchdog_nmi_probe() runs, the PMU may
not be ready yet. E.g. on arm64, the PMU is not ready until
device_initcall(armv8_pmu_driver_init), and it is deeply integrated
with the driver model and cpuhp. Hence it is hard to push this
initialization before smp_init().

But it is easy to take the opposite approach and try to initialize
the watchdog once again later.
The delayed probe is called using a workqueue. It needs to allocate
memory and must proceed in normal context.
The delayed probe can be used whenever watchdog_nmi_probe() returns
non-zero, which is the return code when the PMU is not ready yet.

Provide an API, retry_lockup_detector_init(), for anyone who needs
to delay init of the lockup detector after it has failed in
lockup_detector_init().

The original assumption is: nobody should use the delayed probe after
lockup_detector_check(), which has the __init attribute.
That is, anyone using this API must call it between
lockup_detector_init() and lockup_detector_check(), and the caller
must have the __init attribute.

Reviewed-by: Petr Mladek <[email protected]>
Co-developed-by: Pingfan Liu <[email protected]>
Signed-off-by: Pingfan Liu <[email protected]>
Signed-off-by: Lecopzer Chen <[email protected]>
Suggested-by: Petr Mladek <[email protected]>
Reported-by: kernel test robot <[email protected]>
---
include/linux/nmi.h | 2 ++
kernel/watchdog.c | 67 ++++++++++++++++++++++++++++++++++++++++++++-
2 files changed, 68 insertions(+), 1 deletion(-)

diff --git a/include/linux/nmi.h b/include/linux/nmi.h
index 81217ebbc4bd..7f128e3aae38 100644
--- a/include/linux/nmi.h
+++ b/include/linux/nmi.h
@@ -118,6 +118,8 @@ static inline int hardlockup_detector_perf_init(void) { return 0; }

void watchdog_nmi_stop(void);
void watchdog_nmi_start(void);
+
+void retry_lockup_detector_init(void);
int watchdog_nmi_probe(void);
void watchdog_nmi_enable(unsigned int cpu);
void watchdog_nmi_disable(unsigned int cpu);
diff --git a/kernel/watchdog.c b/kernel/watchdog.c
index c705a18b26bf..0b650d726e50 100644
--- a/kernel/watchdog.c
+++ b/kernel/watchdog.c
@@ -103,7 +103,13 @@ void __weak watchdog_nmi_disable(unsigned int cpu)
hardlockup_detector_perf_disable();
}

-/* Return 0, if a NMI watchdog is available. Error code otherwise */
+/*
+ * Arch specific API.
+ *
+ * Return 0 when NMI watchdog is available, negative value otherwise.
+ * Note that the negative value means that a delayed probe might
+ * succeed later.
+ */
int __weak __init watchdog_nmi_probe(void)
{
return hardlockup_detector_perf_init();
@@ -850,6 +856,62 @@ static void __init watchdog_sysctl_init(void)
#define watchdog_sysctl_init() do { } while (0)
#endif /* CONFIG_SYSCTL */

+static void __init lockup_detector_delay_init(struct work_struct *work);
+static bool allow_lockup_detector_init_retry __initdata;
+
+static struct work_struct detector_work __initdata =
+ __WORK_INITIALIZER(detector_work, lockup_detector_delay_init);
+
+static void __init lockup_detector_delay_init(struct work_struct *work)
+{
+ int ret;
+
+ ret = watchdog_nmi_probe();
+ if (ret) {
+ pr_info("Delayed init of the lockup detector failed: %d\n", ret);
+ pr_info("Perf NMI watchdog permanently disabled\n");
+ return;
+ }
+
+ allow_lockup_detector_init_retry = false;
+
+ nmi_watchdog_available = true;
+ lockup_detector_setup();
+}
+
+/*
+ * retry_lockup_detector_init - retry init lockup detector if possible.
+ *
+ * Retry hardlockup detector init. It is useful when it requires some
+ * functionality that has to be initialized later on a particular
+ * platform.
+ */
+void __init retry_lockup_detector_init(void)
+{
+ /* Must be called before late init calls */
+ if (!allow_lockup_detector_init_retry)
+ return;
+
+ schedule_work(&detector_work);
+}
+
+/*
+ * Ensure that the optional delayed hardlockup init has proceeded
+ * before the init code and memory are freed.
+ */
+static int __init lockup_detector_check(void)
+{
+ /* Prevent any later retry. */
+ allow_lockup_detector_init_retry = false;
+
+ /* Make sure no work is pending. */
+ flush_work(&detector_work);
+
+ return 0;
+
+}
+late_initcall_sync(lockup_detector_check);
+
void __init lockup_detector_init(void)
{
if (tick_nohz_full_enabled())
@@ -860,6 +922,9 @@ void __init lockup_detector_init(void)

if (!watchdog_nmi_probe())
nmi_watchdog_available = true;
+ else
+ allow_lockup_detector_init_retry = true;
+
lockup_detector_setup();
watchdog_sysctl_init();
}
--
2.34.1

2022-11-07 16:30:01

by Will Deacon

Subject: Re: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for

On Sat, Sep 03, 2022 at 05:34:09PM +0800, Lecopzer Chen wrote:
> As we already used hld internally for arm64 since 2020, there still
> doesn't have a proper commit on the upstream and we badly need it.
>
> This serise rework on 5.17 from [1] and the origin author is
> Pingfan Liu <[email protected]>
> Sumit Garg <[email protected]>

I'd definitely want Mark's ack on this, as he previously had suggestions
when we reverted the old broken code back in:

https://lore.kernel.org/r/[email protected]

Will

2023-05-04 22:55:12

by Doug Anderson

Subject: Re: [PATCH v7 0/6] Support hld delayed init based on Pseudo-NMI for

Hi,

On Sat, Sep 3, 2022 at 2:35 AM Lecopzer Chen <[email protected]> wrote:
>
> As we already used hld internally for arm64 since 2020, there still
> doesn't have a proper commit on the upstream and we badly need it.
>
> This serise rework on 5.17 from [1] and the origin author is
> Pingfan Liu <[email protected]>
> Sumit Garg <[email protected]>
>
> [...]

To leave some breadcrumbs, I've included all of these patches in
my latest "buddy" hardlockup detector series. I'm hoping that the
cleanup patches that were part of your series can land as part of my
series. I'm not necessarily expecting that the arm64 perf hardlockup
detector patches will land as part of my series, though. See the cover
letter and "after-the-cut" notes on the later patches in my series for
details.

https://lore.kernel.org/r/[email protected]