2024-01-29 13:53:48

by Feng Tang

Subject: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

There was a bug on one 8-socket server where the TSC was wrongly marked
as 'unstable' and disabled during boot (it reproduced roughly once every
120 rounds of reboot tests), with the following log:

clocksource: timekeeping watchdog on CPU227: wd-tsc-wd excessive read-back delay of 153560ns vs. limit of 125000ns,
wd-wd read-back delay only 11440ns, attempt 3, marking tsc unstable
tsc: Marking TSC unstable due to clocksource watchdog
TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
sched_clock: Marking unstable (119294969739, 159204297)<-(125446229205, -5992055152)
clocksource: Checking clocksource tsc synchronization from CPU 319 to CPUs 0,99,136,180,210,542,601,896.
clocksource: Switched to clocksource hpet

The reason is that on platforms with many CPUs, there are sporadic large
or huge read latencies when reading the watchdog/clocksource during boot
or when the system is under a stressful workload, and both the frequency
and the maximum value of these latencies increase with the number of
CPUs. The current code already has logic to detect and filter such
high-latency cases by reading the watchdog three times and checking the
two deltas. Due to the randomness of the latency, there is a
low-probability case where the first delta (latency) is big but the
second delta is small and looks valid, which can escape the check.
The 'max_cswd_read_retries' parameter exists to retry that check and
cover this case, but its default value is only 2, which may not be
enough for machines with a huge number of CPUs.

So scale and enlarge the max retry number according to the number of
CPUs to better filter out such latency noise on large systems. This has
been verified over 4 days and 670 rounds of reboot tests on the
8-socket machine.

Also add a sanity check for user input values of 'max_cswd_read_retries',
make it self-adaptive by default, and provide a general helper for
getting this max retry number, as suggested by Paul and Waiman.

Cc: Paul E. McKenney <[email protected]>
Cc: Waiman Long <[email protected]>
Signed-off-by: Feng Tang <[email protected]>
Tested-by: Jin Wang <[email protected]>
---
Changelog:

since v2:
* Fix the issue of the unexported helper function symbol being
used by a kernel module (Waiman)

since v1:
* Add a sanity check for the user input value of 'max_cswd_read_retries'
and a helper function for getting the max retry number (Paul)
* Apply the same logic to watchdog test code (Waiman)

include/linux/clocksource.h | 18 +++++++++++++++++-
kernel/time/clocksource-wdtest.c | 12 +++++++-----
kernel/time/clocksource.c | 10 ++++++----
3 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
index 1d42d4b17327..0483f7dd66a3 100644
--- a/include/linux/clocksource.h
+++ b/include/linux/clocksource.h
@@ -291,7 +291,23 @@ static inline void timer_probe(void) {}
#define TIMER_ACPI_DECLARE(name, table_id, fn) \
ACPI_DECLARE_PROBE_ENTRY(timer, name, table_id, 0, NULL, 0, fn)

-extern ulong max_cswd_read_retries;
+extern long max_cswd_read_retries;
+
+static inline long clocksource_max_watchdog_read_retries(void)
+{
+ long max_retries = max_cswd_read_retries;
+
+ if (max_cswd_read_retries <= 0) {
+ /* sanity check for user input value */
+ if (max_cswd_read_retries != -1)
+ pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
+ max_cswd_read_retries);
+
+ max_retries = ilog2(num_online_cpus()) + 1;
+ }
+ return max_retries;
+}
+
void clocksource_verify_percpu(struct clocksource *cs);

#endif /* _LINUX_CLOCKSOURCE_H */
diff --git a/kernel/time/clocksource-wdtest.c b/kernel/time/clocksource-wdtest.c
index df922f49d171..c70cea3c44a1 100644
--- a/kernel/time/clocksource-wdtest.c
+++ b/kernel/time/clocksource-wdtest.c
@@ -106,6 +106,7 @@ static int wdtest_func(void *arg)
unsigned long j1, j2;
char *s;
int i;
+ long max_retries;

schedule_timeout_uninterruptible(holdoff * HZ);

@@ -139,18 +140,19 @@ static int wdtest_func(void *arg)
WARN_ON_ONCE(time_before(j2, j1 + NSEC_PER_USEC));

/* Verify tsc-like stability with various numbers of errors injected. */
- for (i = 0; i <= max_cswd_read_retries + 1; i++) {
- if (i <= 1 && i < max_cswd_read_retries)
+ max_retries = clocksource_max_watchdog_read_retries();
+ for (i = 0; i <= max_retries + 1; i++) {
+ if (i <= 1 && i < max_retries)
s = "";
- else if (i <= max_cswd_read_retries)
+ else if (i <= max_retries)
s = ", expect message";
else
s = ", expect clock skew";
- pr_info("--- Watchdog with %dx error injection, %lu retries%s.\n", i, max_cswd_read_retries, s);
+ pr_info("--- Watchdog with %dx error injection, %ld retries%s.\n", i, max_retries, s);
WRITE_ONCE(wdtest_ktime_read_ndelays, i);
schedule_timeout_uninterruptible(2 * HZ);
WARN_ON_ONCE(READ_ONCE(wdtest_ktime_read_ndelays));
- WARN_ON_ONCE((i <= max_cswd_read_retries) !=
+ WARN_ON_ONCE((i <= max_retries) !=
!(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE));
wdtest_ktime_clocksource_reset();
}
diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
index c108ed8a9804..2e5a1d6c6712 100644
--- a/kernel/time/clocksource.c
+++ b/kernel/time/clocksource.c
@@ -208,8 +208,8 @@ void clocksource_mark_unstable(struct clocksource *cs)
spin_unlock_irqrestore(&watchdog_lock, flags);
}

-ulong max_cswd_read_retries = 2;
-module_param(max_cswd_read_retries, ulong, 0644);
+long max_cswd_read_retries = -1;
+module_param(max_cswd_read_retries, long, 0644);
EXPORT_SYMBOL_GPL(max_cswd_read_retries);
static int verify_n_cpus = 8;
module_param(verify_n_cpus, int, 0644);
@@ -225,8 +225,10 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
unsigned int nretries;
u64 wd_end, wd_end2, wd_delta;
int64_t wd_delay, wd_seq_delay;
+ long max_retries;

- for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
+ max_retries = clocksource_max_watchdog_read_retries();
+ for (nretries = 0; nretries <= max_retries; nretries++) {
local_irq_disable();
*wdnow = watchdog->read(watchdog);
*csnow = cs->read(cs);
@@ -238,7 +240,7 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
watchdog->shift);
if (wd_delay <= WATCHDOG_MAX_SKEW) {
- if (nretries > 1 || nretries >= max_cswd_read_retries) {
+ if (nretries > 1 || nretries >= max_retries) {
pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
smp_processor_id(), watchdog->name, nretries);
}
--
2.34.1



2024-01-30 13:55:43

by Paul E. McKenney

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

On Mon, Jan 29, 2024 at 09:45:05PM +0800, Feng Tang wrote:
> There was a bug on one 8-socket server where the TSC was wrongly marked
> as 'unstable' and disabled during boot (it reproduced roughly once every
> 120 rounds of reboot tests), with the following log:
>
> clocksource: timekeeping watchdog on CPU227: wd-tsc-wd excessive read-back delay of 153560ns vs. limit of 125000ns,
> wd-wd read-back delay only 11440ns, attempt 3, marking tsc unstable
> tsc: Marking TSC unstable due to clocksource watchdog
> TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
> sched_clock: Marking unstable (119294969739, 159204297)<-(125446229205, -5992055152)
> clocksource: Checking clocksource tsc synchronization from CPU 319 to CPUs 0,99,136,180,210,542,601,896.
> clocksource: Switched to clocksource hpet
>
> The reason is that on platforms with many CPUs, there are sporadic large
> or huge read latencies when reading the watchdog/clocksource during boot
> or when the system is under a stressful workload, and both the frequency
> and the maximum value of these latencies increase with the number of
> CPUs. The current code already has logic to detect and filter such
> high-latency cases by reading the watchdog three times and checking the
> two deltas. Due to the randomness of the latency, there is a
> low-probability case where the first delta (latency) is big but the
> second delta is small and looks valid, which can escape the check.
> The 'max_cswd_read_retries' parameter exists to retry that check and
> cover this case, but its default value is only 2, which may not be
> enough for machines with a huge number of CPUs.
>
> So scale and enlarge the max retry number according to the number of
> CPUs to better filter out such latency noise on large systems. This has
> been verified over 4 days and 670 rounds of reboot tests on the
> 8-socket machine.
>
> Also add a sanity check for user input values of 'max_cswd_read_retries',
> make it self-adaptive by default, and provide a general helper for
> getting this max retry number, as suggested by Paul and Waiman.
>
> Cc: Paul E. McKenney <[email protected]>
> Cc: Waiman Long <[email protected]>
> Signed-off-by: Feng Tang <[email protected]>
> Tested-by: Jin Wang <[email protected]>

Tested-by: Paul E. McKenney <[email protected]>

> ---
> Changelog:
>
> since v2:
> * Fix the issue of the unexported helper function symbol being
> used by a kernel module (Waiman)
>
> since v1:
> * Add a sanity check for the user input value of 'max_cswd_read_retries'
> and a helper function for getting the max retry number (Paul)
> * Apply the same logic to watchdog test code (Waiman)
>
> include/linux/clocksource.h | 18 +++++++++++++++++-
> kernel/time/clocksource-wdtest.c | 12 +++++++-----
> kernel/time/clocksource.c | 10 ++++++----
> 3 files changed, 30 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
> index 1d42d4b17327..0483f7dd66a3 100644
> --- a/include/linux/clocksource.h
> +++ b/include/linux/clocksource.h
> @@ -291,7 +291,23 @@ static inline void timer_probe(void) {}
> #define TIMER_ACPI_DECLARE(name, table_id, fn) \
> ACPI_DECLARE_PROBE_ENTRY(timer, name, table_id, 0, NULL, 0, fn)
>
> -extern ulong max_cswd_read_retries;
> +extern long max_cswd_read_retries;
> +
> +static inline long clocksource_max_watchdog_read_retries(void)
> +{
> + long max_retries = max_cswd_read_retries;
> +
> + if (max_cswd_read_retries <= 0) {
> + /* sanity check for user input value */
> + if (max_cswd_read_retries != -1)
> + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
> + max_cswd_read_retries);
> +
> + max_retries = ilog2(num_online_cpus()) + 1;
> + }
> + return max_retries;
> +}
> +
> void clocksource_verify_percpu(struct clocksource *cs);
>
> #endif /* _LINUX_CLOCKSOURCE_H */
> diff --git a/kernel/time/clocksource-wdtest.c b/kernel/time/clocksource-wdtest.c
> index df922f49d171..c70cea3c44a1 100644
> --- a/kernel/time/clocksource-wdtest.c
> +++ b/kernel/time/clocksource-wdtest.c
> @@ -106,6 +106,7 @@ static int wdtest_func(void *arg)
> unsigned long j1, j2;
> char *s;
> int i;
> + long max_retries;
>
> schedule_timeout_uninterruptible(holdoff * HZ);
>
> @@ -139,18 +140,19 @@ static int wdtest_func(void *arg)
> WARN_ON_ONCE(time_before(j2, j1 + NSEC_PER_USEC));
>
> /* Verify tsc-like stability with various numbers of errors injected. */
> - for (i = 0; i <= max_cswd_read_retries + 1; i++) {
> - if (i <= 1 && i < max_cswd_read_retries)
> + max_retries = clocksource_max_watchdog_read_retries();
> + for (i = 0; i <= max_retries + 1; i++) {
> + if (i <= 1 && i < max_retries)
> s = "";
> - else if (i <= max_cswd_read_retries)
> + else if (i <= max_retries)
> s = ", expect message";
> else
> s = ", expect clock skew";
> - pr_info("--- Watchdog with %dx error injection, %lu retries%s.\n", i, max_cswd_read_retries, s);
> + pr_info("--- Watchdog with %dx error injection, %ld retries%s.\n", i, max_retries, s);
> WRITE_ONCE(wdtest_ktime_read_ndelays, i);
> schedule_timeout_uninterruptible(2 * HZ);
> WARN_ON_ONCE(READ_ONCE(wdtest_ktime_read_ndelays));
> - WARN_ON_ONCE((i <= max_cswd_read_retries) !=
> + WARN_ON_ONCE((i <= max_retries) !=
> !(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE));
> wdtest_ktime_clocksource_reset();
> }
> diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
> index c108ed8a9804..2e5a1d6c6712 100644
> --- a/kernel/time/clocksource.c
> +++ b/kernel/time/clocksource.c
> @@ -208,8 +208,8 @@ void clocksource_mark_unstable(struct clocksource *cs)
> spin_unlock_irqrestore(&watchdog_lock, flags);
> }
>
> -ulong max_cswd_read_retries = 2;
> -module_param(max_cswd_read_retries, ulong, 0644);
> +long max_cswd_read_retries = -1;
> +module_param(max_cswd_read_retries, long, 0644);
> EXPORT_SYMBOL_GPL(max_cswd_read_retries);
> static int verify_n_cpus = 8;
> module_param(verify_n_cpus, int, 0644);
> @@ -225,8 +225,10 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
> unsigned int nretries;
> u64 wd_end, wd_end2, wd_delta;
> int64_t wd_delay, wd_seq_delay;
> + long max_retries;
>
> - for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
> + max_retries = clocksource_max_watchdog_read_retries();
> + for (nretries = 0; nretries <= max_retries; nretries++) {
> local_irq_disable();
> *wdnow = watchdog->read(watchdog);
> *csnow = cs->read(cs);
> @@ -238,7 +240,7 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
> wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
> watchdog->shift);
> if (wd_delay <= WATCHDOG_MAX_SKEW) {
> - if (nretries > 1 || nretries >= max_cswd_read_retries) {
> + if (nretries > 1 || nretries >= max_retries) {
> pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
> smp_processor_id(), watchdog->name, nretries);
> }
> --
> 2.34.1
>

2024-01-31 00:54:01

by Waiman Long

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

On 1/29/24 08:45, Feng Tang wrote:
> There was a bug on one 8-socket server where the TSC was wrongly marked
> as 'unstable' and disabled during boot (it reproduced roughly once every
> 120 rounds of reboot tests), with the following log:
>
> clocksource: timekeeping watchdog on CPU227: wd-tsc-wd excessive read-back delay of 153560ns vs. limit of 125000ns,
> wd-wd read-back delay only 11440ns, attempt 3, marking tsc unstable
> tsc: Marking TSC unstable due to clocksource watchdog
> TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
> sched_clock: Marking unstable (119294969739, 159204297)<-(125446229205, -5992055152)
> clocksource: Checking clocksource tsc synchronization from CPU 319 to CPUs 0,99,136,180,210,542,601,896.
> clocksource: Switched to clocksource hpet
>
> The reason is that on platforms with many CPUs, there are sporadic large
> or huge read latencies when reading the watchdog/clocksource during boot
> or when the system is under a stressful workload, and both the frequency
> and the maximum value of these latencies increase with the number of
> CPUs. The current code already has logic to detect and filter such
> high-latency cases by reading the watchdog three times and checking the
> two deltas. Due to the randomness of the latency, there is a
> low-probability case where the first delta (latency) is big but the
> second delta is small and looks valid, which can escape the check.
> The 'max_cswd_read_retries' parameter exists to retry that check and
> cover this case, but its default value is only 2, which may not be
> enough for machines with a huge number of CPUs.
>
> So scale and enlarge the max retry number according to the number of
> CPUs to better filter out such latency noise on large systems. This has
> been verified over 4 days and 670 rounds of reboot tests on the
> 8-socket machine.
>
> Also add a sanity check for user input values of 'max_cswd_read_retries',
> make it self-adaptive by default, and provide a general helper for
> getting this max retry number, as suggested by Paul and Waiman.
>
> Cc: Paul E. McKenney <[email protected]>
> Cc: Waiman Long <[email protected]>
> Signed-off-by: Feng Tang <[email protected]>
> Tested-by: Jin Wang <[email protected]>
> ---
> Changelog:
>
> since v2:
> * Fix the issue of the unexported helper function symbol being
> used by a kernel module (Waiman)
>
> since v1:
> * Add a sanity check for the user input value of 'max_cswd_read_retries'
> and a helper function for getting the max retry number (Paul)
> * Apply the same logic to watchdog test code (Waiman)
>
> include/linux/clocksource.h | 18 +++++++++++++++++-
> kernel/time/clocksource-wdtest.c | 12 +++++++-----
> kernel/time/clocksource.c | 10 ++++++----
> 3 files changed, 30 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
> index 1d42d4b17327..0483f7dd66a3 100644
> --- a/include/linux/clocksource.h
> +++ b/include/linux/clocksource.h
> @@ -291,7 +291,23 @@ static inline void timer_probe(void) {}
> #define TIMER_ACPI_DECLARE(name, table_id, fn) \
> ACPI_DECLARE_PROBE_ENTRY(timer, name, table_id, 0, NULL, 0, fn)
>
> -extern ulong max_cswd_read_retries;
> +extern long max_cswd_read_retries;
> +
> +static inline long clocksource_max_watchdog_read_retries(void)
> +{
> + long max_retries = max_cswd_read_retries;
> +
> + if (max_cswd_read_retries <= 0) {
> + /* sanity check for user input value */
> + if (max_cswd_read_retries != -1)
> + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
> + max_cswd_read_retries);
> +
> + max_retries = ilog2(num_online_cpus()) + 1;
> + }
> + return max_retries;
> +}
> +
> void clocksource_verify_percpu(struct clocksource *cs);
>
> #endif /* _LINUX_CLOCKSOURCE_H */
> diff --git a/kernel/time/clocksource-wdtest.c b/kernel/time/clocksource-wdtest.c
> index df922f49d171..c70cea3c44a1 100644
> --- a/kernel/time/clocksource-wdtest.c
> +++ b/kernel/time/clocksource-wdtest.c
> @@ -106,6 +106,7 @@ static int wdtest_func(void *arg)
> unsigned long j1, j2;
> char *s;
> int i;
> + long max_retries;
>
> schedule_timeout_uninterruptible(holdoff * HZ);
>
> @@ -139,18 +140,19 @@ static int wdtest_func(void *arg)
> WARN_ON_ONCE(time_before(j2, j1 + NSEC_PER_USEC));
>
> /* Verify tsc-like stability with various numbers of errors injected. */
> - for (i = 0; i <= max_cswd_read_retries + 1; i++) {
> - if (i <= 1 && i < max_cswd_read_retries)
> + max_retries = clocksource_max_watchdog_read_retries();
> + for (i = 0; i <= max_retries + 1; i++) {
> + if (i <= 1 && i < max_retries)
> s = "";
> - else if (i <= max_cswd_read_retries)
> + else if (i <= max_retries)
> s = ", expect message";
> else
> s = ", expect clock skew";
> - pr_info("--- Watchdog with %dx error injection, %lu retries%s.\n", i, max_cswd_read_retries, s);
> + pr_info("--- Watchdog with %dx error injection, %ld retries%s.\n", i, max_retries, s);
> WRITE_ONCE(wdtest_ktime_read_ndelays, i);
> schedule_timeout_uninterruptible(2 * HZ);
> WARN_ON_ONCE(READ_ONCE(wdtest_ktime_read_ndelays));
> - WARN_ON_ONCE((i <= max_cswd_read_retries) !=
> + WARN_ON_ONCE((i <= max_retries) !=
> !(clocksource_wdtest_ktime.flags & CLOCK_SOURCE_UNSTABLE));
> wdtest_ktime_clocksource_reset();
> }
> diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
> index c108ed8a9804..2e5a1d6c6712 100644
> --- a/kernel/time/clocksource.c
> +++ b/kernel/time/clocksource.c
> @@ -208,8 +208,8 @@ void clocksource_mark_unstable(struct clocksource *cs)
> spin_unlock_irqrestore(&watchdog_lock, flags);
> }
>
> -ulong max_cswd_read_retries = 2;
> -module_param(max_cswd_read_retries, ulong, 0644);
> +long max_cswd_read_retries = -1;
> +module_param(max_cswd_read_retries, long, 0644);
> EXPORT_SYMBOL_GPL(max_cswd_read_retries);
> static int verify_n_cpus = 8;
> module_param(verify_n_cpus, int, 0644);
> @@ -225,8 +225,10 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
> unsigned int nretries;
> u64 wd_end, wd_end2, wd_delta;
> int64_t wd_delay, wd_seq_delay;
> + long max_retries;
>
> - for (nretries = 0; nretries <= max_cswd_read_retries; nretries++) {
> + max_retries = clocksource_max_watchdog_read_retries();
> + for (nretries = 0; nretries <= max_retries; nretries++) {
> local_irq_disable();
> *wdnow = watchdog->read(watchdog);
> *csnow = cs->read(cs);
> @@ -238,7 +240,7 @@ static enum wd_read_status cs_watchdog_read(struct clocksource *cs, u64 *csnow,
> wd_delay = clocksource_cyc2ns(wd_delta, watchdog->mult,
> watchdog->shift);
> if (wd_delay <= WATCHDOG_MAX_SKEW) {
> - if (nretries > 1 || nretries >= max_cswd_read_retries) {
> + if (nretries > 1 || nretries >= max_retries) {
> pr_warn("timekeeping watchdog on CPU%d: %s retried %d times before success\n",
> smp_processor_id(), watchdog->name, nretries);
> }
Reviewed-by: Waiman Long <[email protected]>


2024-02-19 11:32:13

by Thomas Gleixner

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

On Mon, Jan 29 2024 at 21:45, Feng Tang wrote:
> +static inline long clocksource_max_watchdog_read_retries(void)
> +{
> + long max_retries = max_cswd_read_retries;
> +
> + if (max_cswd_read_retries <= 0) {
> + /* sanity check for user input value */
> + if (max_cswd_read_retries != -1)
> + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
> + max_cswd_read_retries);
> +
> + max_retries = ilog2(num_online_cpus()) + 1;

I'm getting tired of these knobs and the horrors behind them. Why not
simply do the obvious:

retries = ilog2(num_online_cpus()) + 1;

and remove the knob altogether?

Thanks,

tglx

2024-02-19 14:51:31

by Feng Tang

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

Hi Thomas,

On Mon, Feb 19, 2024 at 12:32:05PM +0100, Thomas Gleixner wrote:
> On Mon, Jan 29 2024 at 21:45, Feng Tang wrote:
> > +static inline long clocksource_max_watchdog_read_retries(void)
> > +{
> > + long max_retries = max_cswd_read_retries;
> > +
> > + if (max_cswd_read_retries <= 0) {
> > + /* sanity check for user input value */
> > + if (max_cswd_read_retries != -1)
> > + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
> > + max_cswd_read_retries);
> > +
> > + max_retries = ilog2(num_online_cpus()) + 1;
>
> I'm getting tired of these knobs and the horrors behind them. Why not
> simply do the obvious:
>
> retries = ilog2(num_online_cpus()) + 1;
>
> and remove the knob altogether?

Thanks for the suggestion! Yes, this makes sense to me. IIUC, the
'max_cswd_read_retries' was introduced mainly to cover different
platforms' requirements, which could now be covered by the new
self-adaptive number.

If there is no concern from other developers, I will send a new
version in this direction.

Thanks,
Feng

>
> Thanks,
>
> tglx

2024-02-20 02:20:46

by Waiman Long

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers


On 2/19/24 09:37, Feng Tang wrote:
> Hi Thomas,
>
> On Mon, Feb 19, 2024 at 12:32:05PM +0100, Thomas Gleixner wrote:
>> On Mon, Jan 29 2024 at 21:45, Feng Tang wrote:
>>> +static inline long clocksource_max_watchdog_read_retries(void)
>>> +{
>>> + long max_retries = max_cswd_read_retries;
>>> +
>>> + if (max_cswd_read_retries <= 0) {
>>> + /* sanity check for user input value */
>>> + if (max_cswd_read_retries != -1)
>>> + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
>>> + max_cswd_read_retries);
>>> +
>>> + max_retries = ilog2(num_online_cpus()) + 1;
>> I'm getting tired of these knobs and the horrors behind them. Why not
>> simply do the obvious:
>>
>> retries = ilog2(num_online_cpus()) + 1;
>>
>> and remove the knob altogether?
> Thanks for the suggestion! Yes, this makes sense to me. IIUC, the
> 'max_cswd_read_retries' was introduced mainly to cover different
> platforms' requirements, which could now be covered by the new
> self-adaptive number.
>
> If there is no concern from other developers, I will send a new
> version in this direction.

I see no problem simplifying it.

Cheers,
Longman


2024-02-20 15:24:33

by Paul E. McKenney

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

On Mon, Feb 19, 2024 at 09:20:31PM -0500, Waiman Long wrote:
>
> On 2/19/24 09:37, Feng Tang wrote:
> > Hi Thomas,
> >
> > On Mon, Feb 19, 2024 at 12:32:05PM +0100, Thomas Gleixner wrote:
> > > On Mon, Jan 29 2024 at 21:45, Feng Tang wrote:
> > > > +static inline long clocksource_max_watchdog_read_retries(void)
> > > > +{
> > > > + long max_retries = max_cswd_read_retries;
> > > > +
> > > > + if (max_cswd_read_retries <= 0) {
> > > > + /* sanity check for user input value */
> > > > + if (max_cswd_read_retries != -1)
> > > > + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
> > > > + max_cswd_read_retries);
> > > > +
> > > > + max_retries = ilog2(num_online_cpus()) + 1;
> > > I'm getting tired of these knobs and the horrors behind them. Why not
> > > simply do the obvious:
> > >
> > > retries = ilog2(num_online_cpus()) + 1;
> > >
> > > and remove the knob altogether?
> > Thanks for the suggestion! Yes, this makes sense to me. IIUC, the
> > 'max_cswd_read_retries' was introduced mainly to cover different
> > platforms' requirements, which could now be covered by the new
> > self-adaptive number.
> >
> > If there is no concern from other developers, I will send a new
> > version in this direction.
>
> I see no problem simplifying it.

My guess is that we will eventually end up with something like this:

retries = ilog2(num_online_cpus()) / 2 + 1;

but I am not at all opposed to starting without the division by 2.

Thanx, Paul

2024-02-20 15:41:40

by Feng Tang

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

On Tue, Feb 20, 2024 at 07:24:27AM -0800, Paul E. McKenney wrote:
> On Mon, Feb 19, 2024 at 09:20:31PM -0500, Waiman Long wrote:
> >
> > On 2/19/24 09:37, Feng Tang wrote:
> > > Hi Thomas,
> > >
> > > On Mon, Feb 19, 2024 at 12:32:05PM +0100, Thomas Gleixner wrote:
> > > > On Mon, Jan 29 2024 at 21:45, Feng Tang wrote:
> > > > > +static inline long clocksource_max_watchdog_read_retries(void)
> > > > > +{
> > > > > + long max_retries = max_cswd_read_retries;
> > > > > +
> > > > > + if (max_cswd_read_retries <= 0) {
> > > > > + /* sanity check for user input value */
> > > > > + if (max_cswd_read_retries != -1)
> > > > > + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
> > > > > + max_cswd_read_retries);
> > > > > +
> > > > > + max_retries = ilog2(num_online_cpus()) + 1;
> > > > I'm getting tired of these knobs and the horrors behind them. Why not
> > > > simply do the obvious:
> > > >
> > > > retries = ilog2(num_online_cpus()) + 1;
> > > >
> > > > and remove the knob altogether?
> > > Thanks for the suggestion! Yes, this makes sense to me. IIUC, the
> > > 'max_cswd_read_retries' was introduced mainly to cover different
> > > platforms' requirements, which could now be covered by the new
> > > self-adaptive number.
> > >
> > > If there is no concern from other developers, I will send a new
> > > version in this direction.
> >
> > I see no problem simplifying it.
>
> My guess is that we will eventually end up with something like this:
>
> retries = ilog2(num_online_cpus()) / 2 + 1;

Good point! Initially, when writing the patch, I did search for an
'ilog4' API :) since the ilog2 of that 8-socket machine is about 10,
which is more than enough.

Thanks,
Feng

>
> but I am not at all opposed to starting without the division by 2.
>
> Thanx, Paul

2024-02-20 17:10:42

by Paul E. McKenney

Subject: Re: [PATCH v3] clocksource: Scale the max retry number of watchdog read according to CPU numbers

On Tue, Feb 20, 2024 at 11:27:21PM +0800, Feng Tang wrote:
> On Tue, Feb 20, 2024 at 07:24:27AM -0800, Paul E. McKenney wrote:
> > On Mon, Feb 19, 2024 at 09:20:31PM -0500, Waiman Long wrote:
> > >
> > > On 2/19/24 09:37, Feng Tang wrote:
> > > > Hi Thomas,
> > > >
> > > > On Mon, Feb 19, 2024 at 12:32:05PM +0100, Thomas Gleixner wrote:
> > > > > On Mon, Jan 29 2024 at 21:45, Feng Tang wrote:
> > > > > > +static inline long clocksource_max_watchdog_read_retries(void)
> > > > > > +{
> > > > > > + long max_retries = max_cswd_read_retries;
> > > > > > +
> > > > > > + if (max_cswd_read_retries <= 0) {
> > > > > > + /* sanity check for user input value */
> > > > > > + if (max_cswd_read_retries != -1)
> > > > > > + pr_warn_once("max_cswd_read_retries was set with an invalid number: %ld\n",
> > > > > > + max_cswd_read_retries);
> > > > > > +
> > > > > > + max_retries = ilog2(num_online_cpus()) + 1;
> > > > > I'm getting tired of these knobs and the horrors behind them. Why not
> > > > > simply do the obvious:
> > > > >
> > > > > retries = ilog2(num_online_cpus()) + 1;
> > > > >
> > > > > and remove the knob altogether?
> > > > Thanks for the suggestion! Yes, this makes sense to me. IIUC, the
> > > > 'max_cswd_read_retries' was introduced mainly to cover different
> > > > platforms' requirements, which could now be covered by the new
> > > > self-adaptive number.
> > > >
> > > > If there is no concern from other developers, I will send a new
> > > > version in this direction.
> > >
> > > I see no problem simplifying it.
> >
> > My guess is that we will eventually end up with something like this:
> >
> > retries = ilog2(num_online_cpus()) / 2 + 1;
>
> Good point! Initially, when writing the patch, I did search for an
> 'ilog4' API :) since the ilog2 of that 8-socket machine is about 10,
> which is more than enough.

I am also not averse to starting with the above, either. ;-)

Thanx, Paul

> Thanks,
> Feng
>
> >
> > but I am not at all opposed to starting without the division by 2.
> >
> > Thanx, Paul