2023-04-04 02:01:10

by Ye Bin

Subject: [PATCH 0/2] fix dying cpu compare race

From: Ye Bin <[email protected]>

Ye Bin (2):
cpu/hotplug: introduce 'num_dying_cpus' to get dying CPUs count
lib/percpu_counter: fix dying cpu compare race

include/linux/cpumask.h | 20 ++++++++++++++++----
kernel/cpu.c            |  2 ++
lib/percpu_counter.c    | 11 ++++++++++-
3 files changed, 28 insertions(+), 5 deletions(-)

--
2.31.1


2023-04-04 02:01:55

by Ye Bin

Subject: [PATCH 2/2] lib/percpu_counter: fix dying cpu compare race

From: Ye Bin <[email protected]>

In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
condition between a cpu dying and percpu_counter_sum() iterating online
CPUs was identified.
Actually, the same race condition exists between a cpu dying and
__percpu_counter_compare(), which uses 'num_online_cpus()' for its quick
judgment. 'num_online_cpus()' is decreased before 'percpu_counter_cpu_dead()'
is called, so the quick judgment may return an incorrect result.
To solve the above issue, also count dying CPUs when doing the quick
judgment in __percpu_counter_compare().

Signed-off-by: Ye Bin <[email protected]>
---
lib/percpu_counter.c | 11 ++++++++++-
1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
index 5004463c4f9f..399840cb0012 100644
--- a/lib/percpu_counter.c
+++ b/lib/percpu_counter.c
@@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
 	return 0;
 }
 
+static __always_inline unsigned int num_count_cpus(void)
+{
+#ifdef CONFIG_HOTPLUG_CPU
+	return (num_online_cpus() + num_dying_cpus());
+#else
+	return num_online_cpus();
+#endif
+}
+
 /*
  * Compare counter against given value.
  * Return 1 if greater, 0 if equal and -1 if less
@@ -237,7 +246,7 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
 
 	count = percpu_counter_read(fbc);
 	/* Check to see if rough count will be sufficient for comparison */
-	if (abs(count - rhs) > (batch * num_online_cpus())) {
+	if (abs(count - rhs) > (batch * num_count_cpus())) {
 		if (count > rhs)
 			return 1;
 		else
--
2.31.1

2023-04-04 02:01:55

by Ye Bin

Subject: [PATCH 1/2] cpu/hotplug: introduce 'num_dying_cpus' to get dying CPUs count

From: Ye Bin <[email protected]>

Introduce a '__num_dying_cpus' variable to cache the number of dying CPUs
in the core, and have num_dying_cpus() return the cached value.

Signed-off-by: Ye Bin <[email protected]>
---
include/linux/cpumask.h | 20 ++++++++++++++++----
kernel/cpu.c            |  2 ++
2 files changed, 18 insertions(+), 4 deletions(-)

diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
index 2a61ddcf8321..8127fd598f51 100644
--- a/include/linux/cpumask.h
+++ b/include/linux/cpumask.h
@@ -135,6 +135,8 @@ extern struct cpumask __cpu_dying_mask;
 
 extern atomic_t __num_online_cpus;
 
+extern atomic_t __num_dying_cpus;
+
 extern cpumask_t cpus_booted_once_mask;
 
 static __always_inline void cpu_max_bits_warn(unsigned int cpu, unsigned int bits)
@@ -1018,10 +1020,14 @@ set_cpu_active(unsigned int cpu, bool active)
 static __always_inline void
 set_cpu_dying(unsigned int cpu, bool dying)
 {
-	if (dying)
-		cpumask_set_cpu(cpu, &__cpu_dying_mask);
-	else
-		cpumask_clear_cpu(cpu, &__cpu_dying_mask);
+	if (dying) {
+		if (!cpumask_test_and_set_cpu(cpu, &__cpu_dying_mask))
+			atomic_inc(&__num_dying_cpus);
+	}
+	else {
+		if (cpumask_test_and_clear_cpu(cpu, &__cpu_dying_mask))
+			atomic_dec(&__num_dying_cpus);
+	}
 }
 
 /**
@@ -1073,6 +1079,11 @@ static __always_inline unsigned int num_online_cpus(void)
 {
 	return arch_atomic_read(&__num_online_cpus);
 }
+
+static __always_inline unsigned int num_dying_cpus(void)
+{
+	return arch_atomic_read(&__num_dying_cpus);
+}
 #define num_possible_cpus()	cpumask_weight(cpu_possible_mask)
 #define num_present_cpus()	cpumask_weight(cpu_present_mask)
 #define num_active_cpus()	cpumask_weight(cpu_active_mask)
@@ -1108,6 +1119,7 @@ static __always_inline bool cpu_dying(unsigned int cpu)
 #define num_possible_cpus()	1U
 #define num_present_cpus()	1U
 #define num_active_cpus()	1U
+#define num_dying_cpus()	0U
 
 static __always_inline bool cpu_online(unsigned int cpu)
 {
diff --git a/kernel/cpu.c b/kernel/cpu.c
index f4a2c5845bcb..1c96c04cb259 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -2662,6 +2662,8 @@ EXPORT_SYMBOL(__cpu_dying_mask);
 atomic_t __num_online_cpus __read_mostly;
 EXPORT_SYMBOL(__num_online_cpus);
 
+atomic_t __num_dying_cpus __read_mostly;
+
 void init_cpu_present(const struct cpumask *src)
 {
 	cpumask_copy(&__cpu_present_mask, src);
--
2.31.1

2023-04-04 02:12:40

by Yury Norov

Subject: Re: [PATCH 0/2] fix dying cpu compare race

On Tue, Apr 04, 2023 at 09:42:04AM +0800, Ye Bin wrote:
> From: Ye Bin <[email protected]>

There should be a description of your series here.

> Ye Bin (2):
> cpu/hotplug: introduce 'num_dying_cpus' to get dying CPUs count
> lib/percpu_counter: fix dying cpu compare race
>
> include/linux/cpumask.h | 20 ++++++++++++++++----
> kernel/cpu.c | 2 ++
> lib/percpu_counter.c | 11 ++++++++++-
> 3 files changed, 28 insertions(+), 5 deletions(-)
>
> --
> 2.31.1

2023-04-04 02:27:23

by Yury Norov

Subject: Re: [PATCH 1/2] cpu/hotplug: introduce 'num_dying_cpus' to get dying CPUs count

On Tue, Apr 04, 2023 at 09:42:05AM +0800, Ye Bin wrote:
> From: Ye Bin <[email protected]>
>
> Introduce a '__num_dying_cpus' variable to cache the number of dying CPUs
> in the core, and have num_dying_cpus() return the cached value.
>
> Signed-off-by: Ye Bin <[email protected]>
> ---
> include/linux/cpumask.h | 20 ++++++++++++++++----
> kernel/cpu.c | 2 ++
> 2 files changed, 18 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/cpumask.h b/include/linux/cpumask.h
> index 2a61ddcf8321..8127fd598f51 100644
> --- a/include/linux/cpumask.h
> +++ b/include/linux/cpumask.h
> @@ -135,6 +135,8 @@ extern struct cpumask __cpu_dying_mask;
>
> extern atomic_t __num_online_cpus;
>
> +extern atomic_t __num_dying_cpus;
> +
> extern cpumask_t cpus_booted_once_mask;
>
> static __always_inline void cpu_max_bits_warn(unsigned int cpu, unsigned int bits)
> @@ -1018,10 +1020,14 @@ set_cpu_active(unsigned int cpu, bool active)
> static __always_inline void
> set_cpu_dying(unsigned int cpu, bool dying)
> {
> - if (dying)
> - cpumask_set_cpu(cpu, &__cpu_dying_mask);
> - else
> - cpumask_clear_cpu(cpu, &__cpu_dying_mask);
> + if (dying) {
> + if (!cpumask_test_and_set_cpu(cpu, &__cpu_dying_mask))
> + atomic_inc(&__num_dying_cpus);
> + }
> + else {
> + if (cpumask_test_and_clear_cpu(cpu, &__cpu_dying_mask))
> + atomic_dec(&__num_dying_cpus);
> + }
> }

The corresponding set_cpu_online() is implemented in a C file, probably
for a reason. Are you sure the similar function for the dying mask should
reside in a header? If so, can you share your reasoning?

Regardless, now that you've added a function identical to set_cpu_online(),
I think it's worth making this a general approach:

void set_cpu_counted(unsigned int cpu, bool set,
                     struct cpumask *mask, atomic_t *cnt);

void __always_inline set_cpu_online(unsigned int cpu, bool online)
{
        set_cpu_counted(cpu, online, &__cpu_online_mask, &__num_online_cpus);
}

void __always_inline set_cpu_dying(unsigned int cpu, bool dying)
{
        set_cpu_counted(cpu, dying, &__cpu_dying_mask, &__num_dying_cpus);
}
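
For instance, a minimal sketch of such a helper (assuming it would live
next to set_cpu_online() in kernel/cpu.c, mirroring the test-and-set
pairing from your hunk above):

void set_cpu_counted(unsigned int cpu, bool set,
                     struct cpumask *mask, atomic_t *cnt)
{
        /* keep the mask and its cached weight in sync */
        if (set) {
                if (!cpumask_test_and_set_cpu(cpu, mask))
                        atomic_inc(cnt);
        } else {
                if (cpumask_test_and_clear_cpu(cpu, mask))
                        atomic_dec(cnt);
        }
}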

2023-04-04 03:01:46

by Yury Norov

Subject: Re: [PATCH 2/2] lib/percpu_counter: fix dying cpu compare race

On Tue, Apr 04, 2023 at 09:42:06AM +0800, Ye Bin wrote:
> From: Ye Bin <[email protected]>
>
> In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
> condition between a cpu dying and percpu_counter_sum() iterating online
> CPUs was identified.
> Actually, the same race condition exists between a cpu dying and
> __percpu_counter_compare(), which uses 'num_online_cpus()' for its quick
> judgment. 'num_online_cpus()' is decreased before 'percpu_counter_cpu_dead()'
> is called, so the quick judgment may return an incorrect result.
> To solve the above issue, also count dying CPUs when doing the quick
> judgment in __percpu_counter_compare().

Not sure I completely understood the race you are describing. All CPU
accounting is protected with percpu_counters_lock. Is it a real race
that you've faced, or hypothetical? If it's real, can you share stack
traces?

> Signed-off-by: Ye Bin <[email protected]>
> ---
> lib/percpu_counter.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> index 5004463c4f9f..399840cb0012 100644
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
> return 0;
> }
>
> +static __always_inline unsigned int num_count_cpus(void)

This doesn't look like a good name. Maybe num_offline_cpus?

> +{
> +#ifdef CONFIG_HOTPLUG_CPU
> + return (num_online_cpus() + num_dying_cpus());

'return' is not a function; the parentheses around the expression are
not needed.

Generally speaking, a sequence of atomic operations is not itself an
atomic operation, so the above doesn't look correct. I don't think it's
possible to implement raceless accounting based on two separate
counters.

Most probably, you'd have to use the same approach as in 8b57b11cca88:

        lock();
        for_each_cpu_or(cpu, cpu_online_mask, cpu_dying_mask)
                cnt++;
        unlock();

And if so, I'd suggest implementing cpumask_weight_or() for that.
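
Something like this, perhaps (a sketch only; it assumes a matching
bitmap_weight_or() is added on the bitmap side):

static __always_inline unsigned int
cpumask_weight_or(const struct cpumask *srcp1, const struct cpumask *srcp2)
{
        return bitmap_weight_or(cpumask_bits(srcp1), cpumask_bits(srcp2),
                                nr_cpumask_bits);
}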

> +#else
> + return num_online_cpus();
> +#endif
> +}
> +
> /*
> * Compare counter against given value.
> * Return 1 if greater, 0 if equal and -1 if less
> @@ -237,7 +246,7 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
>
> count = percpu_counter_read(fbc);
> /* Check to see if rough count will be sufficient for comparison */
> - if (abs(count - rhs) > (batch * num_online_cpus())) {
> + if (abs(count - rhs) > (batch * num_count_cpus())) {
> if (count > rhs)
> return 1;
> else
> --
> 2.31.1

2023-04-04 06:09:39

by Dave Chinner

Subject: Re: [PATCH 2/2] lib/percpu_counter: fix dying cpu compare race

On Tue, Apr 04, 2023 at 09:42:06AM +0800, Ye Bin wrote:
> From: Ye Bin <[email protected]>
>
> In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
> condition between a cpu dying and percpu_counter_sum() iterating online
> CPUs was identified.
> Actually, the same race condition exists between a cpu dying and
> __percpu_counter_compare(), which uses 'num_online_cpus()' for its quick
> judgment. 'num_online_cpus()' is decreased before 'percpu_counter_cpu_dead()'
> is called, so the quick judgment may return an incorrect result.
> To solve the above issue, also count dying CPUs when doing the quick
> judgment in __percpu_counter_compare().
>
> Signed-off-by: Ye Bin <[email protected]>
> ---
> lib/percpu_counter.c | 11 ++++++++++-
> 1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> index 5004463c4f9f..399840cb0012 100644
> --- a/lib/percpu_counter.c
> +++ b/lib/percpu_counter.c
> @@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
> return 0;
> }
>
> +static __always_inline unsigned int num_count_cpus(void)
> +{
> +#ifdef CONFIG_HOTPLUG_CPU
> + return (num_online_cpus() + num_dying_cpus());
> +#else
> + return num_online_cpus();
> +#endif
> +}
> +
> /*
> * Compare counter against given value.
> * Return 1 if greater, 0 if equal and -1 if less
> @@ -237,7 +246,7 @@ int __percpu_counter_compare(struct percpu_counter *fbc, s64 rhs, s32 batch)
>
> count = percpu_counter_read(fbc);
> /* Check to see if rough count will be sufficient for comparison */
> - if (abs(count - rhs) > (batch * num_online_cpus())) {
> + if (abs(count - rhs) > (batch * num_count_cpus())) {

What problem is this actually fixing? You haven't explained how the
problem you are fixing manifests in the commit message or the cover
letter.

We generally don't care about the accuracy of the comparison here
because we've used percpu_counter_read() which is completely racy
against on-going updates. e.g. we can get preempted between
percpu_counter_read() and the check and so the value can be
completely wrong by the time we actually check it. Hence checking
online vs online+dying really doesn't fix any of the common race
conditions that occur here.
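
To illustrate with the existing code, annotated (a sketch of the window):

        count = percpu_counter_read(fbc);       /* racy snapshot */
        /* preemption window: other CPUs keep calling percpu_counter_add() */
        if (abs(count - rhs) > (batch * num_online_cpus())) {
                ...     /* 'count' may already be stale by the check */
        }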

Even if we fall through to using percpu_counter_sum() for the
comparison value, that is still not accurate in the face of racing
updates to the counter because percpu_counter_sum only prevents
the percpu counter from being folded back into the global sum
while it is running. The comparison is still not precise or accurate.

IOWs, the result of this whole function is not guaranteed to be
precise or accurate; percpu counters cannot ever be relied on for
exact threshold detection unless there is some form of external
global counter synchronisation being used for those comparisons
(e.g. a global spinlock held around all the percpu_counter_add()
modifications as well as the __percpu_counter_compare() call).
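
As a sketch, with made-up names, that means every updater and every
comparison serialise on the same lock:

        spin_lock(&counter_ext_lock);   /* hypothetical external lock */
        percpu_counter_add(&fbc, delta);
        ret = __percpu_counter_compare(&fbc, threshold, batch);
        spin_unlock(&counter_ext_lock);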

That's always been the issue with unsynchronised percpu counters -
cpus dying just don't matter here because there are many other more
common race conditions that prevent accurate, race free comparison
of per-cpu counters.

Cheers,

Dave.
--
Dave Chinner
[email protected]

2023-04-10 17:50:07

by Yury Norov

Subject: Re: [PATCH 2/2] lib/percpu_counter: fix dying cpu compare race

On Tue, Apr 04, 2023 at 02:54:25PM +0800, yebin (H) wrote:
>
>
> On 2023/4/4 10:50, Yury Norov wrote:
> > On Tue, Apr 04, 2023 at 09:42:06AM +0800, Ye Bin wrote:
> > > From: Ye Bin <[email protected]>
> > >
> > > In commit 8b57b11cca88 ("pcpcntrs: fix dying cpu summation race") a race
> > > condition between a cpu dying and percpu_counter_sum() iterating online
> > > CPUs was identified.
> > > Actually, the same race condition exists between a cpu dying and
> > > __percpu_counter_compare(), which uses 'num_online_cpus()' for its quick
> > > judgment. 'num_online_cpus()' is decreased before 'percpu_counter_cpu_dead()'
> > > is called, so the quick judgment may return an incorrect result.
> > > To solve the above issue, also count dying CPUs when doing the quick
> > > judgment in __percpu_counter_compare().
> > Not sure I completely understood the race you are describing. All CPU
> > accounting is protected with percpu_counters_lock. Is it a real race
> > that you've faced, or hypothetical? If it's real, can you share stack
> > traces?
> > > Signed-off-by: Ye Bin <[email protected]>
> > > ---
> > > lib/percpu_counter.c | 11 ++++++++++-
> > > 1 file changed, 10 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/percpu_counter.c b/lib/percpu_counter.c
> > > index 5004463c4f9f..399840cb0012 100644
> > > --- a/lib/percpu_counter.c
> > > +++ b/lib/percpu_counter.c
> > > @@ -227,6 +227,15 @@ static int percpu_counter_cpu_dead(unsigned int cpu)
> > > return 0;
> > > }
> > > +static __always_inline unsigned int num_count_cpus(void)
> > This doesn't look like a good name. Maybe num_offline_cpus?
> >
> > > +{
> > > +#ifdef CONFIG_HOTPLUG_CPU
> > > + return (num_online_cpus() + num_dying_cpus());
> > ^ ^
> > 'return' is not a function. Braces are not needed
> >
> > Generally speaking, a sequence of atomic operations is not an atomic
> > operation, so the above doesn't look correct. I don't think that it
> > would be possible to implement raceless accounting based on 2 separate
> > counters.
> Yes, there is indeed a concurrency issue with doing so here. But I saw
> that the process first sets up the dying mask and then reduces the
> number of online CPUs. The total may be larger than the actual value,
> so we may fall back to the slow path. But this won't cause any
> problems.

This sounds like an implementation detail. If it changes in the future,
your accounting will get broken.

If you think it's consistent behavior that will be preserved in the
future, then it must be properly commented in your patch.
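
For example, something along these lines in num_count_cpus() (a sketch
of such a comment, not wording I insist on):

        /*
         * A CPU is set in __cpu_dying_mask before __num_online_cpus is
         * decremented, so online + dying can only transiently over-count.
         * Over-counting is harmless here: at worst it sends us to the
         * slow path.
         */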

Thanks,
Yury