2023-07-21 13:43:17

by Baoquan He

Subject: [PATCH 0/3] percpu: some trivial cleanup patches

There's a leftover issue about percpu code in my mailbox, linked
below. After rechecking it and considering Dennis's comment, I made
an attempt to fix it with patch 3.

https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u

Patches 1 and 2 are trivial cleanups made while reading the percpu code.

Baoquan He (3):
mm/percpu.c: remove redundant check
mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
bit
mm/percpu.c: print error message too if atomic alloc failed

mm/percpu.c | 39 +++++++++++++++++----------------------
1 file changed, 17 insertions(+), 22 deletions(-)

--
2.34.1



2023-07-21 13:48:04

by Baoquan He

Subject: [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed

The variable 'err' is assigned an error message if atomic alloc
failed, but it never gets printed when is_atomic is true.

Change this to print the error message for a failed atomic alloc as
well, while avoiding the dump_stack() call in that case.

Signed-off-by: Baoquan He <[email protected]>
---
mm/percpu.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index c25b058a46ad..74f75ef0ad58 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -1890,13 +1890,15 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
fail:
trace_percpu_alloc_percpu_fail(reserved, is_atomic, size, align);

- if (!is_atomic && do_warn && warn_limit) {
+ if (do_warn && warn_limit) {
pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n",
size, align, is_atomic, err);
- dump_stack();
+ if (is_atomic)
+ dump_stack();
if (!--warn_limit)
pr_info("limit reached, disable warning\n");
}
+
if (is_atomic) {
/* see the flag handling in pcpu_balance_workfn() */
pcpu_atomic_alloc_failed = true;
--
2.34.1


2023-07-21 13:49:51

by Baoquan He

Subject: [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit

This removes the need for the local variable 'chunk', and simplifies
the code calling pcpu_alloc_first_chunk() to initialize the reserved
and dynamic chunks.

Signed-off-by: Baoquan He <[email protected]>
---
mm/percpu.c | 32 +++++++++++++-------------------
1 file changed, 13 insertions(+), 19 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 1480bf283d11..c25b058a46ad 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2581,7 +2581,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
{
size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
size_t static_size, dyn_size;
- struct pcpu_chunk *chunk;
unsigned long *group_offsets;
size_t *group_sizes;
unsigned long *unit_off;
@@ -2697,7 +2696,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
pcpu_atom_size = ai->atom_size;
- pcpu_chunk_struct_size = struct_size(chunk, populated,
+ pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
BITS_TO_LONGS(pcpu_unit_pages));

pcpu_stats_save_ai(ai);
@@ -2735,28 +2734,23 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,

/*
* Initialize first chunk.
- * If the reserved_size is non-zero, this initializes the reserved
- * chunk. If the reserved_size is zero, the reserved chunk is NULL
- * and the dynamic region is initialized here. The first chunk,
- * pcpu_first_chunk, will always point to the chunk that serves
- * the dynamic region.
+ * If the reserved_size is non-zero, initializes the reserved chunk
+ * firstly. If the reserved_size is zero, the reserved chunk is NULL
+ * and the dynamic region is initialized directly. The first chunk,
+ * pcpu_first_chunk, will always point to the chunk that serves the
+ * dynamic region.
*/
tmp_addr = (unsigned long)base_addr + static_size;
- map_size = ai->reserved_size ?: dyn_size;
- chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
-
- /* init dynamic chunk if necessary */
if (ai->reserved_size) {
- pcpu_reserved_chunk = chunk;
-
- tmp_addr = (unsigned long)base_addr + static_size +
- ai->reserved_size;
- map_size = dyn_size;
- chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
+ map_size = ai->reserved_size;
+ pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
}

- /* link the first chunk in */
- pcpu_first_chunk = chunk;
+ /* init dynamic chunk if necessary */
+ tmp_addr += (unsigned long)ai->reserved_size;
+ map_size = dyn_size;
+ pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
+
pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
pcpu_chunk_relocate(pcpu_first_chunk, -1);

--
2.34.1


2023-07-21 21:17:10

by Dennis Zhou

Subject: Re: [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed

On Fri, Jul 21, 2023 at 09:18:00PM +0800, Baoquan He wrote:
> The variable 'err' is assigned an error message if atomic alloc
> failed, but it never gets printed when is_atomic is true.
>
> Change this to print the error message for a failed atomic alloc as
> well, while avoiding the dump_stack() call in that case.
>
> Signed-off-by: Baoquan He <[email protected]>
> ---
> mm/percpu.c | 6 ++++--
> 1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index c25b058a46ad..74f75ef0ad58 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -1890,13 +1890,15 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
> fail:
> trace_percpu_alloc_percpu_fail(reserved, is_atomic, size, align);
>
> - if (!is_atomic && do_warn && warn_limit) {
> + if (do_warn && warn_limit) {
> pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n",
> size, align, is_atomic, err);
> - dump_stack();
> + if (is_atomic)
> + dump_stack();

This should be (!is_atomic) to preserve the current logic?

> if (!--warn_limit)
> pr_info("limit reached, disable warning\n");
> }
> +
> if (is_atomic) {
> /* see the flag handling in pcpu_balance_workfn() */
> pcpu_atomic_alloc_failed = true;
> --
> 2.34.1
>

Thanks,
Dennis

2023-07-21 21:24:26

by Dennis Zhou

Subject: Re: [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit

Hello,

On Fri, Jul 21, 2023 at 09:17:59PM +0800, Baoquan He wrote:
> This removes the need for the local variable 'chunk', and simplifies
> the code calling pcpu_alloc_first_chunk() to initialize the reserved
> and dynamic chunks.
>
> Signed-off-by: Baoquan He <[email protected]>
> ---
> mm/percpu.c | 32 +++++++++++++-------------------
> 1 file changed, 13 insertions(+), 19 deletions(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 1480bf283d11..c25b058a46ad 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -2581,7 +2581,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> {
> size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
> size_t static_size, dyn_size;
> - struct pcpu_chunk *chunk;
> unsigned long *group_offsets;
> size_t *group_sizes;
> unsigned long *unit_off;
> @@ -2697,7 +2696,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
> pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
> pcpu_atom_size = ai->atom_size;
> - pcpu_chunk_struct_size = struct_size(chunk, populated,
> + pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
> BITS_TO_LONGS(pcpu_unit_pages));
>
> pcpu_stats_save_ai(ai);
> @@ -2735,28 +2734,23 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
>
> /*
> * Initialize first chunk.
> - * If the reserved_size is non-zero, this initializes the reserved
> - * chunk. If the reserved_size is zero, the reserved chunk is NULL
> - * and the dynamic region is initialized here. The first chunk,
> - * pcpu_first_chunk, will always point to the chunk that serves
> - * the dynamic region.
> + * If the reserved_size is non-zero, initializes the reserved chunk
^initialize
> + * firstly. If the reserved_size is zero, the reserved chunk is NULL
^ can remove firstly.
> + * and the dynamic region is initialized directly. The first chunk,
> + * pcpu_first_chunk, will always point to the chunk that serves the
> + * dynamic region.

Reading this, I'll probably reword this comment to explain the reserved
chunk better.

> */
> tmp_addr = (unsigned long)base_addr + static_size;
> - map_size = ai->reserved_size ?: dyn_size;
> - chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> -
> - /* init dynamic chunk if necessary */
> if (ai->reserved_size) {
> - pcpu_reserved_chunk = chunk;
> -
> - tmp_addr = (unsigned long)base_addr + static_size +
> - ai->reserved_size;
> - map_size = dyn_size;
> - chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> + map_size = ai->reserved_size;
> + pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> }
>
> - /* link the first chunk in */
> - pcpu_first_chunk = chunk;
> + /* init dynamic chunk if necessary */
> + tmp_addr += (unsigned long)ai->reserved_size;

I'm not a big fan of += the tmp_addr as I personally find it easier to
read if it's just laid out explicitly.

> + map_size = dyn_size;
> + pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> +
> pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
> pcpu_chunk_relocate(pcpu_first_chunk, -1);
>
> --
> 2.34.1
>

Overall, I think this is good, but I'd go 1 step further and get rid of
map_size. Regarding tmp_addr, I'd prefer if we kept all the math
together.

Thanks,
Dennis

2023-07-21 21:25:07

by Dennis Zhou

Subject: Re: [PATCH 0/3] percpu: some trivial cleanup patches

Hello,

On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> There's a leftover issue about percpu code in my mailbox, linked
> below. After rechecking it and considering Dennis's comment, I made
> an attempt to fix it with patch 3.
>
> https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
>
> Patch 1 and 2 are trivial clean up patches when reading percpu code.
>
> Baoquan He (3):
> mm/percpu.c: remove redundant check
> mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
> bit
> mm/percpu.c: print error message too if atomic alloc failed
>
> mm/percpu.c | 39 +++++++++++++++++----------------------
> 1 file changed, 17 insertions(+), 22 deletions(-)
>
> --
> 2.34.1
>

Thanks for these. I left a few comments. I think I might have some stuff
for v6.6, I'll figure that out in a couple days. If that's so, I can
pull 1, probably massage 2 and send that out again, and then I think
you'll need to resend 3.

Thanks,
Dennis

2023-07-22 01:38:39

by Baoquan He

Subject: Re: [PATCH 2/3] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little bit

On 07/21/23 at 02:01pm, Dennis Zhou wrote:
> Hello,
>
> On Fri, Jul 21, 2023 at 09:17:59PM +0800, Baoquan He wrote:
> > This removes the need for the local variable 'chunk', and simplifies
> > the code calling pcpu_alloc_first_chunk() to initialize the reserved
> > and dynamic chunks.
> >
> > Signed-off-by: Baoquan He <[email protected]>
> > ---
> > mm/percpu.c | 32 +++++++++++++-------------------
> > 1 file changed, 13 insertions(+), 19 deletions(-)
> >
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index 1480bf283d11..c25b058a46ad 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -2581,7 +2581,6 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> > {
> > size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
> > size_t static_size, dyn_size;
> > - struct pcpu_chunk *chunk;
> > unsigned long *group_offsets;
> > size_t *group_sizes;
> > unsigned long *unit_off;
> > @@ -2697,7 +2696,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> > pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
> > pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
> > pcpu_atom_size = ai->atom_size;
> > - pcpu_chunk_struct_size = struct_size(chunk, populated,
> > + pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
> > BITS_TO_LONGS(pcpu_unit_pages));
> >
> > pcpu_stats_save_ai(ai);
> > @@ -2735,28 +2734,23 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
> >
> > /*
> > * Initialize first chunk.
> > - * If the reserved_size is non-zero, this initializes the reserved
> > - * chunk. If the reserved_size is zero, the reserved chunk is NULL
> > - * and the dynamic region is initialized here. The first chunk,
> > - * pcpu_first_chunk, will always point to the chunk that serves
> > - * the dynamic region.
> > + * If the reserved_size is non-zero, initializes the reserved chunk
> ^initialize
> > + * firstly. If the reserved_size is zero, the reserved chunk is NULL
> ^ can remove firstly.
> > + * and the dynamic region is initialized directly. The first chunk,
> > + * pcpu_first_chunk, will always point to the chunk that serves the
> > + * dynamic region.
>
> Reading this, I'll probably reword this comment to explain the reserved
> chunk better.

Agree. The expression is a little messy and too colloquial.

>
> > */
> > tmp_addr = (unsigned long)base_addr + static_size;
> > - map_size = ai->reserved_size ?: dyn_size;
> > - chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> > -
> > - /* init dynamic chunk if necessary */
> > if (ai->reserved_size) {
> > - pcpu_reserved_chunk = chunk;
> > -
> > - tmp_addr = (unsigned long)base_addr + static_size +
> > - ai->reserved_size;
> > - map_size = dyn_size;
> > - chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> > + map_size = ai->reserved_size;
> > + pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> > }
> >
> > - /* link the first chunk in */
> > - pcpu_first_chunk = chunk;
> > + /* init dynamic chunk if necessary */
> > + tmp_addr += (unsigned long)ai->reserved_size;
>
> I'm not a big fan of += the tmp_addr as I personally find it easier to
> read if it's just laid out explicitly.

OK, will change.

>
> > + map_size = dyn_size;
> > + pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
> > +
> > pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
> > pcpu_chunk_relocate(pcpu_first_chunk, -1);
> >
> > --
> > 2.34.1
> >
>
> Overall, I think this is good, but I'd go 1 step further and get rid of
> map_size. Regarding tmp_addr, I'd prefer if we kept all the math
> together.

Makes sense. Thanks a lot for your careful review and great suggestions.

Based on your comments, I made a draft v2. Please help check whether
I've addressed them correctly and whether the new change looks OK to you.

From 17832ce8a755d8327b853a18c6f1cc00c9f93e50 Mon Sep 17 00:00:00 2001
From: Baoquan He <[email protected]>
Date: Tue, 27 Jun 2023 09:33:28 +0800
Subject: [PATCH] mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a
little bit
Content-type: text/plain

This removes the need for the local variable 'chunk', and simplifies
the code calling pcpu_alloc_first_chunk() to initialize the reserved
and dynamic chunks.

Signed-off-by: Baoquan He <[email protected]>
---
mm/percpu.c | 36 +++++++++++++-----------------------
1 file changed, 13 insertions(+), 23 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 1480bf283d11..83fc47206680 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2581,14 +2581,12 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
{
size_t size_sum = ai->static_size + ai->reserved_size + ai->dyn_size;
size_t static_size, dyn_size;
- struct pcpu_chunk *chunk;
unsigned long *group_offsets;
size_t *group_sizes;
unsigned long *unit_off;
unsigned int cpu;
int *unit_map;
int group, unit, i;
- int map_size;
unsigned long tmp_addr;
size_t alloc_size;

@@ -2697,7 +2695,7 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
pcpu_unit_pages = ai->unit_size >> PAGE_SHIFT;
pcpu_unit_size = pcpu_unit_pages << PAGE_SHIFT;
pcpu_atom_size = ai->atom_size;
- pcpu_chunk_struct_size = struct_size(chunk, populated,
+ pcpu_chunk_struct_size = struct_size((struct pcpu_chunk *)0, populated,
BITS_TO_LONGS(pcpu_unit_pages));

pcpu_stats_save_ai(ai);
@@ -2734,29 +2732,21 @@ void __init pcpu_setup_first_chunk(const struct pcpu_alloc_info *ai,
dyn_size = ai->dyn_size - (static_size - ai->static_size);

/*
- * Initialize first chunk.
- * If the reserved_size is non-zero, this initializes the reserved
- * chunk. If the reserved_size is zero, the reserved chunk is NULL
- * and the dynamic region is initialized here. The first chunk,
- * pcpu_first_chunk, will always point to the chunk that serves
- * the dynamic region.
+ * Initialize first chunk:
+ *
+ * - If the reserved_size is non-zero, initialize the reserved
+ * chunk firstly. Otherwise, the reserved chunk is NULL.
+ *
+ * - The first chunk, pcpu_first_chunk, always points to the
+ * chunk that serves the dynamic region.
*/
tmp_addr = (unsigned long)base_addr + static_size;
- map_size = ai->reserved_size ?: dyn_size;
- chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
-
- /* init dynamic chunk if necessary */
- if (ai->reserved_size) {
- pcpu_reserved_chunk = chunk;
-
- tmp_addr = (unsigned long)base_addr + static_size +
- ai->reserved_size;
- map_size = dyn_size;
- chunk = pcpu_alloc_first_chunk(tmp_addr, map_size);
- }
+ if (ai->reserved_size)
+ pcpu_reserved_chunk = pcpu_alloc_first_chunk(tmp_addr,
+ ai->reserved_size);
+ tmp_addr = (unsigned long)base_addr + static_size + ai->reserved_size;
+ pcpu_first_chunk = pcpu_alloc_first_chunk(tmp_addr, dyn_size);

- /* link the first chunk in */
- pcpu_first_chunk = chunk;
pcpu_nr_empty_pop_pages = pcpu_first_chunk->nr_empty_pop_pages;
pcpu_chunk_relocate(pcpu_first_chunk, -1);

--
2.34.1


2023-07-22 02:48:10

by Baoquan He

Subject: Re: [PATCH 3/3] mm/percpu.c: print error message too if atomic alloc failed

On 07/21/23 at 02:03pm, Dennis Zhou wrote:
> On Fri, Jul 21, 2023 at 09:18:00PM +0800, Baoquan He wrote:
> > The variable 'err' is assigned an error message if atomic alloc
> > failed, but it never gets printed when is_atomic is true.
> >
> > Change this to print the error message for a failed atomic alloc as
> > well, while avoiding the dump_stack() call in that case.
> >
> > Signed-off-by: Baoquan He <[email protected]>
> > ---
> > mm/percpu.c | 6 ++++--
> > 1 file changed, 4 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/percpu.c b/mm/percpu.c
> > index c25b058a46ad..74f75ef0ad58 100644
> > --- a/mm/percpu.c
> > +++ b/mm/percpu.c
> > @@ -1890,13 +1890,15 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved,
> > fail:
> > trace_percpu_alloc_percpu_fail(reserved, is_atomic, size, align);
> >
> > - if (!is_atomic && do_warn && warn_limit) {
> > + if (do_warn && warn_limit) {
> > pr_warn("allocation failed, size=%zu align=%zu atomic=%d, %s\n",
> > size, align, is_atomic, err);
> > - dump_stack();
> > + if (is_atomic)
> > + dump_stack();
>
> This should be (!is_atomic) to preserve the current logic?

You are quite right, I must have been dizzy when making the change.
Will fix this. Thanks for reviewing.

>
> > if (!--warn_limit)
> > pr_info("limit reached, disable warning\n");
> > }
> > +
> > if (is_atomic) {
> > /* see the flag handling in pcpu_balance_workfn() */
> > pcpu_atomic_alloc_failed = true;
> > --
> > 2.34.1
> >
>
> Thanks,
> Dennis
>


2023-07-22 03:57:15

by Baoquan He

Subject: Re: [PATCH 0/3] percpu: some trivial cleanup patches

On 07/21/23 at 02:04pm, Dennis Zhou wrote:
> Hello,
>
> On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> > There's a leftover issue about percpu code in my mailbox, linked
> > below. After rechecking it and considering Dennis's comment, I made
> > an attempt to fix it with patch 3.
> >
> > https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
> >
> > Patch 1 and 2 are trivial clean up patches when reading percpu code.
> >
> > Baoquan He (3):
> > mm/percpu.c: remove redundant check
> > mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
> > bit
> > mm/percpu.c: print error message too if atomic alloc failed
> >
> > mm/percpu.c | 39 +++++++++++++++++----------------------
> > 1 file changed, 17 insertions(+), 22 deletions(-)
> >
> > --
> > 2.34.1
> >
>
> Thanks for these. I left a few comments. I think I might have some stuff
> for v6.6, I'll figure that out in a couple days. If that's so, I can
> pull 1, probably massage 2 and send that out again, and then I think
> you'll need to resend 3.

Sure, thanks for the careful review and great suggestions. So I only
need to send v2 of patch 3, right? Or should I change and resend v2 of
the whole series? I'm not sure I have that clear.


2023-07-27 23:20:03

by Dennis Zhou

Subject: Re: [PATCH 0/3] percpu: some trivial cleanup patches

Hi,

On Sat, Jul 22, 2023 at 11:30:14AM +0800, Baoquan He wrote:
> On 07/21/23 at 02:04pm, Dennis Zhou wrote:
> > Hello,
> >
> > On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> > > There's a leftover issue about percpu code in my mailbox, linked
> > > below. After rechecking it and considering Dennis's comment, I made
> > > an attempt to fix it with patch 3.
> > >
> > > https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
> > >
> > > Patch 1 and 2 are trivial clean up patches when reading percpu code.
> > >
> > > Baoquan He (3):
> > > mm/percpu.c: remove redundant check
> > > mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
> > > bit
> > > mm/percpu.c: print error message too if atomic alloc failed
> > >
> > > mm/percpu.c | 39 +++++++++++++++++----------------------
> > > 1 file changed, 17 insertions(+), 22 deletions(-)
> > >
> > > --
> > > 2.34.1
> > >
> >
> > Thanks for these. I left a few comments. I think I might have some stuff
> > for v6.6, I'll figure that out in a couple days. If that's so, I can
> > pull 1, probably massage 2 and send that out again, and then I think
> > you'll need to resend 3.
>
> Sure, thanks for the careful review and great suggestions. So I only
> need to send v2 of patch 3, right? Or should I change and resend v2 of
> the whole series? I'm not sure I have that clear.
>

Sorry for the delay. I've pulled 1 and 2 (and reworded the comment).
Can you please resend patch 3?

Thanks,
Dennis

2023-07-28 05:00:31

by Baoquan He

Subject: Re: [PATCH 0/3] percpu: some trivial cleanup patches

On 07/27/23 at 03:50pm, Dennis Zhou wrote:
> Hi,
>
> On Sat, Jul 22, 2023 at 11:30:14AM +0800, Baoquan He wrote:
> > On 07/21/23 at 02:04pm, Dennis Zhou wrote:
> > > Hello,
> > >
> > > On Fri, Jul 21, 2023 at 09:17:57PM +0800, Baoquan He wrote:
> > > > There's a leftover issue about percpu code in my mailbox, linked
> > > > below. After rechecking it and considering Dennis's comment, I made
> > > > an attempt to fix it with patch 3.
> > > >
> > > > https://lore.kernel.org/all/Y407wDMKq5ibE9sc@fedora/T/#u
> > > >
> > > > Patch 1 and 2 are trivial clean up patches when reading percpu code.
> > > >
> > > > Baoquan He (3):
> > > > mm/percpu.c: remove redundant check
> > > > mm/percpu.c: optimize the code in pcpu_setup_first_chunk() a little
> > > > bit
> > > > mm/percpu.c: print error message too if atomic alloc failed
> > > >
> > > > mm/percpu.c | 39 +++++++++++++++++----------------------
> > > > 1 file changed, 17 insertions(+), 22 deletions(-)
> > > >
> > > > --
> > > > 2.34.1
> > > >
> > >
> > > Thanks for these. I left a few comments. I think I might have some stuff
> > > for v6.6, I'll figure that out in a couple days. If that's so, I can
> > > pull 1, probably massage 2 and send that out again, and then I think
> > > you'll need to resend 3.
> >
> > Sure, thanks for the careful review and great suggestions. So I only
> > need to send v2 of patch 3, right? Or should I change and resend v2 of
> > the whole series? I'm not sure I have that clear.
> >
>
> Sorry for the delay. I've pulled 1 and 2 (reworded the comment). Can you
> please resend patch 3.

Sent out v2 of patch 3, thanks.