2022-10-24 08:22:39

by Baoquan He

Subject: [PATCH 2/8] mm/percpu: use list_first_entry_or_null in pcpu_reclaim_populated()

Replace the list_empty()/list_first_entry() pair with list_first_entry_or_null() to simplify the code.

Signed-off-by: Baoquan He <[email protected]>
---
mm/percpu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 26d8cd2ca323..a3fde4ac03a4 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2143,9 +2143,9 @@ static void pcpu_reclaim_populated(void)
 	 * other accessor is the free path which only returns area back to the
 	 * allocator not touching the populated bitmap.
 	 */
-	while (!list_empty(&pcpu_chunk_lists[pcpu_to_depopulate_slot])) {
-		chunk = list_first_entry(&pcpu_chunk_lists[pcpu_to_depopulate_slot],
-					 struct pcpu_chunk, list);
+	while (chunk = list_first_entry_or_null(
+			&pcpu_chunk_lists[pcpu_to_depopulate_slot],
+			struct pcpu_chunk, list)) {
 		WARN_ON(chunk->immutable);

 		/*
--
2.34.1


2022-10-24 10:27:15

by kernel test robot

Subject: Re: [PATCH 2/8] mm/percpu: use list_first_entry_or_null in pcpu_reclaim_populated()

Hi Baoquan,

I love your patch! Perhaps something to improve:

[auto build test WARNING on vbabka-slab/for-next]
[also build test WARNING on akpm-mm/mm-everything linus/master v6.1-rc2 next-20221024]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url: https://github.com/intel-lab-lkp/linux/commits/Baoquan-He/Cleanup-and-optimization-patches-for-percpu/20221024-161636
base: git://git.kernel.org/pub/scm/linux/kernel/git/vbabka/slab.git for-next
patch link: https://lore.kernel.org/r/20221024081435.204970-3-bhe%40redhat.com
patch subject: [PATCH 2/8] mm/percpu: use list_first_entry_or_null in pcpu_reclaim_populated()
config: um-i386_defconfig
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/75b290e1aa943a113526ab78c22b38d07bf7a462
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Baoquan-He/Cleanup-and-optimization-patches-for-percpu/20221024-161636
git checkout 75b290e1aa943a113526ab78c22b38d07bf7a462
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=um SUBARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <[email protected]>

All warnings (new ones prefixed by >>):

mm/percpu.c: In function 'pcpu_reclaim_populated':
>> mm/percpu.c:2146:16: warning: suggest parentheses around assignment used as truth value [-Wparentheses]
2146 | while (chunk = list_first_entry_or_null(
| ^~~~~


vim +2146 mm/percpu.c

2114
2115 /**
2116 * pcpu_reclaim_populated - scan over to_depopulate chunks and free empty pages
2117 *
2118 * Scan over chunks in the depopulate list and try to release unused populated
2119 * pages back to the system. Depopulated chunks are sidelined to prevent
2120 * repopulating these pages unless required. Fully free chunks are reintegrated
2121 * and freed accordingly (1 is kept around). If we drop below the empty
2122 * populated pages threshold, reintegrate the chunk if it has empty free pages.
2123 * Each chunk is scanned in the reverse order to keep populated pages close to
2124 * the beginning of the chunk.
2125 *
2126 * CONTEXT:
2127 * pcpu_lock (can be dropped temporarily)
2128 *
2129 */
2130	static void pcpu_reclaim_populated(void)
2131	{
2132		struct pcpu_chunk *chunk;
2133		struct pcpu_block_md *block;
2134		int freed_page_start, freed_page_end;
2135		int i, end;
2136		bool reintegrate;
2137	
2138		lockdep_assert_held(&pcpu_lock);
2139	
2140		/*
2141		 * Once a chunk is isolated to the to_depopulate list, the chunk is no
2142		 * longer discoverable to allocations whom may populate pages. The only
2143		 * other accessor is the free path which only returns area back to the
2144		 * allocator not touching the populated bitmap.
2145		 */
> 2146		while (chunk = list_first_entry_or_null(
2147				&pcpu_chunk_lists[pcpu_to_depopulate_slot],
2148				struct pcpu_chunk, list)) {
2149			WARN_ON(chunk->immutable);
2150	
2151			/*
2152			 * Scan chunk's pages in the reverse order to keep populated
2153			 * pages close to the beginning of the chunk.
2154			 */
2155			freed_page_start = chunk->nr_pages;
2156			freed_page_end = 0;
2157			reintegrate = false;
2158			for (i = chunk->nr_pages - 1, end = -1; i >= 0; i--) {
2159				/* no more work to do */
2160				if (chunk->nr_empty_pop_pages == 0)
2161					break;
2162	
2163				/* reintegrate chunk to prevent atomic alloc failures */
2164				if (pcpu_nr_empty_pop_pages < PCPU_EMPTY_POP_PAGES_HIGH) {
2165					reintegrate = true;
2166					goto end_chunk;
2167				}
2168	
2169				/*
2170				 * If the page is empty and populated, start or
2171				 * extend the (i, end) range. If i == 0, decrease
2172				 * i and perform the depopulation to cover the last
2173				 * (first) page in the chunk.
2174				 */
2175				block = chunk->md_blocks + i;
2176				if (block->contig_hint == PCPU_BITMAP_BLOCK_BITS &&
2177				    test_bit(i, chunk->populated)) {
2178					if (end == -1)
2179						end = i;
2180					if (i > 0)
2181						continue;
2182					i--;
2183				}
2184	
2185				/* depopulate if there is an active range */
2186				if (end == -1)
2187					continue;
2188	
2189				spin_unlock_irq(&pcpu_lock);
2190				pcpu_depopulate_chunk(chunk, i + 1, end + 1);
2191				cond_resched();
2192				spin_lock_irq(&pcpu_lock);
2193	
2194				pcpu_chunk_depopulated(chunk, i + 1, end + 1);
2195				freed_page_start = min(freed_page_start, i + 1);
2196				freed_page_end = max(freed_page_end, end + 1);
2197	
2198				/* reset the range and continue */
2199				end = -1;
2200			}
2201	
2202	end_chunk:
2203			/* batch tlb flush per chunk to amortize cost */
2204			if (freed_page_start < freed_page_end) {
2205				spin_unlock_irq(&pcpu_lock);
2206				pcpu_post_unmap_tlb_flush(chunk,
2207							  freed_page_start,
2208							  freed_page_end);
2209				cond_resched();
2210				spin_lock_irq(&pcpu_lock);
2211			}
2212	
2213			if (reintegrate || chunk->free_bytes == pcpu_unit_size)
2214				pcpu_reintegrate_chunk(chunk);
2215			else
2216				list_move_tail(&chunk->list,
2217					       &pcpu_chunk_lists[pcpu_sidelined_slot]);
2218		}
2219	}
2220

--
0-DAY CI Kernel Test Service
https://01.org/lkp



2022-10-24 20:26:12

by Dennis Zhou

Subject: Re: [PATCH 2/8] mm/percpu: use list_first_entry_or_null in pcpu_reclaim_populated()

On Mon, Oct 24, 2022 at 04:14:29PM +0800, Baoquan He wrote:
> Replace the list_empty()/list_first_entry() pair with list_first_entry_or_null() to simplify the code.
>
> Signed-off-by: Baoquan He <[email protected]>
> ---
> mm/percpu.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/percpu.c b/mm/percpu.c
> index 26d8cd2ca323..a3fde4ac03a4 100644
> --- a/mm/percpu.c
> +++ b/mm/percpu.c
> @@ -2143,9 +2143,9 @@ static void pcpu_reclaim_populated(void)
>  	 * other accessor is the free path which only returns area back to the
>  	 * allocator not touching the populated bitmap.
>  	 */
> -	while (!list_empty(&pcpu_chunk_lists[pcpu_to_depopulate_slot])) {
> -		chunk = list_first_entry(&pcpu_chunk_lists[pcpu_to_depopulate_slot],
> -					 struct pcpu_chunk, list);
> +	while (chunk = list_first_entry_or_null(
> +			&pcpu_chunk_lists[pcpu_to_depopulate_slot],
> +			struct pcpu_chunk, list)) {
>  		WARN_ON(chunk->immutable);
> 
>  		/*
> --
> 2.34.1
>

With added parenthesis,

Acked-by: Dennis Zhou <[email protected]>

Thanks,
Dennis

2022-10-25 03:18:08

by Baoquan He

Subject: [PATCH v2 2/8] mm/percpu: use list_first_entry_or_null in pcpu_reclaim_populated()

Replace the list_empty()/list_first_entry() pair with list_first_entry_or_null() to simplify the code.

Signed-off-by: Baoquan He <[email protected]>
Acked-by: Dennis Zhou <[email protected]>
---
v1->v2:
- Add parentheses around the assignment in the while condition. This
  silences the lkp -Wparentheses warning.

mm/percpu.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index 26d8cd2ca323..841bb93aaae6 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -2143,9 +2143,9 @@ static void pcpu_reclaim_populated(void)
 	 * other accessor is the free path which only returns area back to the
 	 * allocator not touching the populated bitmap.
 	 */
-	while (!list_empty(&pcpu_chunk_lists[pcpu_to_depopulate_slot])) {
-		chunk = list_first_entry(&pcpu_chunk_lists[pcpu_to_depopulate_slot],
-					 struct pcpu_chunk, list);
+	while ((chunk = list_first_entry_or_null(
+			&pcpu_chunk_lists[pcpu_to_depopulate_slot],
+			struct pcpu_chunk, list))) {
 		WARN_ON(chunk->immutable);

 		/*
--
2.34.1