When testing shmem swapin, I encountered the warning below on my machine.
The reason is that replacing an old shmem folio with a new one causes
mem_cgroup_migrate() to clear the old folio's memcg data. As a result,
the old folio cannot look up the correct memcg's lruvec it needs to remove
itself from the LRU list when it is freed. This can lead to serious
problems, such as LRU list corruption caused by holding the wrong LRU
lock, and incorrect LRU statistics.

To fix this issue, fall back to using mem_cgroup_replace_folio() to
replace the old shmem folio.
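
Roughly, the difference between the two helpers is the following (a
simplified sketch of the idea, not the exact mm/memcontrol.c code;
commit_charge() here stands in for the real charging steps):

	void mem_cgroup_migrate(struct folio *old, struct folio *new)
	{
		/* Hand the memcg data over to the new folio; the old
		 * folio loses it and is expected to be freed by the
		 * migration code without going back through the LRU. */
		new->memcg_data = old->memcg_data;
		old->memcg_data = 0;
	}

	void mem_cgroup_replace_folio(struct folio *old, struct folio *new)
	{
		/* Charge the new folio to the old folio's memcg; the old
		 * folio keeps its memcg data and is uncharged normally
		 * when its last reference is put. */
		commit_charge(new, folio_memcg(old));
	}

shmem_replace_folio() drops the final reference to the old folio while it
is still on the LRU (note the 'lru' flag in the dump below), so its memcg
data must not be cleared, which is why mem_cgroup_replace_folio() is the
right helper here.
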
[ 5241.100311] page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x5d9960
[ 5241.100317] head: order:4 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
[ 5241.100319] flags: 0x17fffe0000040068(uptodate|lru|head|swapbacked|node=0|zone=2|lastcpupid=0x3ffff)
[ 5241.100323] raw: 17fffe0000040068 fffffdffd6687948 fffffdffd69ae008 0000000000000000
[ 5241.100325] raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[ 5241.100326] head: 17fffe0000040068 fffffdffd6687948 fffffdffd69ae008 0000000000000000
[ 5241.100327] head: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
[ 5241.100328] head: 17fffe0000000204 fffffdffd6665801 ffffffffffffffff 0000000000000000
[ 5241.100329] head: 0000000a00000010 0000000000000000 00000000ffffffff 0000000000000000
[ 5241.100330] page dumped because: VM_WARN_ON_ONCE_FOLIO(!memcg && !mem_cgroup_disabled())
[ 5241.100338] ------------[ cut here ]------------
[ 5241.100339] WARNING: CPU: 19 PID: 78402 at include/linux/memcontrol.h:775 folio_lruvec_lock_irqsave+0x140/0x150
[...]
[ 5241.100374] pc : folio_lruvec_lock_irqsave+0x140/0x150
[ 5241.100375] lr : folio_lruvec_lock_irqsave+0x138/0x150
[ 5241.100376] sp : ffff80008b38b930
[...]
[ 5241.100398] Call trace:
[ 5241.100399] folio_lruvec_lock_irqsave+0x140/0x150
[ 5241.100401] __page_cache_release+0x90/0x300
[ 5241.100404] __folio_put+0x50/0x108
[ 5241.100406] shmem_replace_folio+0x1b4/0x240
[ 5241.100409] shmem_swapin_folio+0x314/0x528
[ 5241.100411] shmem_get_folio_gfp+0x3b4/0x930
[ 5241.100412] shmem_fault+0x74/0x160
[ 5241.100414] __do_fault+0x40/0x218
[ 5241.100417] do_shared_fault+0x34/0x1b0
[ 5241.100419] do_fault+0x40/0x168
[ 5241.100420] handle_pte_fault+0x80/0x228
[ 5241.100422] __handle_mm_fault+0x1c4/0x440
[ 5241.100424] handle_mm_fault+0x60/0x1f0
[ 5241.100426] do_page_fault+0x120/0x488
[ 5241.100429] do_translation_fault+0x4c/0x68
[ 5241.100431] do_mem_abort+0x48/0xa0
[ 5241.100434] el0_da+0x38/0xc0
[ 5241.100436] el0t_64_sync_handler+0x68/0xc0
[ 5241.100437] el0t_64_sync+0x14c/0x150
[ 5241.100439] ---[ end trace 0000000000000000 ]---

Fixes: 85ce2c517ade ("memcontrol: only transfer the memcg data for migration")
Signed-off-by: Baolin Wang <[email protected]>
---
mm/shmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 99bd3c34f0fb..4acaf02bfe44 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -1908,7 +1908,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
xa_lock_irq(&swap_mapping->i_pages);
error = shmem_replace_entry(swap_mapping, swap_index, old, new);
if (!error) {
- mem_cgroup_migrate(old, new);
+ mem_cgroup_replace_folio(old, new);
__lruvec_stat_mod_folio(new, NR_FILE_PAGES, 1);
__lruvec_stat_mod_folio(new, NR_SHMEM, 1);
__lruvec_stat_mod_folio(old, NR_FILE_PAGES, -1);
--
2.39.3
On Thu, Jun 13, 2024 at 04:21:19PM GMT, Baolin Wang wrote:
> [...]
> Fixes: 85ce2c517ade ("memcontrol: only transfer the memcg data for migration")
> Signed-off-by: Baolin Wang <[email protected]>

This is a good find. I wonder if we should put
VM_BUG_ON_FOLIO(folio_test_lru(folio), folio) in mem_cgroup_migrate().
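Something like the following at the top of mem_cgroup_migrate(), where
the old folio is still available (untested, just to sketch the idea):

	/* The old folio must already be isolated from the LRU; otherwise
	 * freeing it later cannot look up its lruvec once its memcg data
	 * has been cleared. */
	VM_BUG_ON_FOLIO(folio_test_lru(old), old);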
On Thu, Jun 13, 2024 at 04:21:19PM GMT, Baolin Wang wrote:
> [...]
> Fixes: 85ce2c517ade ("memcontrol: only transfer the memcg data for migration")
> Signed-off-by: Baolin Wang <[email protected]>

Reviewed-by: Shakeel Butt <[email protected]>

One request: please add VM_BUG_ON_FOLIO(folio_test_lru(folio), folio) in
mem_cgroup_migrate().
On Thu, Jun 13, 2024 at 1:21 AM Baolin Wang
<[email protected]> wrote:
>
> [...]
> Fixes: 85ce2c517ade ("memcontrol: only transfer the memcg data for migration")
> Signed-off-by: Baolin Wang <[email protected]>
> ---
> mm/shmem.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 99bd3c34f0fb..4acaf02bfe44 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -1908,7 +1908,7 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
> xa_lock_irq(&swap_mapping->i_pages);
> error = shmem_replace_entry(swap_mapping, swap_index, old, new);
> if (!error) {
> - mem_cgroup_migrate(old, new);
> + mem_cgroup_replace_folio(old, new);
The comment above the definition of mem_cgroup_replace_folio() needs
to be updated with this change. It mentions that it is only used in
replace_page_cache_folio(), which is no longer true.
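For example, something along these lines (I am paraphrasing the existing
kernel-doc from memory, exact wording up to you):

	/**
	 * mem_cgroup_replace_folio - Charge a folio's replacement.
	 * @old: Currently circulating folio.
	 * @new: Replacement folio.
	 *
	 * Charge @new as a replacement folio for @old. @old will be
	 * uncharged upon free. This is used by the page cache and by
	 * shmem when replacing a folio (replace_page_cache_folio() and
	 * shmem_replace_folio()).
	 */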
On 2024/6/14 06:20, Shakeel Butt wrote:
> On Thu, Jun 13, 2024 at 04:21:19PM GMT, Baolin Wang wrote:
>> [...]
>
> Reviewed-by: Shakeel Butt <[email protected]>
Thanks for reviewing.
> One request: please add VM_BUG_ON_FOLIO(folio_test_lru(folio), folio) in
> mem_cgroup_migrate().
Good suggestion. I will create a separate patch to do this.
On 2024/6/14 06:24, Yosry Ahmed wrote:
> On Thu, Jun 13, 2024 at 1:21 AM Baolin Wang
> <[email protected]> wrote:
>>
>> [...]
>> - mem_cgroup_migrate(old, new);
>> + mem_cgroup_replace_folio(old, new);
>
> The comment above the definition of mem_cgroup_replace_folio() needs
> to be updated with this change. It mentions that it is only used in
> replace_page_cache_folio(), which is no longer true.
Indeed, will update the related comments in v2. Thanks.