2022-05-09 10:55:59

by Aisheng Dong

Subject: [PATCH 1/1] Revert "mm/cma.c: remove redundant cma_mutex lock"

This reverts commit a4efc174b382fcdb62e2d90d39e78a274a975e38, which
introduced a regression: when multiple processes allocate DMA memory
in parallel by calling dma_alloc_coherent(), the allocation may
sometimes fail as follows:

Error log:
cma: cma_alloc: linux,cma: alloc failed, req-size: 148 pages, ret: -16
cma: number of available pages:
3@125+20@172+12@236+4@380+32@736+17@2287+23@2473+20@36076+99@40477+108@40852+44@41108+20@41196+108@41364+108@41620+
108@42900+108@43156+483@44061+1763@45341+1440@47712+20@49324+20@49388+5076@49452+2304@55040+35@58141+20@58220+20@58284+
7188@58348+84@66220+7276@66452+227@74525+6371@75549=> 33161 free of 81920 total pages

When the issue happened, there were still 33161 pages (129M) of free
CMA memory and many free slots in the CMA bitmap large enough for the
148 pages we wanted to allocate.

When dumping the memory info, we found that there was also ~342M of
normal memory, but only 1352K of CMA memory left in the buddy system,
while many pageblocks were isolated.

Memory info log:
Normal free:351096kB min:30000kB low:37500kB high:45000kB reserved_highatomic:0KB
active_anon:98060kB inactive_anon:98948kB active_file:60864kB inactive_file:31776kB
unevictable:0kB writepending:0kB present:1048576kB managed:1018328kB mlocked:0kB
bounce:0kB free_pcp:220kB local_pcp:192kB free_cma:1352kB lowmem_reserve[]: 0 0 0
Normal: 78*4kB (UECI) 1772*8kB (UMECI) 1335*16kB (UMECI) 360*32kB (UMECI) 65*64kB (UMCI)
36*128kB (UMECI) 16*256kB (UMCI) 6*512kB (EI) 8*1024kB (UEI) 4*2048kB (MI) 8*4096kB (EI)
8*8192kB (UI) 3*16384kB (EI) 8*32768kB (M) = 489288kB

The root cause is that since commit a4efc174b382 ("mm/cma.c: remove
redundant cma_mutex lock"), CMA supports concurrent memory allocation.
It is possible that the memory range process A is trying to allocate
has already been isolated by the allocation of process B during memory
migration.

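To illustrate (a hypothetical interleaving, not a trace captured from
the failing system), the race looks roughly like this:

  Process A                          Process B
  ---------                          ---------
  cma_alloc()                        cma_alloc()
    find free bitmap range X           find free bitmap range Y
    alloc_contig_range(X)
      start_isolate_page_range()
        isolates the MAX_ORDER-aligned
        range around X, which also
        covers Y
                                       alloc_contig_range(Y)
                                         start_isolate_page_range()
                                           pages already isolated
                                           -> fails with -EBUSY (-16)
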
The problem is that the memory range isolated during one allocation by
start_isolate_page_range() can be much bigger than the size we actually
want to allocate, because the range is aligned to MAX_ORDER_NR_PAGES.

Take an ARMv7 platform with 1G of memory as an example: when
MAX_ORDER_NR_PAGES is big (e.g. 32M with max_order 14) and the CMA area
is relatively small (e.g. 128M), there are only 4 MAX_ORDER-aligned
slots, so it is quite likely that all CMA memory has already been
isolated by other processes when one process tries to allocate memory
using dma_alloc_coherent(). Since the current CMA code only scans the
whole available CMA memory once, dma_alloc_coherent() can easily fail
due to contention with other processes.

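As a rough sketch of the arithmetic (standalone userspace C with
made-up helper names, not the actual kernel code), here is how a
148-page request ends up isolating a full 32M MAX_ORDER block, and why
a 128M CMA area offers only 4 such blocks:

#include <stdio.h>

/* max_order 14 => MAX_ORDER_NR_PAGES = 1 << 13 = 8192 pages (32M with 4K pages) */
#define MAX_ORDER_NR_PAGES (1UL << 13)

/* Hypothetical helpers mirroring the alignment applied during isolation. */
static unsigned long max_align_down(unsigned long pfn)
{
	return pfn & ~(MAX_ORDER_NR_PAGES - 1);
}

static unsigned long max_align_up(unsigned long pfn)
{
	return (pfn + MAX_ORDER_NR_PAGES - 1) & ~(MAX_ORDER_NR_PAGES - 1);
}

int main(void)
{
	unsigned long start = 125, count = 148;	/* req-size from the log above */
	unsigned long end = start + count;

	/* The isolated range is the MAX_ORDER-aligned superset of [start, end). */
	printf("requested %lu pages, isolated %lu pages\n",
	       count, max_align_up(end) - max_align_down(start));

	/* 128M CMA = 32768 pages => 32768 / 8192 = 4 MAX_ORDER blocks total. */
	printf("MAX_ORDER blocks in a 128M CMA area: %lu\n",
	       32768UL / MAX_ORDER_NR_PAGES);
	return 0;
}

This prints "requested 148 pages, isolated 8192 pages": a request of
well under 1M pins down 32M, so a handful of concurrent allocations can
isolate the entire area.
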
This patch simply falls back to the original method of using cma_mutex
to make alloc_contig_range() run sequentially, avoiding the issue.

Cc: Andrew Morton <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Lecopzer Chen <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Minchan Kim <[email protected]>
CC: [email protected] # 5.11+
Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
Signed-off-by: Dong Aisheng <[email protected]>
---
Patch is based on
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-stable
---
mm/cma.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/mm/cma.c b/mm/cma.c
index eaa4b5c920a2..4a978e09547a 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -37,6 +37,7 @@

struct cma cma_areas[MAX_CMA_AREAS];
unsigned cma_area_count;
+static DEFINE_MUTEX(cma_mutex);

phys_addr_t cma_get_base(const struct cma *cma)
{
@@ -468,9 +469,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
spin_unlock_irq(&cma->lock);

pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
+ mutex_lock(&cma_mutex);
ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
-
+ mutex_unlock(&cma_mutex);
if (ret == 0) {
page = pfn_to_page(pfn);
break;
--
2.25.1



2022-05-09 15:46:25

by Minchan Kim

Subject: Re: [PATCH 1/1] Revert "mm/cma.c: remove redundant cma_mutex lock"

On Mon, May 09, 2022 at 05:45:51PM +0800, Dong Aisheng wrote:
> This reverts commit a4efc174b382fcdb62e2d90d39e78a274a975e38, which
> introduced a regression: when multiple processes allocate DMA memory
> in parallel by calling dma_alloc_coherent(), the allocation may
> sometimes fail as follows:
>
> Error log:
> cma: cma_alloc: linux,cma: alloc failed, req-size: 148 pages, ret: -16
> cma: number of available pages:
> 3@125+20@172+12@236+4@380+32@736+17@2287+23@2473+20@36076+99@40477+108@40852+44@41108+20@41196+108@41364+108@41620+
> 108@42900+108@43156+483@44061+1763@45341+1440@47712+20@49324+20@49388+5076@49452+2304@55040+35@58141+20@58220+20@58284+
> 7188@58348+84@66220+7276@66452+227@74525+6371@75549=> 33161 free of 81920 total pages
>
> When the issue happened, there were still 33161 pages (129M) of free
> CMA memory and many free slots in the CMA bitmap large enough for the
> 148 pages we wanted to allocate.
>
> When dumping the memory info, we found that there was also ~342M of
> normal memory, but only 1352K of CMA memory left in the buddy system,
> while many pageblocks were isolated.
>
> Memory info log:
> Normal free:351096kB min:30000kB low:37500kB high:45000kB reserved_highatomic:0KB
> active_anon:98060kB inactive_anon:98948kB active_file:60864kB inactive_file:31776kB
> unevictable:0kB writepending:0kB present:1048576kB managed:1018328kB mlocked:0kB
> bounce:0kB free_pcp:220kB local_pcp:192kB free_cma:1352kB lowmem_reserve[]: 0 0 0
> Normal: 78*4kB (UECI) 1772*8kB (UMECI) 1335*16kB (UMECI) 360*32kB (UMECI) 65*64kB (UMCI)
> 36*128kB (UMECI) 16*256kB (UMCI) 6*512kB (EI) 8*1024kB (UEI) 4*2048kB (MI) 8*4096kB (EI)
> 8*8192kB (UI) 3*16384kB (EI) 8*32768kB (M) = 489288kB
>
> The root cause is that since commit a4efc174b382 ("mm/cma.c: remove
> redundant cma_mutex lock"), CMA supports concurrent memory allocation.
> It is possible that the memory range process A is trying to allocate
> has already been isolated by the allocation of process B during memory
> migration.
>
> The problem is that the memory range isolated during one allocation by
> start_isolate_page_range() can be much bigger than the size we actually
> want to allocate, because the range is aligned to MAX_ORDER_NR_PAGES.
>
> Take an ARMv7 platform with 1G of memory as an example: when
> MAX_ORDER_NR_PAGES is big (e.g. 32M with max_order 14) and the CMA area
> is relatively small (e.g. 128M), there are only 4 MAX_ORDER-aligned
> slots, so it is quite likely that all CMA memory has already been
> isolated by other processes when one process tries to allocate memory
> using dma_alloc_coherent(). Since the current CMA code only scans the
> whole available CMA memory once, dma_alloc_coherent() can easily fail
> due to contention with other processes.
>
> This patch simply falls back to the original method of using cma_mutex
> to make alloc_contig_range() run sequentially, avoiding the issue.
>


Link: https://lore.kernel.org/all/[email protected]/

Adding a link would be helpful to explain why you decided to revert.

> Cc: Andrew Morton <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Lecopzer Chen <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Minchan Kim <[email protected]>
> CC: [email protected] # 5.11+
> Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
> Signed-off-by: Dong Aisheng <[email protected]>
Acked-by: Minchan Kim <[email protected]>

2022-05-12 11:37:38

by David Hildenbrand

Subject: Re: [PATCH 1/1] Revert "mm/cma.c: remove redundant cma_mutex lock"

On 09.05.22 11:45, Dong Aisheng wrote:
> This reverts commit a4efc174b382fcdb62e2d90d39e78a274a975e38, which
> introduced a regression: when multiple processes allocate DMA memory
> in parallel by calling dma_alloc_coherent(), the allocation may
> sometimes fail as follows:
>
> Error log:
> cma: cma_alloc: linux,cma: alloc failed, req-size: 148 pages, ret: -16
> cma: number of available pages:
> 3@125+20@172+12@236+4@380+32@736+17@2287+23@2473+20@36076+99@40477+108@40852+44@41108+20@41196+108@41364+108@41620+
> 108@42900+108@43156+483@44061+1763@45341+1440@47712+20@49324+20@49388+5076@49452+2304@55040+35@58141+20@58220+20@58284+
> 7188@58348+84@66220+7276@66452+227@74525+6371@75549=> 33161 free of 81920 total pages
>
> When the issue happened, there were still 33161 pages (129M) of free
> CMA memory and many free slots in the CMA bitmap large enough for the
> 148 pages we wanted to allocate.
>
> When dumping the memory info, we found that there was also ~342M of
> normal memory, but only 1352K of CMA memory left in the buddy system,
> while many pageblocks were isolated.
>
> Memory info log:
> Normal free:351096kB min:30000kB low:37500kB high:45000kB reserved_highatomic:0KB
> active_anon:98060kB inactive_anon:98948kB active_file:60864kB inactive_file:31776kB
> unevictable:0kB writepending:0kB present:1048576kB managed:1018328kB mlocked:0kB
> bounce:0kB free_pcp:220kB local_pcp:192kB free_cma:1352kB lowmem_reserve[]: 0 0 0
> Normal: 78*4kB (UECI) 1772*8kB (UMECI) 1335*16kB (UMECI) 360*32kB (UMECI) 65*64kB (UMCI)
> 36*128kB (UMECI) 16*256kB (UMCI) 6*512kB (EI) 8*1024kB (UEI) 4*2048kB (MI) 8*4096kB (EI)
> 8*8192kB (UI) 3*16384kB (EI) 8*32768kB (M) = 489288kB
>
> The root cause is that since commit a4efc174b382 ("mm/cma.c: remove
> redundant cma_mutex lock"), CMA supports concurrent memory allocation.
> It is possible that the memory range process A is trying to allocate
> has already been isolated by the allocation of process B during memory
> migration.
>
> The problem is that the memory range isolated during one allocation by
> start_isolate_page_range() can be much bigger than the size we actually
> want to allocate, because the range is aligned to MAX_ORDER_NR_PAGES.
>
> Take an ARMv7 platform with 1G of memory as an example: when
> MAX_ORDER_NR_PAGES is big (e.g. 32M with max_order 14) and the CMA area
> is relatively small (e.g. 128M), there are only 4 MAX_ORDER-aligned
> slots, so it is quite likely that all CMA memory has already been
> isolated by other processes when one process tries to allocate memory
> using dma_alloc_coherent(). Since the current CMA code only scans the
> whole available CMA memory once, dma_alloc_coherent() can easily fail
> due to contention with other processes.
>
> This patch simply falls back to the original method of using cma_mutex
> to make alloc_contig_range() run sequentially, avoiding the issue.
>
> Cc: Andrew Morton <[email protected]>
> Cc: Marek Szyprowski <[email protected]>
> Cc: Lecopzer Chen <[email protected]>
> Cc: David Hildenbrand <[email protected]>
> Cc: Vlastimil Babka <[email protected]>
> Cc: Minchan Kim <[email protected]>
> CC: [email protected] # 5.11+
> Fixes: a4efc174b382 ("mm/cma.c: remove redundant cma_mutex lock")
> Signed-off-by: Dong Aisheng <[email protected]>
> ---
> Patch is based on
> git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git mm-stable
> ---
> mm/cma.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/mm/cma.c b/mm/cma.c
> index eaa4b5c920a2..4a978e09547a 100644
> --- a/mm/cma.c
> +++ b/mm/cma.c
> @@ -37,6 +37,7 @@
>
> struct cma cma_areas[MAX_CMA_AREAS];
> unsigned cma_area_count;
> +static DEFINE_MUTEX(cma_mutex);
>
> phys_addr_t cma_get_base(const struct cma *cma)
> {
> @@ -468,9 +469,10 @@ struct page *cma_alloc(struct cma *cma, unsigned long count,
> spin_unlock_irq(&cma->lock);
>
> pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
> + mutex_lock(&cma_mutex);
> ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA,
> GFP_KERNEL | (no_warn ? __GFP_NOWARN : 0));
> -
> + mutex_unlock(&cma_mutex);
> if (ret == 0) {
> page = pfn_to_page(pfn);
> break;


Acked-by: David Hildenbrand <[email protected]>

--
Thanks,

David / dhildenb