Subject: Re: Suspicious error for CMA stress test
From: Hanjun Guo
To: Joonsoo Kim, Laura Abbott
Cc: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
    Andrew Morton, Sasha Levin, Laura Abbott, qiuxishi, Catalin Marinas,
    Will Deacon, Arnd Bergmann, thunder.leizhen@huawei.com, dingtinahong,
    linux-mm@kvack.org
Date: Thu, 3 Mar 2016 20:49:01 +0800
Message-ID: <56D832BD.5080305@huawei.com>
References: <56D6F008.1050600@huawei.com> <56D79284.3030009@redhat.com>

On 2016/3/3 15:42, Joonsoo Kim wrote:
> 2016-03-03 10:25 GMT+09:00 Laura Abbott:
>> (cc -mm and Joonsoo Kim)
>>
>>
>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>>> Hi,
>>>
>>> I came across a suspicious error for CMA stress test:
>>>
>>> Before the test, I got:
>>> -bash-4.3# cat /proc/meminfo | grep Cma
>>> CmaTotal:         204800 kB
>>> CmaFree:          195044 kB
>>>
>>>
>>> After running the test:
>>> -bash-4.3# cat /proc/meminfo | grep Cma
>>> CmaTotal:         204800 kB
>>> CmaFree:         6602584 kB
>>>
>>> So the freed CMA memory is more than the total..
>>>
>>> Also the MemFree is more than mem total:
>>>
>>> -bash-4.3# cat /proc/meminfo
>>> MemTotal:       16342016 kB
>>> MemFree:        22367268 kB
>>> MemAvailable:   22370528 kB
>>> [...]
>>
>> I played with this a bit and can see the same problem. The sanity
>> check of CmaFree < CmaTotal generally triggers in
>> __move_zone_freepage_state in unset_migratetype_isolate.
>> This also seems to be present as far back as v4.0 which was the
>> first version to have the updated accounting from Joonsoo.
>> Were there known limitations with the new freepage accounting,
>> Joonsoo?
>
> I don't know. I also played with this and it looks like there is an
> accounting problem; however, for my case, the number of free pages is
> slightly less than the total. I will take a look.
>
> Hanjun, could you tell me your malloc_size? I tested with 1 and it
> doesn't look like your case.

I tested with a malloc_size of 2M, and the error grows much bigger than
with 1M. I also did some other tests:

 - Run with a single thread for 100000 iterations: everything is fine.

 - Hack cma_alloc() and cma_release() as below [1] to see if it's a
   locking issue, then run the same test with 100 threads; I got:

-bash-4.3# cat /proc/meminfo | grep Cma
CmaTotal:         204800 kB
CmaFree:          225112 kB

CmaFree only increased by about 30M, not the 6G+ of the previous test.
Although the problem is not solved, it is less serious. Is it a
synchronization problem?
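For reference, the load here is essentially many threads concurrently
allocating and releasing 2M chunks from the CMA area. A rough kernel-side
sketch of that pattern is below; test_cma, cma_stress_thread() and the
thread/iteration counts are assumptions for illustration, not the actual
test code:

/*
 * Sketch of the alloc/release pattern under discussion, assuming a
 * kernel-side test (cma_alloc()/cma_release() are not callable from
 * userspace). 'test_cma', NR_THREADS and ITERATIONS are made up.
 */
#include <linux/cma.h>
#include <linux/kthread.h>
#include <linux/delay.h>
#include <linux/sizes.h>
#include <linux/mm.h>

#define NR_THREADS	100
#define ALLOC_PAGES	(SZ_2M >> PAGE_SHIFT)	/* "malloc_size" of 2M */
#define ITERATIONS	10000

static struct cma *test_cma;	/* assumed to point to a valid CMA area */

static int cma_stress_thread(void *data)
{
	int i;

	for (i = 0; i < ITERATIONS; i++) {
		struct page *page = cma_alloc(test_cma, ALLOC_PAGES, 0);

		if (!page)
			continue;
		/* hold the allocation briefly so alloc/release overlap */
		usleep_range(100, 200);
		cma_release(test_cma, page, ALLOC_PAGES);
	}
	return 0;
}

static void cma_stress_start(void)
{
	int i;

	for (i = 0; i < NR_THREADS; i++)
		kthread_run(cma_stress_thread, NULL, "cma-stress/%d", i);
}

Watching "cat /proc/meminfo | grep Cma" while this runs, as above, shows
whether CmaFree drifts past CmaTotal.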
Thanks
Hanjun

[1]:
index ea506eb..4447494 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -379,6 +379,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align)
 	if (!count)
 		return NULL;
 
+	mutex_lock(&cma_mutex);
 	mask = cma_bitmap_aligned_mask(cma, align);
 	offset = cma_bitmap_aligned_offset(cma, align);
 	bitmap_maxno = cma_bitmap_maxno(cma);
@@ -402,17 +403,16 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align)
 		mutex_unlock(&cma->lock);
 
 		pfn = cma->base_pfn + (bitmap_no << cma->order_per_bit);
-		mutex_lock(&cma_mutex);
 		ret = alloc_contig_range(pfn, pfn + count, MIGRATE_CMA);
-		mutex_unlock(&cma_mutex);
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
 			break;
 		}
 
 		cma_clear_bitmap(cma, pfn, count);
-		if (ret != -EBUSY)
+		if (ret != -EBUSY) {
 			break;
+		}
 
 		pr_debug("%s(): memory range at %p is busy, retrying\n",
 			 __func__, pfn_to_page(pfn));
@@ -420,6 +420,7 @@ struct page *cma_alloc(struct cma *cma, size_t count, unsigned int align)
 		start = bitmap_no + mask + 1;
 	}
 
+	mutex_unlock(&cma_mutex);
 	trace_cma_alloc(pfn, page, count, align);
 
 	pr_debug("%s(): returned %p\n", __func__, page);
@@ -445,15 +446,19 @@ bool cma_release(struct cma *cma, const struct page *pages, unsigned int count)
 
 	pr_debug("%s(page %p)\n", __func__, (void *)pages);
 
+	mutex_lock(&cma_mutex);
 	pfn = page_to_pfn(pages);
 
-	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count)
+	if (pfn < cma->base_pfn || pfn >= cma->base_pfn + cma->count) {
+		mutex_unlock(&cma_mutex);
 		return false;
+	}
 
 	VM_BUG_ON(pfn + count > cma->base_pfn + cma->count);
 
 	free_contig_range(pfn, count);
 	cma_clear_bitmap(cma, pfn, count);
+	mutex_unlock(&cma_mutex);
 	trace_cma_release(pfn, pages, count);
 
 	return true;
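If the per-zone counter itself is drifting, it may also help to catch the
moment NR_FREE_CMA_PAGES goes out of range instead of only noticing it
later in /proc/meminfo. A rough sketch of such a check follows;
cma_check_free_count() and its cma_reserved_pages argument are made-up
names for illustration (zone_page_state() and NR_FREE_CMA_PAGES are the
existing per-zone counters):

#include <linux/mmzone.h>
#include <linux/vmstat.h>
#include <linux/bug.h>

/*
 * Hypothetical debug helper, not an existing kernel function: warn as
 * soon as the zone's CMA free-page counter exceeds the number of pages
 * reserved for CMA in that zone. 'cma_reserved_pages' is a value the
 * caller would have to track; it is not an existing kernel symbol.
 */
static inline void cma_check_free_count(struct zone *zone,
					unsigned long cma_reserved_pages)
{
	unsigned long cma_free = zone_page_state(zone, NR_FREE_CMA_PAGES);

	WARN_ONCE(cma_free > cma_reserved_pages,
		  "suspicious NR_FREE_CMA_PAGES: %lu (reserved: %lu)\n",
		  cma_free, cma_reserved_pages);
}

Called after free_contig_range() with page_zone() of the released range,
something like this would flag the first release that pushes the counter
past the reservation.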