Subject: Re: Suspicious error for CMA stress test
To: Joonsoo Kim
References: <56D6F008.1050600@huawei.com> <56D79284.3030009@redhat.com> <56D832BD.5080305@huawei.com> <20160304020232.GA12036@js1304-P5Q-DELUXE>
CC: Laura Abbott, "linux-arm-kernel@lists.infradead.org", "linux-kernel@vger.kernel.org", Andrew Morton, Sasha Levin, Laura Abbott, qiuxishi, Catalin Marinas, Will Deacon, Arnd Bergmann, "thunder.leizhen@huawei.com", dingtinahong, "linux-mm@kvack.org"
From: Hanjun Guo
Message-ID: <56D91E18.1020807@huawei.com>
Date: Fri, 4 Mar 2016 13:33:12 +0800
In-Reply-To: <20160304020232.GA12036@js1304-P5Q-DELUXE>

Hi Joonsoo,

On 2016/3/4 10:02, Joonsoo Kim wrote:
> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
>> On 2016/3/3 15:42, Joonsoo Kim wrote:
>>> 2016-03-03 10:25 GMT+09:00 Laura Abbott:
>>>> (cc -mm and Joonsoo Kim)
>>>>
>>>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
>>>>> Hi,
>>>>>
>>>>> I came across a suspicious error in a CMA stress test:
>>>>>
>>>>> Before the test, I got:
>>>>> -bash-4.3# cat /proc/meminfo | grep Cma
>>>>> CmaTotal:         204800 kB
>>>>> CmaFree:          195044 kB
>>>>>
>>>>> After running the test:
>>>>> -bash-4.3# cat /proc/meminfo | grep Cma
>>>>> CmaTotal:         204800 kB
>>>>> CmaFree:         6602584 kB
>>>>>
>>>>> So the freed CMA memory is more than the total..
>>>>>
>>>>> Also the MemFree is more than the memory total:
>>>>>
>>>>> -bash-4.3# cat /proc/meminfo
>>>>> MemTotal:       16342016 kB
>>>>> MemFree:        22367268 kB
>>>>> MemAvailable:   22370528 kB
>> [...]
>>>> I played with this a bit and can see the same problem. The sanity
>>>> check of CmaFree < CmaTotal generally triggers in
>>>> __mod_zone_freepage_state in unset_migratetype_isolate.
>>>> This also seems to be present as far back as v4.0, which was the
>>>> first version to have the updated accounting from Joonsoo.
>>>> Were there known limitations with the new freepage accounting,
>>>> Joonsoo?
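For reference, that accounting helper looks roughly like this in the v4.x
kernels (paraphrasing include/linux/vmstat.h from memory, simplified);
unset_migratetype_isolate() feeds it the page count returned by
move_freepages_block(), so a racy or stale count there goes straight into
the CmaFree value exported through /proc/meminfo:

/*
 * Simplified sketch of the helper named above: it adjusts the zone's
 * global free-page counter and, for MIGRATE_CMA pageblocks, the
 * NR_FREE_CMA_PAGES counter that backs CmaFree.
 */
static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
					     int migratetype)
{
	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
	if (is_migrate_cma(migratetype))
		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
}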
>>> I don't know. I also played with this, and it looks like there is an
>>> accounting problem; however, for my case the number of free pages is
>>> slightly less than the total. I will take a look.
>>>
>>> Hanjun, could you tell me your malloc_size? I tested with 1 and it doesn't
>>> look like your case.
>> I tested with a malloc_size of 2M, and it grows much bigger than with 1M. I
>> also did some other tests:
> Thanks! Now I can reproduce the erroneous situation you mentioned.
>
>> - run with a single thread 100000 times: everything is fine.
>
>> - I hacked cma_alloc() and free as below [1] to see if it's a lock issue; with
>> the same test with 100 threads, I then got:
> [1] would not be sufficient to close this race.
>
> Try the following [A]. And, for a more accurate test, I changed the code a bit
> more to prevent kernel page allocation from the CMA area [B]. This prevents
> kernel page allocation from the CMA area completely, so we can focus on the
> cma_alloc/release race.
>
> Although this is not the correct fix, it could help us guess where the
> problem is.
>
> Thanks.
>
> [A]

I tested this solution [A], and it fixes the problem. As you are posting a new
patch, I will test that one and leave [B] alone :)

Thanks
Hanjun
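P.S. For anyone else who wants to reproduce this: the stress test is
essentially many threads doing 2M cma_alloc()/cma_release() pairs against the
same CMA area in parallel. Below is a rough sketch of that shape only, not my
actual test driver; it assumes the code is built into the kernel (cma_alloc()
and cma_release() are not exported to modules here) and that
dev_get_cma_area(NULL) returns the default CMA area.

#include <linux/cma.h>
#include <linux/dma-contiguous.h>
#include <linux/init.h>
#include <linux/kthread.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/sizes.h>

#define CMA_STRESS_THREADS	100
#define CMA_STRESS_PAGES	(SZ_2M >> PAGE_SHIFT)	/* 2M per allocation */

/* Each thread repeatedly allocates and frees 2M from the shared CMA area. */
static int cma_stress_thread(void *data)
{
	struct cma *cma = data;

	while (!kthread_should_stop()) {
		struct page *page = cma_alloc(cma, CMA_STRESS_PAGES, 0);

		if (page)
			cma_release(cma, page, CMA_STRESS_PAGES);
		cond_resched();
	}
	return 0;
}

/* Spawn the worker threads once the default CMA area is available. */
static int __init cma_stress_init(void)
{
	struct cma *cma = dev_get_cma_area(NULL);
	int i;

	if (!cma)
		return -ENODEV;

	for (i = 0; i < CMA_STRESS_THREADS; i++)
		kthread_run(cma_stress_thread, cma, "cma-stress/%d", i);

	return 0;	/* sketch only: threads are never stopped or cleaned up */
}
late_initcall(cma_stress_init);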