Date: Fri, 4 Mar 2016 15:38:07 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Hanjun Guo
Cc: Laura Abbott, linux-arm-kernel@lists.infradead.org,
    linux-kernel@vger.kernel.org, Andrew Morton, Sasha Levin,
    Laura Abbott, qiuxishi, Catalin Marinas, Will Deacon,
    Arnd Bergmann, thunder.leizhen@huawei.com, dingtinahong,
    chenjie6@huawei.com, linux-mm@kvack.org
Subject: Re: Suspicious error for CMA stress test
Message-ID: <20160304063807.GA13317@js1304-P5Q-DELUXE>
References: <56D6F008.1050600@huawei.com> <56D79284.3030009@redhat.com>
    <56D832BD.5080305@huawei.com> <20160304020232.GA12036@js1304-P5Q-DELUXE>
    <20160304043232.GC12036@js1304-P5Q-DELUXE> <56D92595.60709@huawei.com>
In-Reply-To: <56D92595.60709@huawei.com>

On Fri, Mar 04, 2016 at 02:05:09PM +0800, Hanjun Guo wrote:
> On 2016/3/4 12:32, Joonsoo Kim wrote:
> > On Fri, Mar 04, 2016 at 11:02:33AM +0900, Joonsoo Kim wrote:
> >> On Thu, Mar 03, 2016 at 08:49:01PM +0800, Hanjun Guo wrote:
> >>> On 2016/3/3 15:42, Joonsoo Kim wrote:
> >>>> 2016-03-03 10:25 GMT+09:00 Laura Abbott:
> >>>>> (cc -mm and Joonsoo Kim)
> >>>>>
> >>>>> On 03/02/2016 05:52 AM, Hanjun Guo wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> I came across a suspicious error during a CMA stress test:
> >>>>>>
> >>>>>> Before the test, I got:
> >>>>>> -bash-4.3# cat /proc/meminfo | grep Cma
> >>>>>> CmaTotal:         204800 kB
> >>>>>> CmaFree:          195044 kB
> >>>>>>
> >>>>>> After running the test:
> >>>>>> -bash-4.3# cat /proc/meminfo | grep Cma
> >>>>>> CmaTotal:         204800 kB
> >>>>>> CmaFree:         6602584 kB
> >>>>>>
> >>>>>> So the freed CMA memory is more than the total..
> >>>>>>
> >>>>>> Also, MemFree is more than MemTotal:
> >>>>>>
> >>>>>> -bash-4.3# cat /proc/meminfo
> >>>>>> MemTotal:       16342016 kB
> >>>>>> MemFree:        22367268 kB
> >>>>>> MemAvailable:   22370528 kB
> >>> [...]
> >>>>> I played with this a bit and can see the same problem. The sanity
> >>>>> check of CmaFree < CmaTotal generally triggers in
> >>>>> __mod_zone_freepage_state in unset_migratetype_isolate.
> >>>>> This also seems to be present as far back as v4.0, which was the
> >>>>> first version to have the updated accounting from Joonsoo.
> >>>>> Were there known limitations with the new freepage accounting,
> >>>>> Joonsoo?
> >>>> I don't know. I also played with this and it looks like there is an
> >>>> accounting problem; however, in my case the number of free pages is
> >>>> slightly less than the total. I will take a look.
> >>>>
> >>>> Hanjun, could you tell me your malloc_size? I tested with 1MB and it
> >>>> doesn't look like your case.
> >>> I tested with a malloc_size of 2MB, and it grows much bigger than with
> >>> 1MB. I also did some other tests:
> >> Thanks! Now I can reproduce the erroneous situation you mentioned.
> >>
> >>> - Run with a single thread 100000 times: everything is fine.
> >>>
> >>> - I hacked cma_alloc() and the free path as below [1] to see if it is a
> >>>   locking issue; with the same test with 100 threads, I got:
> >> [1] would not be sufficient to close this race.
> >>
> >> Try the following change [A]. And, for a more accurate test, I changed the
> >> code a bit more to prevent kernel page allocation from the CMA area [B].
> >> This prevents kernel page allocations from the CMA area completely, so we
> >> can focus on the cma_alloc/cma_release race.
> >>
> >> Although this is not the correct fix, it could help us guess where the
> >> problem is.
> > A more correct fix is something like the change below.
> > Please test it.
>
> Hmm, this is not working:

Sad to hear that.

Could you tell me your system's MAX_ORDER and pageblock_order?

Thanks.
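
For reference, CmaFree in /proc/meminfo is backed by the NR_FREE_CMA_PAGES
vmstat counter, which is updated alongside NR_FREE_PAGES by the freepage
accounting helper Laura pointed at. From memory, it looks roughly like the
sketch below in v4.x kernels (check include/linux/vmstat.h in your tree for
the exact code); an update done with a stale migratetype or a wrong page
count while a pageblock is being isolated/unisolated is enough to push
CmaFree above CmaTotal.

/*
 * Rough sketch of the v4.x freepage accounting helper, quoted from
 * memory rather than copied from any particular tree.
 */
static inline void __mod_zone_freepage_state(struct zone *zone, int nr_pages,
					     int migratetype)
{
	/* Every freepage update adjusts NR_FREE_PAGES ... */
	__mod_zone_page_state(zone, NR_FREE_PAGES, nr_pages);
	/* ... and CMA pageblocks additionally adjust NR_FREE_CMA_PAGES. */
	if (is_migrate_cma(migratetype))
		__mod_zone_page_state(zone, NR_FREE_CMA_PAGES, nr_pages);
}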
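
And if it is easier than digging through your config, a tiny module like the
untested sketch below should print both values at load time (the module and
function names are just placeholders; on the few configs where pageblock_order
is a runtime variable it may not be exported to modules, in which case
building the snippet into the kernel or reading the config is simpler).

/* Untested sketch: print MAX_ORDER and pageblock_order on module load. */
#include <linux/module.h>
#include <linux/mmzone.h>
#include <linux/pageblock-flags.h>

static int __init order_info_init(void)
{
	/* pageblock_order may be a macro or a variable; the cast keeps %d honest. */
	pr_info("MAX_ORDER=%d pageblock_order=%d\n",
		MAX_ORDER, (int)pageblock_order);
	return 0;
}

static void __exit order_info_exit(void)
{
}

module_init(order_info_init);
module_exit(order_info_exit);
MODULE_LICENSE("GPL");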