Date: Mon, 14 Mar 2016 15:49:26 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Vlastimil Babka
Cc: "Leizhen (ThunderTown)", Laura Abbott, Hanjun Guo,
	linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	Andrew Morton, Sasha Levin, qiuxishi, Catalin Marinas, Will Deacon,
	Arnd Bergmann, dingtinahong, chenjie6@huawei.com, linux-mm@kvack.org
Subject: Re: Suspicious error for CMA stress test
Message-ID: <20160314064925.GA27587@js1304-P5Q-DELUXE>
In-Reply-To: <56E2FB5C.1040602@suse.cz>

On Fri, Mar 11, 2016 at 06:07:40PM +0100, Vlastimil Babka wrote:
> On 03/11/2016 04:00 PM, Joonsoo Kim wrote:
> > 2016-03-09 10:23 GMT+09:00 Leizhen (ThunderTown):
> >>
> >> Hi, Joonsoo:
> >> This new patch worked well. Do you plan to upstream it in the near
> >> future?
> >
> > Of course!
> > But I should think about it more, because it touches the allocator's
> > fastpath, which I'd like to avoid.
> > If I fail to come up with a better solution, I will send it as is, soon.
>
> How about something like this? Just an idea, probably buggy (off-by-one
> etc.). Should keep the cost away from the relatively few >pageblock_order
> iterations.

Hmm... I tested this and found that its code size is a little bit larger
than mine. I'm not sure exactly why, but I guess it is related to compiler
optimization. In this case I'm in favor of my implementation, because it
looks like a cleaner abstraction. It adds one unlikely branch to the merge
loop, but the compiler should optimize it so the check is done only once.

Thanks.

> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index ff1e3cbc8956..b8005a07b2a1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -685,21 +685,13 @@ static inline void __free_one_page(struct page *page,
>  	unsigned long combined_idx;
>  	unsigned long uninitialized_var(buddy_idx);
>  	struct page *buddy;
> -	unsigned int max_order = MAX_ORDER;
> +	unsigned int max_order = pageblock_order + 1;
>
>  	VM_BUG_ON(!zone_is_initialized(zone));
>  	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
>
>  	VM_BUG_ON(migratetype == -1);
> -	if (is_migrate_isolate(migratetype)) {
> -		/*
> -		 * We restrict max order of merging to prevent merge
> -		 * between freepages on isolate pageblock and normal
> -		 * pageblock. Without this, pageblock isolation
> -		 * could cause incorrect freepage accounting.
> -		 */
> -		max_order = min_t(unsigned int, MAX_ORDER, pageblock_order + 1);
> -	} else {
> +	if (likely(!is_migrate_isolate(migratetype))) {
>  		__mod_zone_freepage_state(zone, 1 << order, migratetype);
>  	}
>
> @@ -708,11 +700,12 @@ static inline void __free_one_page(struct page *page,
>  	VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
>  	VM_BUG_ON_PAGE(bad_range(zone, page), page);
>
> +continue_merging:
>  	while (order < max_order - 1) {
>  		buddy_idx = __find_buddy_index(page_idx, order);
>  		buddy = page + (buddy_idx - page_idx);
>  		if (!page_is_buddy(page, buddy, order))
> -			break;
> +			goto done_merging;
>  		/*
>  		 * Our buddy is free or it is CONFIG_DEBUG_PAGEALLOC guard page,
>  		 * merge with it and move up one order.
> @@ -729,6 +722,26 @@ static inline void __free_one_page(struct page *page,
>  		page_idx = combined_idx;
>  		order++;
>  	}
> +	if (max_order < MAX_ORDER) {
> +		if (IS_ENABLED(CONFIG_CMA) &&
> +		    unlikely(has_isolate_pageblock(zone))) {
> +
> +			int buddy_mt;
> +
> +			buddy_idx = __find_buddy_index(page_idx, order);
> +			buddy = page + (buddy_idx - page_idx);
> +			buddy_mt = get_pageblock_migratetype(buddy);
> +
> +			if (migratetype != buddy_mt &&
> +			    (is_migrate_isolate(migratetype) ||
> +			     is_migrate_isolate(buddy_mt)))
> +				goto done_merging;
> +		}
> +		max_order++;
> +		goto continue_merging;
> +	}
> +
> +done_merging:
>  	set_page_order(page, order);
>
>  	/*
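To spell out the mechanics both versions rely on: a block's buddy at a
given order differs from the block only in bit 'order' of its page index
(which is what __find_buddy_index() computes), and the merged block starts
at the bitwise AND of the two indexes. Below is a minimal userspace sketch
of the continue_merging/done_merging flow from the patch above; it is not
the kernel code. The MAX_ORDER/PAGEBLOCK_ORDER values are assumed, and
buddy_is_free() and merge_blocked_by_isolation() are toy stand-ins for
page_is_buddy() and the migratetype checks.

/*
 * Userspace model of the capped buddy merge, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

#define MAX_ORDER	11	/* assumed: kernel default */
#define PAGEBLOCK_ORDER	9	/* assumed: 2MB pageblocks, 4K pages */

/* Buddy of a block at 'order' differs only in bit 'order' of its index. */
static unsigned long find_buddy_index(unsigned long page_idx, unsigned int order)
{
	return page_idx ^ (1UL << order);
}

/* Toy predicate: pretend every buddy below order 10 is free. */
static bool buddy_is_free(unsigned long buddy_idx, unsigned int order)
{
	(void)buddy_idx;
	return order < 10;
}

/*
 * Would crossing the pageblock boundary merge an isolated pageblock with
 * a normal one?  Hardcoded false here; the kernel inspects migratetypes.
 */
static bool merge_blocked_by_isolation(unsigned long page_idx, unsigned int order)
{
	(void)page_idx; (void)order;
	return false;
}

static unsigned int merge(unsigned long page_idx)
{
	unsigned int order = 0;
	unsigned int max_order = PAGEBLOCK_ORDER + 1;

continue_merging:
	while (order < max_order - 1) {
		unsigned long buddy_idx = find_buddy_index(page_idx, order);

		if (!buddy_is_free(buddy_idx, order))
			goto done_merging;
		page_idx &= buddy_idx;	/* combined block's first page */
		order++;
	}

	if (max_order < MAX_ORDER) {
		/* Pay for the isolation check at most once, here. */
		if (merge_blocked_by_isolation(page_idx, order))
			goto done_merging;
		max_order++;
		goto continue_merging;
	}

done_merging:
	return order;
}

int main(void)
{
	printf("page 0 merged up to order %u\n", merge(0));
	return 0;
}

The shape of the fix is visible even in this toy: the common path pays only
for the plain while loop, and the isolation check runs at most once, after
merging has already stopped at the pageblock boundary.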