Date: Mon, 7 Dec 2015 17:03:07 +0900
From: Joonsoo Kim <iamjoonsoo.kim@lge.com>
To: Vlastimil Babka
Cc: Andrew Morton, Mel Gorman, Rik van Riel, David Rientjes, Minchan Kim,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH v3 4/7] mm/compaction: update defer counter when allocation is expected to succeed
Message-ID: <20151207080307.GC27292@js1304-P5Q-DELUXE>
References: <1449126681-19647-1-git-send-email-iamjoonsoo.kim@lge.com>
 <1449126681-19647-5-git-send-email-iamjoonsoo.kim@lge.com>
 <5661C4C5.2020901@suse.cz>
In-Reply-To: <5661C4C5.2020901@suse.cz>

On Fri, Dec 04, 2015 at 05:52:21PM +0100, Vlastimil Babka wrote:
> On 12/03/2015 08:11 AM, Joonsoo Kim wrote:
> >It's rather strange that compact_considered and compact_defer_shift aren't
> >updated but compact_order_failed is updated when allocation is expected
> >to succeed. Regardless of actual allocation success, deferring for the
> >current order will be disabled, so it doesn't make much difference to
> >compaction behaviour.
>
> The difference is that if the defer reset was wrong, the next
> compaction attempt that fails would resume the deferred counters?

Right. But, perhaps, if we wrongly reset order_failed due to a
difference in check criteria, it could happen again and again on the
next compaction attempts, so deferring would not work as intended.
(A toy model of these counters is appended at the end of this mail.)

> >Moreover, in the past, there was a gap between the expectation of
> >allocation success in compaction and actual success in the page
> >allocator. But, now, this gap is diminished because classzone_idx and
> >alloc_flags are provided to the watermark check in compaction, and the
> >watermark check criteria for high-order allocations have changed.
> >Therefore, it's not a big problem to update the defer counter when
> >allocation is expected to succeed. This change will help to simplify
> >the defer logic.
>
> I guess that's true. But at least some experiment would be better.

Yeah, I tested it today and found that there is a difference:
allocation succeeds slightly more often (really minor, 0.25%) than the
check in compaction predicts. The reason is that the watermark check
in try_to_compact_pages() uses low_wmark_pages, but
get_page_from_freelist() after direct compaction uses min_wmark_pages.
When I change low_wmark_pages to min_wmark_pages, I can't find any
difference. It seems reasonable to change low_wmark_pages to
min_wmark_pages in the places where the compaction finish condition is
checked. I will add that patch in the next spin.
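To illustrate the mismatch, here is a minimal standalone sketch (plain
userspace C, not the kernel code; struct zone_model and watermark_ok()
are made-up stand-ins for struct zone and zone_watermark_ok()):

#include <stdbool.h>
#include <stdio.h>

/* Made-up stand-in for struct zone: only the fields this model needs. */
struct zone_model {
	unsigned long free_pages;
	unsigned long min_wmark;	/* min_wmark_pages(zone) */
	unsigned long low_wmark;	/* low_wmark_pages(zone) */
};

/* Toy stand-in for zone_watermark_ok(): are there enough free pages? */
static bool watermark_ok(const struct zone_model *z, unsigned long mark)
{
	return z->free_pages > mark;
}

int main(void)
{
	/* min_wmark < free_pages <= low_wmark: the two checks disagree. */
	struct zone_model z = {
		.free_pages = 900, .min_wmark = 800, .low_wmark = 1000,
	};

	/* try_to_compact_pages() checks against the low watermark... */
	printf("compaction predicts success: %d\n",
	       (int)watermark_ok(&z, z.low_wmark));	/* prints 0 */

	/* ...but get_page_from_freelist() checks against the min one. */
	printf("allocation actually succeeds: %d\n",
	       (int)watermark_ok(&z, z.min_wmark));	/* prints 1 */
	return 0;
}

Since min_wmark_pages is below low_wmark_pages, the allocator's
post-compaction check is the more permissive one, which matches the
0.25% of allocations above that succeed even though compaction's check
said they would not.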
> >Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> >---
> > include/linux/compaction.h |  2 --
> > mm/compaction.c            | 27 ++++++++-------------------
> > mm/page_alloc.c            |  1 -
> > 3 files changed, 8 insertions(+), 22 deletions(-)
>
> ...
>
> >diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> >index 7002c66..f3605fd 100644
> >--- a/mm/page_alloc.c
> >+++ b/mm/page_alloc.c
> >@@ -2815,7 +2815,6 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
> > 		struct zone *zone = page_zone(page);
> >
> > 		zone->compact_blockskip_flush = false;
>
> While we are here, I wonder if this is useful at all?

I think it's useful. We still have some cases where compaction
completes prematurely (e.g. async compaction). In that case, if the
next sync compaction succeeds, compact_blockskip_flush is cleared, so
the pageblock skip bits will not be reset and the overhead is reduced.

Thanks.
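P.S. For anyone following along, here is a standalone toy model of the
deferral counters discussed at the top of this thread (plain userspace
C, not the actual mm/compaction.c code; struct defer_state is a made-up
container whose fields mirror the per-zone compact_considered,
compact_defer_shift and compact_order_failed, and the function bodies
are only an approximation):

#include <stdbool.h>
#include <stdio.h>

#define COMPACT_MAX_DEFER_SHIFT 6

/* Made-up container for the three per-zone deferral fields. */
struct defer_state {
	unsigned int considered;	/* compact_considered */
	unsigned int defer_shift;	/* compact_defer_shift */
	int order_failed;		/* compact_order_failed */
};

/* After a failed compaction: back off exponentially. */
static void defer_compaction(struct defer_state *d, int order)
{
	d->considered = 0;
	if (d->defer_shift < COMPACT_MAX_DEFER_SHIFT)
		d->defer_shift++;
	if (order < d->order_failed)
		d->order_failed = order;
}

/* Should this compaction attempt be skipped? */
static bool compaction_deferred(struct defer_state *d, int order)
{
	unsigned int limit = 1U << d->defer_shift;

	if (order < d->order_failed)
		return false;	/* smaller orders are never deferred */
	if (++d->considered >= limit)
		return false;	/* backoff expired, try compaction again */
	return true;
}

/*
 * Reset when allocation is expected to succeed. The patch under
 * discussion resets considered/defer_shift here as well, not just
 * order_failed. If order_failed is raised based on a check that
 * disagrees with the allocator's (the low vs. min watermark issue
 * above), compaction_deferred() keeps returning false for that order
 * even though compaction keeps failing, so deferring stops working --
 * the concern raised earlier in this mail.
 */
static void compaction_defer_reset(struct defer_state *d, int order)
{
	d->considered = 0;
	d->defer_shift = 0;
	if (order >= d->order_failed)
		d->order_failed = order + 1;
}

int main(void)
{
	struct defer_state d = { .order_failed = 9 };	/* nothing failed yet */
	int order = 4, i;

	defer_compaction(&d, order);	/* first failure: skip 1<<1 attempts */
	for (i = 0; i < 3; i++)
		printf("attempt %d deferred: %d\n", i,
		       (int)compaction_deferred(&d, order));
	/* prints 1, 0, 0: one attempt skipped, then compaction runs again */
	return 0;
}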