Subject: Re: [PATCH v3 7/7] mm/compaction: replace compaction deferring with compaction limit
To: Joonsoo Kim, Andrew Morton
References: <1449126681-19647-1-git-send-email-iamjoonsoo.kim@lge.com> <1449126681-19647-8-git-send-email-iamjoonsoo.kim@lge.com>
Cc: Mel Gorman, Rik van Riel, David Rientjes, Minchan Kim, linux-kernel@vger.kernel.org, linux-mm@kvack.org, Joonsoo Kim
From: Vlastimil Babka
Message-ID: <566E91F4.3000709@suse.cz>
Date: Mon, 14 Dec 2015 10:55:00 +0100
In-Reply-To: <1449126681-19647-8-git-send-email-iamjoonsoo.kim@lge.com>

On 12/03/2015 08:11 AM, Joonsoo Kim wrote:
> Compaction deferring effectively reduces compaction overhead when
> compaction success isn't expected, but it is implemented by skipping
> a number of compaction requests until compaction is re-enabled. With
> this implementation, one unfortunate compaction requestor bears the
> whole compaction overhead while the others pay none. And once
> deferring has started, we keep skipping compaction for some number of
> requests even if the likelihood of compaction success has been
> restored.
>
> This patch tries to solve the above problem by using a compaction
> limit. Instead of imposing the compaction overhead on one unfortunate
> requestor, the compaction limit distributes the overhead across all
> compaction requestors. Every requestor gets a chance to migrate some
> amount of pages, and once the limit is exhausted compaction is
> stopped. This distributes the overhead fairly among all compaction
> requestors. And, because compaction requests are no longer deferred,
> someone will succeed in compacting as soon as compaction success
> becomes possible again.
>
> The whole workflow enabled by this change is as follows:
>
> - if sync compaction fails, compact_order_failed is set to the current order
> - if it fails again, compact_defer_shift is adjusted
> - with a positive compact_defer_shift, migration_scan_limit is assigned and
>   the compaction limit is activated
> - if the compaction limit is activated, compaction is stopped when
>   migration_scan_limit is exhausted
> - on success, compact_defer_shift and compact_order_failed are reset and
>   the compaction limit is deactivated
> - compact_defer_shift can grow up to COMPACT_MAX_DEFER_SHIFT
>
> Most of the changes are mechanical ones to remove compact_considered,
> which is not needed now. Note that, after a restart, compact_defer_shift
> is decremented by 1 to avoid invoking __reset_isolation_suitable()
> repeatedly.
>
> I tested this patch with my compaction benchmark and found that
> high-order allocation latency is evenly distributed and there is no
> latency spike in situations where compaction success isn't possible.
>
> Signed-off-by: Joonsoo Kim

Looks fine overall, looking forward to the next version :)
(Due to changes expected in the preceding patches, I didn't review the
code fully this time.)
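
To check my reading of the new flow, here is a rough stand-alone model
of how I understand the changelog. Everything below (struct zone_state,
SCAN_LIMIT_BASE, the helper names, and the assumption that a larger
compact_defer_shift shrinks the scan budget) is my own placeholder
sketch for discussion, not the identifiers or exact logic of the patch:

/*
 * Stand-alone model of the limit-based flow described in the changelog.
 * All names and constants here are illustrative assumptions.
 */
#include <stdbool.h>
#include <limits.h>
#include <stdio.h>

#define COMPACT_MAX_DEFER_SHIFT 6U
#define SCAN_LIMIT_BASE         (1UL << 12)   /* assumed base scan budget */

struct zone_state {
        int           compact_order_failed;   /* INT_MAX: no failed order yet */
        unsigned int  compact_defer_shift;
        bool          limit_active;
        unsigned long migration_scan_limit;
};

/* Called when sync compaction fails for 'order'. */
static void compaction_failed(struct zone_state *z, int order)
{
        if (order < z->compact_order_failed) {
                z->compact_order_failed = order;      /* remember first failing order */
                return;
        }
        /* Repeated failure at this order: raise the shift, (re)arm the limit. */
        if (z->compact_defer_shift < COMPACT_MAX_DEFER_SHIFT)
                z->compact_defer_shift++;
        z->limit_active = true;
        /* Assumption: a larger shift means a smaller shared scan budget. */
        z->migration_scan_limit = SCAN_LIMIT_BASE >> z->compact_defer_shift;
}

/* Each requestor consumes part of the shared budget instead of being skipped. */
static bool should_stop_compaction(struct zone_state *z, unsigned long nr_scanned)
{
        if (!z->limit_active)
                return false;                         /* no limit, run to completion */
        if (nr_scanned >= z->migration_scan_limit)
                return true;                          /* budget exhausted, stop this run */
        z->migration_scan_limit -= nr_scanned;
        return false;
}

/* Called when compaction succeeds: reset state, deactivate the limit. */
static void compaction_succeeded(struct zone_state *z)
{
        z->compact_order_failed = INT_MAX;
        z->compact_defer_shift = 0;
        z->limit_active = false;
}

int main(void)
{
        struct zone_state z = { .compact_order_failed = INT_MAX };

        compaction_failed(&z, 9);                     /* first failure: remember order */
        compaction_failed(&z, 9);                     /* second failure: arm the limit */
        printf("budget: %lu\n", z.migration_scan_limit);
        printf("stop after scanning 4096? %d\n", should_stop_compaction(&z, 4096));
        compaction_succeeded(&z);
        return 0;
}

If the shift-to-budget mapping in the patch differs from the right-shift
above, the rest of the sketch should still hold: requestors share and
consume a scan budget instead of some of them being skipped outright.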