From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: David Rientjes, Andrea Arcangeli, Vlastimil Babka,
    Linux List Kernel Mailing, Linux-MM, Mel Gorman
Subject: [PATCH 17/22] mm, compaction: Do not consider a need to reschedule as contention
Date: Fri, 18 Jan 2019 17:51:31 +0000
Message-Id: <20190118175136.31341-18-mgorman@techsingularity.net>
In-Reply-To: <20190118175136.31341-1-mgorman@techsingularity.net>
References: <20190118175136.31341-1-mgorman@techsingularity.net>
X-Mailing-List: linux-kernel@vger.kernel.org

Scanning on large machines can take a considerable length of time and
eventually needs to be rescheduled. This is treated as an abort event but
that's not appropriate as the attempt is likely to be retried after making
numerous checks and taking another cycle through the page allocator. This
patch will check the need to reschedule if necessary but continue the
scanning.

The main benefit is reduced scanning when compaction is taking a long time
or the machine is over-saturated. It also avoids an unnecessary exit of
compaction that ends up being retried by the page allocator in the outer
loop.

                                     5.0.0-rc1              5.0.0-rc1
                              synccached-v3r16        noresched-v3r17
Amean     fault-both-1         0.00 (   0.00%)        0.00 *   0.00%*
Amean     fault-both-3      2958.27 (   0.00%)     2965.68 (  -0.25%)
Amean     fault-both-5      4091.90 (   0.00%)     3995.90 (   2.35%)
Amean     fault-both-7      5803.05 (   0.00%)     5842.12 (  -0.67%)
Amean     fault-both-12     9481.06 (   0.00%)     9550.87 (  -0.74%)
Amean     fault-both-18    14141.51 (   0.00%)    13304.72 (   5.92%)
Amean     fault-both-24    16438.00 (   0.00%)    14618.59 (  11.07%)
Amean     fault-both-30    17531.72 (   0.00%)    16650.96 (   5.02%)
Amean     fault-both-32    17101.96 (   0.00%)    17145.15 (  -0.25%)

Signed-off-by: Mel Gorman
Acked-by: Vlastimil Babka
---
 mm/compaction.c | 23 ++++-------------------
 1 file changed, 4 insertions(+), 19 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 9aa71945255d..293d9a9e6f00 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -404,21 +404,6 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
-/*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and records async compaction as contended if necessary.
- */
-static inline void compact_check_resched(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
-}
-
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -447,7 +432,7 @@ static bool compact_unlock_should_abort(spinlock_t *lock,
 		return true;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	return false;
 }
@@ -736,7 +721,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		return 0;
 	}
 
-	compact_check_resched(cc);
+	cond_resched();
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1370,7 +1355,7 @@ static void isolate_freepages(struct compact_control *cc)
 		 * suitable migration targets, so periodically check resched.
 		 */
 		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1664,7 +1649,7 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		 * need to schedule.
 		 */
 		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
-			compact_check_resched(cc);
+			cond_resched();
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
-- 
2.16.4
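[Editor's note] For readers following along outside the kernel tree, the behavioural change can be sketched in plain userspace C. This is a simulation, not kernel code: the pending-reschedule flag stands in for need_resched()/cond_resched(), and the struct only mirrors the fields of the kernel's compact_control that matter here.

```c
#include <stdbool.h>

/* Modelled after the kernel's migrate modes; only ASYNC matters here. */
enum migrate_mode { MIGRATE_ASYNC, MIGRATE_SYNC_LIGHT, MIGRATE_SYNC };

/* Minimal stand-in for the kernel's struct compact_control. */
struct compact_control {
	enum migrate_mode mode;
	bool contended;
};

/*
 * Old behaviour (the helper the patch removes): a pending reschedule
 * marks async compaction as contended, which callers treat as an
 * abort event.  Returns whether the scan should abort.
 */
static bool old_resched_aborts(struct compact_control *cc, bool resched_needed)
{
	if (resched_needed && cc->mode == MIGRATE_ASYNC)
		cc->contended = true;
	return cc->contended;
}

/*
 * New behaviour: the scanner just yields the CPU (cond_resched() in
 * the kernel) and carries on; a pending reschedule alone never sets
 * contended, so the scan is not aborted.
 */
static bool new_resched_aborts(struct compact_control *cc, bool resched_needed)
{
	(void)resched_needed;	/* cond_resched() would run here */
	return cc->contended;
}
```

The point of the sketch: with the old helper, every need_resched() on an async scan flipped `contended` and bubbled an abort up through the page allocator, which would then retry anyway; with the new code the scan yields and continues.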