From: Mel Gorman <mgorman@techsingularity.net>
To: Andrew Morton
Cc: David Rientjes, Andrea Arcangeli, Vlastimil Babka, Linux List Kernel Mailing, Linux-MM, Mel Gorman
Subject: [PATCH 16/22] mm, compaction: Rework compact_should_abort as compact_check_resched
Date: Fri, 18 Jan 2019 17:51:30 +0000
Message-Id: <20190118175136.31341-17-mgorman@techsingularity.net>
In-Reply-To: <20190118175136.31341-1-mgorman@techsingularity.net>
References: <20190118175136.31341-1-mgorman@techsingularity.net>
X-Mailing-List: linux-kernel@vger.kernel.org

With incremental changes, compact_should_abort no longer makes any
documented sense. Rename it to compact_check_resched and update the
associated comments. There is no benefit other than reducing redundant
code and making the intent slightly clearer. It could potentially be
merged with earlier patches, but that would make the review slightly
harder.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka
---
 mm/compaction.c | 61 ++++++++++++++++++++++-----------------------------------
 1 file changed, 23 insertions(+), 38 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 829540f6f3da..9aa71945255d 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -404,6 +404,21 @@ static bool compact_lock_irqsave(spinlock_t *lock, unsigned long *flags,
 	return true;
 }
 
+/*
+ * Aside from avoiding lock contention, compaction also periodically checks
+ * need_resched() and records async compaction as contended if necessary.
+ */
+static inline void compact_check_resched(struct compact_control *cc)
+{
+	/* async compaction aborts if contended */
+	if (need_resched()) {
+		if (cc->mode == MIGRATE_ASYNC)
+			cc->contended = true;
+
+		cond_resched();
+	}
+}
+
 /*
  * Compaction requires the taking of some coarse locks that are potentially
  * very heavily contended. The lock should be periodically unlocked to avoid
@@ -432,33 +447,7 @@ static bool compact_unlock_should_abort(spinlock_t *lock,
 		return true;
 	}
 
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-		cond_resched();
-	}
-
-	return false;
-}
-
-/*
- * Aside from avoiding lock contention, compaction also periodically checks
- * need_resched() and either schedules in sync compaction or aborts async
- * compaction. This is similar to what compact_unlock_should_abort() does, but
- * is used where no lock is concerned.
- *
- * Returns false when no scheduling was needed, or sync compaction scheduled.
- * Returns true when async compaction should abort.
- */
-static inline bool compact_should_abort(struct compact_control *cc)
-{
-	/* async compaction aborts if contended */
-	if (need_resched()) {
-		if (cc->mode == MIGRATE_ASYNC)
-			cc->contended = true;
-
-		cond_resched();
-	}
+	compact_check_resched(cc);
 
 	return false;
 }
@@ -747,8 +736,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		return 0;
 	}
 
-	if (compact_should_abort(cc))
-		return 0;
+	compact_check_resched(cc);
 
 	if (cc->direct_compaction && (cc->mode == MIGRATE_ASYNC)) {
 		skip_on_failure = true;
@@ -1379,12 +1367,10 @@ static void isolate_freepages(struct compact_control *cc)
 				isolate_start_pfn = block_start_pfn) {
 		/*
 		 * This can iterate a massively long zone without finding any
-		 * suitable migration targets, so periodically check if we need
-		 * to schedule, or even abort async compaction.
+		 * suitable migration targets, so periodically check resched.
 		 */
-		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-						&& compact_should_abort(cc))
-			break;
+		if (!(block_start_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
@@ -1675,11 +1661,10 @@ static isolate_migrate_t isolate_migratepages(struct zone *zone,
 		/*
 		 * This can potentially iterate a massively long zone with
 		 * many pageblocks unsuitable, so periodically check if we
-		 * need to schedule, or even abort async compaction.
+		 * need to schedule.
 		 */
-		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages))
-					&& compact_should_abort(cc))
-			break;
+		if (!(low_pfn % (SWAP_CLUSTER_MAX * pageblock_nr_pages)))
+			compact_check_resched(cc);
 
 		page = pageblock_pfn_to_page(block_start_pfn, block_end_pfn,
 									zone);
-- 
2.16.4