Date: Thu, 25 Jan 2018 15:21:44 +0800
From: Aaron Lu
To: Mel Gorman
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen, Michal Hocko, Vlastimil Babka
Subject: [PATCH v2 1/2] free_pcppages_bulk: do not hold lock when picking pages to free
Message-ID: <20180125072144.GA27678@intel.com>
References: <20180124023050.20097-1-aaron.lu@intel.com> <20180124163926.c7ptagn655aeiut3@techsingularity.net>
In-Reply-To: <20180124163926.c7ptagn655aeiut3@techsingularity.net>
User-Agent: Mutt/1.9.1 (2017-09-22)
When freeing a batch of pages from the per-CPU pages (PCP) lists back to buddy, zone->lock is taken first and pages are then picked from the PCP's migratetype lists while the lock is held. There is actually no need to do this picking under the lock: these are PCP pages, so the only CPU that can touch them is the current one, and irqs are also disabled. Moving this part outside the lock reduces lock hold time and improves performance.

Test with will-it-scale/page_fault1 full load:

kernel        Broadwell(2S)   Skylake(2S)     Broadwell(4S)   Skylake(4S)
v4.15-rc4     9037332         8000124         13642741        15728686
this patch    9608786 +6.3%   8368915 +4.6%   14042169 +2.9%  17433559 +10.8%

What the test does is: start $nr_cpu processes, each of which repeatedly does the following for 5 minutes:
1 mmap 128M of anonymous space;
2 write to that space;
3 munmap.
The score is the aggregated iteration count.

https://github.com/antonblanchard/will-it-scale/blob/master/tests/page_fault1.c

Acked-by: Mel Gorman
Signed-off-by: Aaron Lu
---
v2: use LIST_HEAD(head) as suggested by Mel Gorman.
 mm/page_alloc.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4093728f292e..c9e5ded39b16 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1113,12 +1113,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 	int migratetype = 0;
 	int batch_free = 0;
 	bool isolated_pageblocks;
-
-	spin_lock(&zone->lock);
-	isolated_pageblocks = has_isolate_pageblock(zone);
+	struct page *page, *tmp;
+	LIST_HEAD(head);
 
 	while (count) {
-		struct page *page;
 		struct list_head *list;
 
 		/*
@@ -1140,26 +1138,31 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			batch_free = count;
 
 		do {
-			int mt;	/* migratetype of the to-be-freed page */
-
 			page = list_last_entry(list, struct page, lru);
 			/* must delete as __free_one_page list manipulates */
 			list_del(&page->lru);
 
-			mt = get_pcppage_migratetype(page);
-			/* MIGRATE_ISOLATE page should not go to pcplists */
-			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
-			/* Pageblock could have been isolated meanwhile */
-			if (unlikely(isolated_pageblocks))
-				mt = get_pageblock_migratetype(page);
-
 			if (bulkfree_pcp_prepare(page))
 				continue;
 
-			__free_one_page(page, page_to_pfn(page), zone, 0, mt);
-			trace_mm_page_pcpu_drain(page, 0, mt);
+			list_add_tail(&page->lru, &head);
 		} while (--count && --batch_free && !list_empty(list));
 	}
+
+	spin_lock(&zone->lock);
+	isolated_pageblocks = has_isolate_pageblock(zone);
+
+	list_for_each_entry_safe(page, tmp, &head, lru) {
+		int mt = get_pcppage_migratetype(page);
+		/* MIGRATE_ISOLATE page should not go to pcplists */
+		VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
+		/* Pageblock could have been isolated meanwhile */
+		if (unlikely(isolated_pageblocks))
+			mt = get_pageblock_migratetype(page);
+
+		__free_one_page(page, page_to_pfn(page), zone, 0, mt);
+		trace_mm_page_pcpu_drain(page, 0, mt);
+	}
 	spin_unlock(&zone->lock);
 }
-- 
2.14.3