Date: Wed, 24 Jan 2018 16:40:06 +0000
From: Mel Gorman
To: Aaron Lu
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
	Huang Ying, Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen,
	Michal Hocko, Vlastimil Babka
Subject: Re: [PATCH 1/2] free_pcppages_bulk: do not hold lock when picking pages to free
Message-ID: <20180124163926.c7ptagn655aeiut3@techsingularity.net>
References: <20180124023050.20097-1-aaron.lu@intel.com>
In-Reply-To: <20180124023050.20097-1-aaron.lu@intel.com>
User-Agent: NeoMutt/20170912 (1.9.0)
On Wed, Jan 24, 2018 at 10:30:49AM +0800, Aaron Lu wrote:
> When freeing a batch of pages from the Per-CPU-Pages (PCP) lists back to
> the buddy allocator, zone->lock is held while pages are chosen from the
> PCP's migratetype lists. There is actually no need to do this 'choose'
> part under the lock: these are PCP pages, the only CPU that can touch
> them is us, and irqs are disabled.
>
> Moving this part outside the lock reduces lock hold time and improves
> performance. Tested with will-it-scale/page_fault1 at full load:
>
> kernel        Broadwell(2S)  Skylake(2S)    Broadwell(4S)  Skylake(4S)
> v4.15-rc4     9037332        8000124        13642741       15728686
> this patch    9608786 +6.3%  8368915 +4.6%  14042169 +2.9% 17433559 +10.8%
>
> What the test does is: start $nr_cpu processes, each of which repeatedly
> does the following for 5 minutes:
> 1 mmap 128M of anonymous space;
> 2 write-access that space;
> 3 munmap.
> The score is the aggregated iteration count.
>
> https://github.com/antonblanchard/will-it-scale/blob/master/tests/page_fault1.c
>
> Signed-off-by: Aaron Lu <aaron.lu@intel.com>
> ---
>  mm/page_alloc.c | 33 +++++++++++++++++++--------------
>  1 file changed, 19 insertions(+), 14 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 4093728f292e..a076f754dac1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1113,12 +1113,12 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	int migratetype = 0;
>  	int batch_free = 0;
>  	bool isolated_pageblocks;
> +	struct list_head head;
> +	struct page *page, *tmp;
>  
> -	spin_lock(&zone->lock);
> -	isolated_pageblocks = has_isolate_pageblock(zone);
> +	INIT_LIST_HEAD(&head);
>

Declare head as LIST_HEAD(head) and avoid the INIT_LIST_HEAD. Otherwise I
think this is safe.

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs