Date: Fri, 23 Feb 2018 09:42:46 +0800
From: Aaron Lu
To: Matthew Wilcox
Cc: Mel Gorman, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang, Tim Chen,
	Andi Kleen, Michal Hocko, Vlastimil Babka
Subject: Re: [PATCH v2 1/2] free_pcppages_bulk: do not hold lock when picking pages to free
Message-ID: <20180223014245.GB4338@intel.com>
References: <20180124023050.20097-1-aaron.lu@intel.com>
	<20180124163926.c7ptagn655aeiut3@techsingularity.net>
	<20180125072144.GA27678@intel.com>
	<20180215124644.GA12360@bombadil.infradead.org>
In-Reply-To: <20180215124644.GA12360@bombadil.infradead.org>

On Thu, Feb 15, 2018 at 04:46:44AM -0800, Matthew Wilcox wrote:
> On Thu, Jan 25, 2018 at 03:21:44PM +0800, Aaron Lu wrote:
> > When freeing a batch of pages from the Per-CPU-Pages (PCP) lists back
> > to buddy, zone->lock is held while the pages are chosen from the PCP's
> > migratetype lists. There is actually no need to do this "choosing" part
> > under the lock: these are PCP pages, so the only CPU that can touch
> > them is us, and irqs are disabled as well.
>
> I have no objection to this patch.  If you're looking for ideas for
> future improvement though, I wonder whether using a LIST_HEAD is the
> best way to store these pages temporarily.  If you batch them into a
> pagevec and then free the entire pagevec, the CPU should be a little
> faster scanning a short array than walking a linked list.

Thanks for the suggestion.

> It would also put a hard boundary on how long zone->lock is held, as
> you'd drop it and go back for another batch after 15 pages.  That might
> be bad, of course.

Yes, that's a concern. As Mel responded in another email, I think I'll
just keep using a list here.

> Another minor change I'd like to see is free_pcppages_bulk updating
> pcp->count itself; all of the callers do it currently.

Sounds good, I'll prepare a separate patch for this, thanks!
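
For reference, a minimal user-space sketch of the pattern under discussion:
pick the pages to free off the per-CPU list first, without the zone lock,
then take the lock only for the actual hand-back to buddy, and let the bulk
free routine update pcp->count itself. All types, locks and helper names
below are simplified stand-ins, not the real mm/page_alloc.c code.

	/*
	 * Simplified model: step 1 picks pages with no lock held (safe in
	 * the kernel because the PCP list is strictly per-CPU and irqs are
	 * off); step 2 holds the "zone lock" only while freeing to buddy.
	 */
	#include <pthread.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct page {                   /* stand-in for struct page */
		int id;
		struct page *next;      /* stand-in for page->lru linkage */
	};

	struct pcp_lists {              /* stand-in for per_cpu_pages */
		struct page *head;
		int count;
	};

	static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER;

	/* Stand-in for __free_one_page(); called with zone_lock held. */
	static void free_one_page_locked(struct page *page)
	{
		printf("freeing page %d to buddy\n", page->id);
		free(page);
	}

	static void free_pcppages_bulk(struct pcp_lists *pcp, int count)
	{
		struct page *to_free = NULL;

		/* Step 1: choose pages to free, no lock held. */
		while (count-- && pcp->head) {
			struct page *page = pcp->head;

			pcp->head = page->next;
			pcp->count--;   /* update pcp->count here, not in callers */

			page->next = to_free;
			to_free = page;
		}

		/* Step 2: hold the lock only for the actual freeing. */
		pthread_mutex_lock(&zone_lock);
		while (to_free) {
			struct page *page = to_free;

			to_free = page->next;
			free_one_page_locked(page);
		}
		pthread_mutex_unlock(&zone_lock);
	}

	int main(void)
	{
		struct pcp_lists pcp = { .head = NULL, .count = 0 };

		/* Populate a small per-CPU list. */
		for (int i = 0; i < 5; i++) {
			struct page *page = malloc(sizeof(*page));

			page->id = i;
			page->next = pcp.head;
			pcp.head = page;
			pcp.count++;
		}

		free_pcppages_bulk(&pcp, 3);
		printf("pcp->count is now %d\n", pcp.count);
		return 0;
	}

Matthew's pagevec suggestion would replace the local to_free list with a
fixed-size array (around 15 entries, per his note), which bounds how many
pages are handed back per acquisition of the lock.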