Date: Thu, 15 Feb 2018 14:55:23 +0000
From: Mel Gorman <mgorman@techsingularity.net>
To: Matthew Wilcox
Cc: Aaron Lu, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen, Michal Hocko, Vlastimil Babka
Subject: Re: [PATCH v2 1/2] free_pcppages_bulk: do not hold lock when picking pages to free
Message-ID: <20180215145523.btoutbrskdvizkqk@techsingularity.net>
References: <20180124023050.20097-1-aaron.lu@intel.com> <20180124163926.c7ptagn655aeiut3@techsingularity.net> <20180125072144.GA27678@intel.com> <20180215124644.GA12360@bombadil.infradead.org>
In-Reply-To: <20180215124644.GA12360@bombadil.infradead.org>

On Thu, Feb 15, 2018 at 04:46:44AM -0800, Matthew Wilcox wrote:
> On Thu, Jan 25, 2018 at 03:21:44PM +0800, Aaron Lu wrote:
> > When freeing a batch of pages from the Per-CPU-Pages (PCP) lists back
> > to the buddy allocator, zone->lock is held while the pages are picked
> > off the PCP's migratetype lists. There is actually no need to do this
> > 'choose' part under the lock: these are PCP pages, the only CPU that
> > can touch them is us, and irqs are disabled.
>
> I have no objection to this patch. If you're looking for ideas for
> future improvement though, I wonder whether using a LIST_HEAD is the
> best way to store these pages temporarily. If you batch them into a
> pagevec and then free the entire pagevec, the CPU should be a little
> faster scanning a short array than walking a linked list.
>
> It would also put a hard boundary on how long zone->lock is held, as
> you'd drop it and go back for another batch after 15 pages. That might
> be bad, of course.

It's not a guaranteed win. You're trading a list traversal for 128 bytes
of extra stack usage (unless you put a static pagevec in the pgdat and
incur a cache miss penalty instead, or use a per-cpu pagevec and increase
memory consumption), plus two spinlock acquire/release operations per
batch in the common case, which may or may not be contended. It might
make more sense if the PCP lists themselves were statically sized, but
that would limit tuning options and increase the per-cpu footprint of
the pcp structures. Maybe I'm missing something obvious and it really
would be a universal win, but right now I find that hard to imagine.

> Another minor change I'd like to see is free_pcppages_bulk updating
> pcp->count itself; all of the callers do it currently.
That should be reasonable; it's not even particularly difficult.

-- 
Mel Gorman
SUSE Labs