Date: Tue, 27 Feb 2018 11:12:44 +0800
From: Aaron Lu
To: David Rientjes
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Andrew Morton,
	Huang Ying, Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen,
	Michal Hocko, Vlastimil Babka, Mel Gorman, Matthew Wilcox
Subject: Re: [PATCH v3 1/3] mm/free_pcppages_bulk: update pcp->count inside
Message-ID: <20180227031244.GA28977@intel.com>
References: <20180226135346.7208-1-aaron.lu@intel.com>
	<20180226135346.7208-2-aaron.lu@intel.com>
	<20180227015613.GA9141@intel.com>
In-Reply-To: <20180227015613.GA9141@intel.com>

On Tue, Feb 27, 2018 at 09:56:13AM +0800, Aaron Lu wrote:
> On Mon, Feb 26, 2018 at 01:48:14PM -0800, David Rientjes wrote:
> > On Mon, 26 Feb 2018, Aaron Lu wrote:
> >
> > > Matthew Wilcox found that all callers of free_pcppages_bulk() currently
> > > update pcp->count immediately after, so it's natural to do it inside
> > > free_pcppages_bulk().
> > >
> > > No functionality or performance change is expected from this patch.
> > >
> > > Suggested-by: Matthew Wilcox
> > > Signed-off-by: Aaron Lu
> > > ---
> > >  mm/page_alloc.c | 10 +++-------
> > >  1 file changed, 3 insertions(+), 7 deletions(-)
> > >
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index cb416723538f..3154859cccd6 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -1117,6 +1117,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> > >  	int batch_free = 0;
> > >  	bool isolated_pageblocks;
> > >  
> > > +	pcp->count -= count;
> > >  	spin_lock(&zone->lock);
> > >  	isolated_pageblocks = has_isolate_pageblock(zone);
> > >  
> >
> > Why modify pcp->count before the pages have actually been freed?
>
> When count is still count and not zero after pages have actually been
> freed :-)
>
> >
> > I doubt that it matters too much, but at least /proc/zoneinfo uses
> > zone->lock. I think it should be done after the lock is dropped.
>
> Agree that it looks a bit weird to do it beforehand; I just want to
> avoid adding one more local variable here.
>
> pcp->count is not protected by zone->lock though, so even if we do it
> after dropping the lock, it could still happen that zoneinfo shows a
> wrong value of pcp->count when it should be zero (this isn't a problem
> since zoneinfo doesn't need to be precise).
>
> Anyway, I'll follow your suggestion here to avoid confusion.

What about this: update pcp->count when a page is dropped off the pcp list.

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index cb416723538f..faa33eac1635 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1148,6 +1148,7 @@ static void free_pcppages_bulk(struct zone *zone, int count,
 			page = list_last_entry(list, struct page, lru);
 			/* must delete as __free_one_page list manipulates */
 			list_del(&page->lru);
+			pcp->count--;
 
 			mt = get_pcppage_migratetype(page);
 			/* MIGRATE_ISOLATE page should not go to pcplists */
@@ -2416,10 +2417,8 @@ void drain_zone_pages(struct zone *zone, struct per_cpu_pages *pcp)
 	local_irq_save(flags);
 	batch = READ_ONCE(pcp->batch);
 	to_drain = min(pcp->count, batch);
-	if (to_drain > 0) {
+	if (to_drain > 0)
 		free_pcppages_bulk(zone, to_drain, pcp);
-		pcp->count -= to_drain;
-	}
 	local_irq_restore(flags);
 }
 #endif
@@ -2441,10 +2440,8 @@ static void drain_pages_zone(unsigned int cpu, struct zone *zone)
 	pset = per_cpu_ptr(zone->pageset, cpu);
 
 	pcp = &pset->pcp;
-	if (pcp->count) {
+	if (pcp->count)
 		free_pcppages_bulk(zone, pcp->count, pcp);
-		pcp->count = 0;
-	}
 	local_irq_restore(flags);
 }
 
@@ -2668,7 +2665,6 @@ static void free_unref_page_commit(struct page *page, unsigned long pfn)
 	if (pcp->count >= pcp->high) {
 		unsigned long batch = READ_ONCE(pcp->batch);
 		free_pcppages_bulk(zone, batch, pcp);
-		pcp->count -= batch;
 	}
 }
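
To illustrate the invariant the diff above establishes, here is a small
stand-alone sketch in plain user-space C (not kernel code; the names
pcp_list, item and bulk_free are invented for illustration only): the
count is decremented at the moment an entry leaves the list, so callers
never have to adjust it after calling the bulk-free routine.

/* Simplified sketch of "update the count when a page is dropped off the list". */
#include <stdio.h>
#include <stdlib.h>

struct item {
	struct item *next;
};

struct pcp_list {
	struct item *head;
	int count;		/* plays the role of pcp->count */
};

/* Free up to 'todo' items; the count is updated as each item is unlinked. */
static void bulk_free(struct pcp_list *pcp, int todo)
{
	while (todo-- && pcp->head) {
		struct item *it = pcp->head;

		pcp->head = it->next;
		pcp->count--;	/* analogous to the pcp->count-- in the loop above */
		free(it);
	}
}

int main(void)
{
	struct pcp_list pcp = { .head = NULL, .count = 0 };

	for (int i = 0; i < 5; i++) {
		struct item *it = malloc(sizeof(*it));

		if (!it)
			return 1;
		it->next = pcp.head;
		pcp.head = it;
		pcp.count++;
	}

	/* Callers no longer touch pcp.count after the call. */
	bulk_free(&pcp, 3);
	printf("count after draining 3: %d\n", pcp.count);	/* 2 */
	bulk_free(&pcp, pcp.count);
	printf("count after full drain: %d\n", pcp.count);	/* 0 */
	return 0;
}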