Subject: Re: [PATCH v4 2/3] mm/free_pcppages_bulk: do not hold lock when picking pages to free
To: Aaron Lu, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Andrew Morton, Huang Ying, Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen, Michal Hocko, Mel Gorman, Matthew Wilcox, David Rientjes
From: Vlastimil Babka
Message-ID: <9cad642d-9fe5-b2c3-456c-279065c32337@suse.cz>
Date: Mon, 12 Mar 2018 15:22:53 +0100
In-Reply-To: <20180301062845.26038-3-aaron.lu@intel.com>
References: <20180301062845.26038-1-aaron.lu@intel.com> <20180301062845.26038-3-aaron.lu@intel.com>
List-ID: linux-kernel@vger.kernel.org

On 03/01/2018 07:28 AM, Aaron Lu wrote:
> When freeing a batch of pages from Per-CPU-Pages (PCP) back to buddy,
> the zone->lock is held and then pages are chosen from PCP's migratetype
> list. There is actually no need to do this 'choose part' under the
> lock: these are PCP pages, the only CPU that can touch them is us, and
> irq is also disabled.
>
> Moving this part outside the lock reduces lock hold time and improves
> performance. Test with will-it-scale/page_fault1 full load:
>
> kernel        Broadwell(2S)  Skylake(2S)    Broadwell(4S)   Skylake(4S)
> v4.16-rc2+    9034215        7971818        13667135        15677465
> this patch    9536374 +5.6%  8314710 +4.3%  14070408 +3.0%  16675866 +6.4%
>
> What the test does is: start $nr_cpu processes, each of which will
> repeatedly do the following for 5 minutes:
> 1 mmap 128M of anonymous space;
> 2 write access to that space;
> 3 munmap.
> The score is the aggregated iteration count.
>
> https://github.com/antonblanchard/will-it-scale/blob/master/tests/page_fault1.c
>
> Acked-by: Mel Gorman
> Signed-off-by: Aaron Lu
> ---
>  mm/page_alloc.c | 39 +++++++++++++++++++++++----------------
>  1 file changed, 23 insertions(+), 16 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index faa33eac1635..dafdcdec9c1f 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1116,12 +1116,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  	int migratetype = 0;
>  	int batch_free = 0;
>  	bool isolated_pageblocks;
> -
> -	spin_lock(&zone->lock);
> -	isolated_pageblocks = has_isolate_pageblock(zone);
> +	struct page *page, *tmp;
> +	LIST_HEAD(head);
>
>  	while (count) {
> -		struct page *page;
>  		struct list_head *list;
>
>  		/*
> @@ -1143,27 +1141,36 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>  			batch_free = count;
>
>  		do {
> -			int mt;	/* migratetype of the to-be-freed page */
> -
>  			page = list_last_entry(list, struct page, lru);
> -			/* must delete as __free_one_page list manipulates */
> +			/* must delete to avoid corrupting pcp list */
>  			list_del(&page->lru);

Well, since
bulkfree_pcp_prepare() doesn't care about page->lru, you could maybe
use list_move_tail() instead of list_del() + list_add_tail()? That
avoids temporarily writing poison values.

Hm, actually, you are reversing the list in the process, because page
is obtained by list_last_entry() and you use list_add_tail(). That
could have unintended performance consequences?

Also, maybe list_cut_position() could be faster than shuffling pages
one by one? I guess not really, because batch_free will generally be
low?

>  			pcp->count--;
>
> -			mt = get_pcppage_migratetype(page);
> -			/* MIGRATE_ISOLATE page should not go to pcplists */
> -			VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
> -			/* Pageblock could have been isolated meanwhile */
> -			if (unlikely(isolated_pageblocks))
> -				mt = get_pageblock_migratetype(page);
> -
>  			if (bulkfree_pcp_prepare(page))
>  				continue;
>
> -			__free_one_page(page, page_to_pfn(page), zone, 0, mt);
> -			trace_mm_page_pcpu_drain(page, 0, mt);
> +			list_add_tail(&page->lru, &head);
>  		} while (--count && --batch_free && !list_empty(list));
>  	}
> +
> +	spin_lock(&zone->lock);
> +	isolated_pageblocks = has_isolate_pageblock(zone);
> +
> +	/*
> +	 * Use safe version since after __free_one_page(),
> +	 * page->lru.next will not point to original list.
> +	 */
> +	list_for_each_entry_safe(page, tmp, &head, lru) {
> +		int mt = get_pcppage_migratetype(page);
> +		/* MIGRATE_ISOLATE page should not go to pcplists */
> +		VM_BUG_ON_PAGE(is_migrate_isolate(mt), page);
> +		/* Pageblock could have been isolated meanwhile */
> +		if (unlikely(isolated_pageblocks))
> +			mt = get_pageblock_migratetype(page);
> +
> +		__free_one_page(page, page_to_pfn(page), zone, 0, mt);
> +		trace_mm_page_pcpu_drain(page, 0, mt);
> +	}
>  	spin_unlock(&zone->lock);
>  }
>