Date: Fri, 9 Mar 2018 13:58:32 -0800
From: Andrew Morton
To: Aaron Lu
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
 Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen, Michal Hocko,
 Vlastimil Babka, Mel Gorman, Matthew Wilcox, David Rientjes
Subject: Re: [PATCH v4 3/3 update] mm/free_pcppages_bulk: prefetch buddy while not holding lock
Message-Id: <20180309135832.988ab6d3d986658d531a79ef@linux-foundation.org>
In-Reply-To: <20180309082431.GB30868@intel.com>
References: <20180301062845.26038-1-aaron.lu@intel.com>
 <20180301062845.26038-4-aaron.lu@intel.com>
 <20180301160950.b561d6b8b561217bad511229@linux-foundation.org>
 <20180302082756.GC6356@intel.com>
 <20180309082431.GB30868@intel.com>

> When a page is freed back to the global pool, its buddy will be checked
> to see if it's possible to do a merge. This requires accessing the
> buddy's page structure, and that access could take a long time if it's
> cache cold.
>
> This patch adds a prefetch of the to-be-freed page's buddy outside of
> zone->lock, in the hope that accessing the buddy's page structure later
> under zone->lock will be faster. Since we *always* do buddy merging and
> check an order-0 page's buddy to try to merge it when it goes into the
> main allocator, the cacheline will always come in, i.e. the prefetched
> data will never be unused.
>
> Normally, the number of to-be-freed pages (i.e. count) equals
> pcp->batch (default 31, with an upper limit of (PAGE_SHIFT * 8) = 96 on
> x86_64), but when the pcp's pages are completely drained it will be
> pcp->count, which has an upper limit of pcp->high. pcp->high, although
> it has a default value of 186 (pcp->batch * 6), can be changed by the
> user through /proc/sys/vm/percpu_pagelist_fraction and has no software
> upper limit, so it could be large, like several thousand. For this
> reason, only the buddies of the last pcp->batch pages are prefetched,
> to avoid excessive prefetching. pcp->batch is used because:
> 1 most often, count == pcp->batch;
> 2 it has an upper limit itself, so we won't prefetch excessively.
>
> Considering the possibly large value of pcp->high, it also makes sense
> to free the last-added page first, for cache hotness reasons. That's
> where the change of list_add_tail() to list_add() comes in, as we will
> free the pages from head to tail one by one.
>
> In the meantime, there are two concerns:
> 1 the prefetch could potentially evict existing cachelines, especially
>   for the L1D cache, since it is not huge;
> 2 there is some additional instruction overhead, namely calculating
>   the buddy pfn twice.
>
> For 1, it's hard to say; the microbenchmark below shows a good result,
> but the actual benefit of this patch will be workload/CPU dependent.
> For 2, since the calculation is an XOR on two local variables, it's
> expected that in many cases the cycles spent will be offset by reduced
> memory latency later. This is especially true for NUMA machines, where
> multiple CPUs are contending on zone->lock and the most time-consuming
> part under zone->lock is the wait for the 'struct page' cachelines of
> the to-be-freed pages and their buddies.
>
> Test with will-it-scale/page_fault1 full load:
>
> kernel       Broadwell(2S)   Skylake(2S)     Broadwell(4S)   Skylake(4S)
> v4.16-rc2+   9034215         7971818         13667135        15677465
> patch2/3     9536374 +5.6%   8314710 +4.3%   14070408 +3.0%  16675866 +6.4%
> this patch   10180856 +6.8%  8506369 +2.3%   14756865 +4.9%  17325324 +3.9%
>
> Note: this patch's improvement percentages are relative to patch2/3.
>
> (Changelog stolen from Dave Hansen and Mel Gorman's comments at
> http://lkml.kernel.org/r/148a42d8-8306-2f2f-7f7c-86bc118f8ccd@intel.com)
>
> Link: http://lkml.kernel.org/r/20180301062845.26038-4-aaron.lu@intel.com
>
> ...
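
For reference, the XOR mentioned in concern 2 above really is the whole
of the buddy calculation - __find_buddy_pfn() in mm/internal.h is, as of
this kernel, essentially just

    static inline unsigned long
    __find_buddy_pfn(unsigned long page_pfn, unsigned int order)
    {
        /* an order-n buddy pfn differs from page_pfn only in bit n */
        return page_pfn ^ (1 << order);
    }

so the repeated calculation costs a couple of ALU operations per page,
which is small next to a cache miss on the buddy's struct page.
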
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1141,6 +1141,9 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>               batch_free = count;
>
>          do {
> +            unsigned long pfn, buddy_pfn;
> +            struct page *buddy;
> +
>              page = list_last_entry(list, struct page, lru);
>              /* must delete to avoid corrupting pcp list */
>              list_del(&page->lru);
> @@ -1149,7 +1152,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>              if (bulkfree_pcp_prepare(page))
>                  continue;
>
> -            list_add_tail(&page->lru, &head);
> +            list_add(&page->lru, &head);

The result here will be that free_pcppages_bulk() frees the pages in
the reverse order?

I don't immediately see a downside to that. In the (distant) past we
had issues when successive alloc_page() calls would return pages in
descending address order - that totally screwed up scatter-gather page
merging. But this is the page-freeing path. Still, something to be
thought about and monitored.

> +
> +            /*
> +             * We are going to put the page back to the global
> +             * pool, prefetch its buddy to speed up later access
> +             * under zone->lock. It is believed the overhead of
> +             * an additional test and calculating buddy_pfn here
> +             * can be offset by reduced memory latency later. To
> +             * avoid excessive prefetching due to large count, only
> +             * prefetch buddy for the last pcp->batch nr of pages.
> +             */
> +            if (count > pcp->batch)
> +                continue;
> +            pfn = page_to_pfn(page);
> +            buddy_pfn = __find_buddy_pfn(pfn, 0);
> +            buddy = page + (buddy_pfn - pfn);
> +            prefetch(buddy);
>          } while (--count && --batch_free && !list_empty(list));

This loop hurts my brain, mainly the handling of `count':

    while (count) {
        do {
            batch_free++;
        } while (list_empty(list));

        /* This is the only non-empty list.  Free them all. */
        if (batch_free == MIGRATE_PCPTYPES)
            batch_free = count;

        do {
            ...
        } while (--count && --batch_free && !list_empty(list));
    }

I guess it kinda makes sense - both loops terminate on count==0. But
still. Can it be clarified?
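
One possibility - untested, and purely a sketch of the control flow
rather than a drop-in replacement - would be to give the inner loop an
explicit budget, so that `count' is visibly decremented exactly once
per freed page:

    while (count > 0) {
        struct list_head *list;
        int batch;

        /*
         * Find the next non-empty pcp list, round-robin across
         * migratetypes.  batch_free grows by one for every empty
         * list we skip, so fuller lists donate larger batches.
         */
        do {
            batch_free++;
            if (++migratetype == MIGRATE_PCPTYPES)
                migratetype = 0;
            list = &pcp->lists[migratetype];
        } while (list_empty(list));

        /* All the other lists are empty: drain this one. */
        if (batch_free == MIGRATE_PCPTYPES)
            batch_free = count;

        /* Never free more pages than the caller asked for. */
        batch = min(batch_free, count);

        while (batch && !list_empty(list)) {
            /* ... unlink and free one page from 'list' ... */
            batch--;
            batch_free--;
            count--;
        }
    }

Same termination conditions as far as I can tell, but the count
arithmetic all sits in one place.
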