Subject: Re: [PATCH v4 3/3 update] mm/free_pcppages_bulk: prefetch buddy while not holding lock
From: Vlastimil Babka
To: Andrew Morton, Aaron Lu
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Dave Hansen, Kemi Wang, Tim Chen, Andi Kleen, Michal Hocko, Mel Gorman, Matthew Wilcox, David Rientjes
Date: Mon, 12 Mar 2018 16:05:47 +0100
Message-ID: <8818156a-f4a9-ac8a-7179-e0b5e4225e38@suse.cz>
References: <20180301062845.26038-1-aaron.lu@intel.com> <20180301062845.26038-4-aaron.lu@intel.com> <20180301160950.b561d6b8b561217bad511229@linux-foundation.org> <20180302082756.GC6356@intel.com> <20180309082431.GB30868@intel.com> <20180309135832.988ab6d3d986658d531a79ef@linux-foundation.org>
In-Reply-To: <20180309135832.988ab6d3d986658d531a79ef@linux-foundation.org>

On 03/09/2018 10:58 PM, Andrew Morton wrote:
>>
>> When a page is freed back to the global pool, its buddy will be checked
>> to see if it's possible to do a merge. This requires accessing the
>> buddy's page structure, and that access could take a long time if it's
>> cache cold.
>>
>> This patch adds a prefetch of the to-be-freed page's buddy outside of
>> zone->lock, in the hope that accessing the buddy's page structure later
>> under zone->lock will be faster. Since we *always* do buddy merging and
>> check an order-0 page's buddy to try to merge it when it goes into the
>> main allocator, the cacheline will always come in, i.e. the prefetched
>> data will never be unused.
>>
>> Normally, the number of to-be-freed pages (i.e. count) equals pcp->batch
>> (default=31, with an upper limit of (PAGE_SHIFT * 8)=96 on x86_64), but
>> when the pcp's pages are all drained it will be pcp->count, which has an
>> upper limit of pcp->high. pcp->high, although it has a default value of
>> 186 (pcp->batch=31 * 6), can be changed by the user through
>> /proc/sys/vm/percpu_pagelist_fraction and has no software upper limit,
>> so it could be large, like several thousand. For this reason, only the
>> buddies of the last pcp->batch pages are prefetched, to avoid excessive
>> prefetching. pcp->batch is used because:
>> 1 most often, count == pcp->batch;
>> 2 it has an upper limit itself, so we won't prefetch excessively.
>>
>> Considering the possibly large value of pcp->high, it also makes sense
>> to free the most recently added page first, for cache hotness reasons.
>> That's where the change from list_add_tail() to list_add() comes in, as
>> we will then free them from head to tail one by one.
>>
>> At the same time, there are two concerns:
>> 1 the prefetch could potentially evict existing cachelines, especially
>>   in the L1D cache, since it is not huge;
>> 2 there is some additional instruction overhead, namely calculating the
>>   buddy pfn twice.
>>
>> For 1, it's hard to say; the microbenchmark below shows a good result,
>> but the actual benefit of this patch will be workload/CPU dependent.
>> For 2, since the calculation is an XOR on two local variables, it's
>> expected that in many cases the cycles spent will be offset by reduced
>> memory latency later. This is especially true for NUMA machines, where
>> multiple CPUs contend on zone->lock and the most time-consuming part
>> under zone->lock is the wait for the 'struct page' cachelines of the
>> to-be-freed pages and their buddies.
>>
>> Test with will-it-scale/page_fault1 full load:
>>
>> kernel        Broadwell(2S)   Skylake(2S)     Broadwell(4S)   Skylake(4S)
>> v4.16-rc2+    9034215         7971818         13667135        15677465
>> patch2/3      9536374 +5.6%   8314710 +4.3%   14070408 +3.0%  16675866 +6.4%
>> this patch    10180856 +6.8%  8506369 +2.3%   14756865 +4.9%  17325324 +3.9%
>> Note: this patch's performance improvement percent is against patch2/3.
>>
>> (Changelog stolen from Dave Hansen and Mel Gorman's comments at
>> http://lkml.kernel.org/r/148a42d8-8306-2f2f-7f7c-86bc118f8ccd@intel.com)
>>
>> Link: http://lkml.kernel.org/r/20180301062845.26038-4-aaron.lu@intel.com
>>
>> ...
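
For context on concern 2 above: the "XOR on two local variables" is the
buddy pfn calculation done by __find_buddy_pfn(). At the time of this
series the helper in mm/internal.h is essentially the snippet below,
which is why recomputing buddy_pfn for the prefetch is expected to be
cheap compared with a cache miss on the buddy's struct page:

	/*
	 * Buddy pfn calculation: the buddy of a block differs from it
	 * only in bit 'order' of the pfn, so a single XOR finds it;
	 * for the order-0 case used here this is just page_pfn ^ 1.
	 */
	static inline unsigned long
	__find_buddy_pfn(unsigned long page_pfn, unsigned int order)
	{
		return page_pfn ^ (1 << order);
	}
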
>>
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1141,6 +1141,9 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>>  			batch_free = count;
>>  
>>  		do {
>> +			unsigned long pfn, buddy_pfn;
>> +			struct page *buddy;
>> +
>>  			page = list_last_entry(list, struct page, lru);
>>  			/* must delete to avoid corrupting pcp list */
>>  			list_del(&page->lru);
>> @@ -1149,7 +1152,23 @@ static void free_pcppages_bulk(struct zone *zone, int count,
>>  			if (bulkfree_pcp_prepare(page))
>>  				continue;
>>  
>> -			list_add_tail(&page->lru, &head);
>> +			list_add(&page->lru, &head);
>
> The result here will be that free_pcppages_bulk() frees the pages in
> the reverse order?

I actually think it restores the order, compared to the previous
version (see my earlier reply).

> I don't immediately see a downside to that. In the (distant) past we
> had issues when successive alloc_page() calls would return pages in
> descending address order - that totally screwed up scatter-gather page
> merging. But this is the page-freeing path. Still, something to be
> thought about and monitored.
>
>> +
>> +			/*
>> +			 * We are going to put the page back to the global
>> +			 * pool, prefetch its buddy to speed up later access
>> +			 * under zone->lock. It is believed the overhead of
>> +			 * an additional test and calculating buddy_pfn here
>> +			 * can be offset by reduced memory latency later. To
>> +			 * avoid excessive prefetching due to large count, only
>> +			 * prefetch buddy for the last pcp->batch nr of pages.
>> +			 */
>> +			if (count > pcp->batch)
>> +				continue;

You could also go to the locked part after pcp->batch pages and then
return back, but maybe let's not complicate it further for corner
cases :)

>> +			pfn = page_to_pfn(page);
>> +			buddy_pfn = __find_buddy_pfn(pfn, 0);
>> +			buddy = page + (buddy_pfn - pfn);
>> +			prefetch(buddy);
>>  		} while (--count && --batch_free && !list_empty(list));
>
> This loop hurts my brain, mainly the handling of `count':
>
> 	while (count) {
> 		do {
> 			batch_free++;
> 		} while (list_empty(list));
>
> 		/* This is the only non-empty list. Free them all. */
> 		if (batch_free == MIGRATE_PCPTYPES)
> 			batch_free = count;
>
> 		do {
> 		} while (--count && --batch_free && !list_empty(list));
> 	}
>
> I guess it kinda makes sense - both loops terminate on count==0. But
> still. Can it be clarified?

Yeah this is rather far from straightforward :(
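
For anyone trying to follow the count/batch_free handling, here is the
same skeleton again with comments spelling out the termination logic.
This is only a commented restatement of the existing structure (loop
bodies elided, so it won't compile as-is), not a proposed change:

	while (count) {		/* outer loop: pages still left to free */
		/*
		 * Round-robin over the MIGRATE_PCPTYPES pcp lists.
		 * batch_free is bumped once per list examined, so when
		 * empty lists get skipped, the non-empty list that is
		 * eventually picked receives a larger batch.
		 */
		do {
			batch_free++;
			/* ... advance migratetype and pick its list ... */
		} while (list_empty(list));

		/* Per the comment in the code, this is the only
		 * non-empty list left: let the inner loop drain it. */
		if (batch_free == MIGRATE_PCPTYPES)
			batch_free = count;

		/*
		 * Free up to batch_free pages from this list. Each
		 * iteration also decrements count, so both this loop
		 * and the outer while (count) stop once count hits 0.
		 */
		do {
			/* ... unlink one page, maybe prefetch its buddy ... */
		} while (--count && --batch_free && !list_empty(list));
	}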