Subject: Re: [PATCH RFC 2/4] mm/page_alloc: place pages to tail in __putback_isolated_page()
To: David Hildenbrand, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-acpi@vger.kernel.org, Andrew Morton, Alexander Duyck, Mel Gorman, Michal Hocko,
 Dave Hansen, Wei Yang, Oscar Salvador, Mike Rapoport, Scott Cheloha, Michael Ellerman
References: <20200916183411.64756-1-david@redhat.com> <20200916183411.64756-3-david@redhat.com>
From: Vlastimil Babka
Message-ID: <6edfc921-eacc-23bd-befa-f947fbcb50ba@suse.cz>
Date: Thu, 24 Sep 2020 12:37:42 +0200
In-Reply-To: <20200916183411.64756-3-david@redhat.com>
List-ID: linux-kernel@vger.kernel.org

On 9/16/20 8:34 PM, David Hildenbrand
wrote:
> __putback_isolated_page() already documents that pages will be placed to
> the tail of the freelist - this is, however, not the case for
> "order >= MAX_ORDER - 2" (see buddy_merge_likely()) - which should be
> the case for all existing users.

I think there should be a sentence here, saying something along the lines
of "Thus, introduce a FOP_TO_TAIL flag to really ensure that pages are
moved to the tail."

> This change affects two users:
> - free page reporting
> - page isolation, when undoing the isolation.
>
> This behavior is desirable for pages that haven't really been touched
> lately, so exactly the two users that don't actually read/write page
> content, but rather move untouched pages.
>
> The new behavior is especially desirable for memory onlining, where we
> allow allocation of newly onlined pages via undo_isolate_page_range()
> in online_pages(). Right now, we always place them at the head of the
> free list, resulting in undesirable behavior: assume we add individual
> memory chunks via add_memory() and online them right away into the
> NORMAL zone. We create a dependency chain of unmovable allocations,
> e.g., via the memmap. The memmap of the next chunk will be placed onto
> previous chunks - if the last block cannot get offlined+removed, all
> dependent ones cannot get offlined+removed. While this can already be
> observed with individual DIMMs, it's more of an issue for virtio-mem
> (and I suspect also ppc DLPAR).
>
> Note: If we observe a degradation due to the changed page isolation
> behavior (which I doubt), we can always make this configurable by the
> instance triggering undo of isolation (e.g., alloc_contig_range(),
> memory onlining, memory offlining).
>
> Cc: Andrew Morton
> Cc: Alexander Duyck
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Dave Hansen
> Cc: Vlastimil Babka
> Cc: Wei Yang
> Cc: Oscar Salvador
> Cc: Mike Rapoport
> Cc: Scott Cheloha
> Cc: Michael Ellerman
> Signed-off-by: David Hildenbrand
> ---
>  mm/page_alloc.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 91cefb8157dd..bba9a0f60c70 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -89,6 +89,12 @@ typedef int __bitwise fop_t;
>   */
>  #define FOP_SKIP_REPORT_NOTIFY	((__force fop_t)BIT(0))
>
> +/*
> + * Place the freed page to the tail of the freelist after buddy merging. Will
> + * get ignored with page shuffling enabled.
> + */
> +#define FOP_TO_TAIL		((__force fop_t)BIT(1))
> +
>  /* prevent >1 _updater_ of zone percpu pageset ->high and ->batch fields */
>  static DEFINE_MUTEX(pcp_batch_high_lock);
>  #define MIN_PERCPU_PAGELIST_FRACTION	(8)
> @@ -1040,6 +1046,8 @@ static inline void __free_one_page(struct page *page, unsigned long pfn,
>
>  	if (is_shuffle_order(order))
>  		to_tail = shuffle_pick_tail();
> +	else if (fop_flags & FOP_TO_TAIL)
> +		to_tail = true;

Should we really let a random shuffling decision take priority over an
explicit FOP_TO_TAIL request? Wei Yang mentioned that there's a call to
shuffle_zone() anyway to process freshly added memory, so we wouldn't
need to shuffle during the process of addition itself? That might help
with your goal of reducing dependencies even on systems that do have
shuffling enabled.

Thanks,
Vlastimil

>  	else
>  		to_tail = buddy_merge_likely(pfn, buddy_pfn, page, order);
>
> @@ -3289,7 +3297,7 @@ void __putback_isolated_page(struct page *page, unsigned int order, int mt)
>
>  	/* Return isolated page to tail of freelist. */
>  	__free_one_page(page, page_to_pfn(page), zone, order, mt,
> -			FOP_SKIP_REPORT_NOTIFY);
> +			FOP_SKIP_REPORT_NOTIFY | FOP_TO_TAIL);
>  }
>
>  /*