Subject: Re: [RFC PATCH 4/4] mm: Add merge page notifier
From: Alexander Duyck
To: Aaron Lu, Alexander Duyck, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
    kvm@vger.kernel.org
Cc: rkrcmar@redhat.com, x86@kernel.org, mingo@redhat.com, bp@alien8.de,
    hpa@zytor.com, pbonzini@redhat.com, tglx@linutronix.de,
    akpm@linux-foundation.org
Date: Mon, 11 Feb 2019 07:58:28 -0800
In-Reply-To: <5e6d22b2-0f14-43eb-846b-a940e629c02b@gmail.com>
References: <20190204181118.12095.38300.stgit@localhost.localdomain>
    <20190204181558.12095.83484.stgit@localhost.localdomain>
    <5e6d22b2-0f14-43eb-846b-a940e629c02b@gmail.com>

On Mon, 2019-02-11 at 14:40 +0800, Aaron Lu wrote:
> On 2019/2/5 2:15, Alexander Duyck wrote:
> > From: Alexander Duyck
> > 
> > Because the implementation was limiting itself to only providing hints on
> > pages huge TLB order sized or larger we introduced the possibility for free
> > pages to slip past us because they are freed as something less than
> > huge TLB in size and aggregated with buddies later.
> > 
> > To address that I am adding a new call arch_merge_page which is called
> > after __free_one_page has merged a pair of pages to create a higher order
> > page. By doing this I am able to fill the gap and provide full coverage for
> > all of the pages huge TLB order or larger.
> > 
> > Signed-off-by: Alexander Duyck
> > ---
> >  arch/x86/include/asm/page.h |   12 ++++++++++++
> >  arch/x86/kernel/kvm.c       |   28 ++++++++++++++++++++++++++++
> >  include/linux/gfp.h         |    4 ++++
> >  mm/page_alloc.c             |    2 ++
> >  4 files changed, 46 insertions(+)
> > 
> > diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
> > index 4487ad7a3385..9540a97c9997 100644
> > --- a/arch/x86/include/asm/page.h
> > +++ b/arch/x86/include/asm/page.h
> > @@ -29,6 +29,18 @@ static inline void arch_free_page(struct page *page, unsigned int order)
> >          if (static_branch_unlikely(&pv_free_page_hint_enabled))
> >                  __arch_free_page(page, order);
> >  }
> > +
> > +struct zone;
> > +
> > +#define HAVE_ARCH_MERGE_PAGE
> > +void __arch_merge_page(struct zone *zone, struct page *page,
> > +                       unsigned int order);
> > +static inline void arch_merge_page(struct zone *zone, struct page *page,
> > +                                   unsigned int order)
> > +{
> > +        if (static_branch_unlikely(&pv_free_page_hint_enabled))
> > +                __arch_merge_page(zone, page, order);
> > +}
> >  #endif
> > 
> >  #include
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 09c91641c36c..957bb4f427bb 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -785,6 +785,34 @@ void __arch_free_page(struct page *page, unsigned int order)
> >                         PAGE_SIZE << order);
> >  }
> > 
> > +void __arch_merge_page(struct zone *zone, struct page *page,
> > +                       unsigned int order)
> > +{
> > +        /*
> > +         * The merging logic has merged a set of buddies up to the
> > +         * KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER. Since that is the case, take
> > +         * advantage of this moment to notify the hypervisor of the free
> > +         * memory.
> > +         */
> > +        if (order != KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER)
> > +                return;
> > +
> > +        /*
> > +         * Drop zone lock while processing the hypercall. This
> > +         * should be safe as the page has not yet been added
> > +         * to the buddy list as of yet and all the pages that
> > +         * were merged have had their buddy/guard flags cleared
> > +         * and their order reset to 0.
> > +         */
> > +        spin_unlock(&zone->lock);
> > +
> > +        kvm_hypercall2(KVM_HC_UNUSED_PAGE_HINT, page_to_phys(page),
> > +                       PAGE_SIZE << order);
> > +
> > +        /* reacquire lock and resume freeing memory */
> > +        spin_lock(&zone->lock);
> > +}
> > +
> >  #ifdef CONFIG_PARAVIRT_SPINLOCKS
> > 
> >  /* Kick a cpu by its apicid. Used to wake up a halted vcpu */
> > diff --git a/include/linux/gfp.h b/include/linux/gfp.h
> > index fdab7de7490d..4746d5560193 100644
> > --- a/include/linux/gfp.h
> > +++ b/include/linux/gfp.h
> > @@ -459,6 +459,10 @@ static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
> >  #ifndef HAVE_ARCH_FREE_PAGE
> >  static inline void arch_free_page(struct page *page, int order) { }
> >  #endif
> > +#ifndef HAVE_ARCH_MERGE_PAGE
> > +static inline void
> > +arch_merge_page(struct zone *zone, struct page *page, int order) { }
> > +#endif
> >  #ifndef HAVE_ARCH_ALLOC_PAGE
> >  static inline void arch_alloc_page(struct page *page, int order) { }
> >  #endif
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index c954f8c1fbc4..7a1309b0b7c5 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -913,6 +913,8 @@ static inline void __free_one_page(struct page *page,
> >                  page = page + (combined_pfn - pfn);
> >                  pfn = combined_pfn;
> >                  order++;
> > +
> > +                arch_merge_page(zone, page, order);
> 
> Not a proper place AFAICS.
> 
> Assume we have an order-8 page being sent here for merge and its order-8
> buddy is also free, then order++ became 9 and arch_merge_page() will do
> the hint to host on this page as an order-9 page, no problem so far.
> Then the next round, assume the now order-9 page's buddy is also free,
> order++ will become 10 and arch_merge_page() will again hint to host on
> this page as an order-10 page. The first hint to host became redundant.

Actually the problem is even worse the other way around. My concern was
pages being incrementally freed.

With this setup I can catch when we have crossed the threshold from
order 8 to 9, and specifically for that case provide the hint. This
allows me to ignore orders above and below 9.

If I move the hint to the spot after the merging I have no way of
telling if I have hinted the page as a lower order or not. As such I
will hint if it is merged up to order 9 or greater. So, for example, if
it merges up to order 9 and stops there, done_merging will report an
order-9 page; then, if another page is freed and merged with this one
up to order 10, you would be hinting on order 10. By placing the
function here I can guarantee that no more than one hint is provided
per 2MB page.

> I think the proper place is after the done_merging tag.
> 
> BTW, with arch_merge_page() at the proper place, I don't think patch3/4
> is necessary - any freed page will go through merge anyway, we won't
> lose any hint opportunity. Or do I miss anything?

You can refer to my comment above. What I want to avoid is hinting a
page multiple times when we aren't using MAX_ORDER - 1 as the limit. By
placing the call where I did, I avoid issuing hints on orders greater
than our target hint order. This way I only perform one hint per 2MB
page; otherwise I would be performing multiple hints per 2MB page, as
every order above the target would also trigger hints.
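
To make the counting argument concrete, below is a rough user-space
model of the two placements. This is only an illustrative sketch, not
kernel code: HINT_ORDER stands in for KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER,
and free_chunk()/range_free() are made-up stand-ins for the real buddy
bookkeeping. The model frees a 4MB region in order-8 pieces so that
every hint has to come from a merge, and it counts how much memory each
placement would report to the host.

/*
 * Toy model: free a 4MB region as four order-8 chunks and compare how
 * much memory gets hinted with the hook inside the merge loop (this
 * patch) versus after the done_merging label (the alternative above).
 */
#include <stdbool.h>
#include <stdio.h>

#define HINT_ORDER      9       /* stand-in for KVM_PV_UNUSED_PAGE_HINT_MIN_ORDER */
#define FREE_ORDER      8       /* order the model frees memory at */
#define NCHUNKS         4       /* four order-8 chunks == one order-10 region */

static bool chunk_free[NCHUNKS];        /* one flag per order-8 chunk */
static long kb_hinted_in_loop, kb_hinted_done_merging;

static bool range_free(int first, int n)
{
        for (int i = first; i < first + n; i++)
                if (!chunk_free[i])
                        return false;
        return true;
}

/* Free one order-8 chunk and walk a simplified buddy merge loop. */
static void free_chunk(int idx)
{
        int order = FREE_ORDER;
        int first = idx;

        chunk_free[idx] = true;

        while ((1 << (order - FREE_ORDER)) < NCHUNKS) {
                int n = 1 << (order - FREE_ORDER);
                int buddy = first ^ n;

                if (!range_free(buddy, n))
                        break;          /* buddy busy -> done merging */

                first = first < buddy ? first : buddy;
                order++;

                /*
                 * Placement in this patch: only the pass that reaches
                 * exactly HINT_ORDER results in a hypercall.
                 */
                if (order == HINT_ORDER)
                        kb_hinted_in_loop += 4L << order;       /* 4KB pages */
        }

        /* Alternative placement: hint the final order after done_merging. */
        if (order >= HINT_ORDER)
                kb_hinted_done_merging += 4L << order;
}

int main(void)
{
        for (int i = 0; i < NCHUNKS; i++)
                free_chunk(i);

        printf("hint inside merge loop : %ld KB hinted\n", kb_hinted_in_loop);
        printf("hint after done_merging: %ld KB hinted\n", kb_hinted_done_merging);
        return 0;
}

For this free pattern the in-loop placement reports 4096 KB hinted,
i.e. each 2MB page exactly once, while the done_merging placement
reports 6144 KB, because the final order-10 merge re-hints the 2MB that
was already reported after the first merge.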