Date: Tue, 8 Aug 2017 05:12:20 -0700
From: Matthew Wilcox
To: "Huang, Ying"
Cc: Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Andrea Arcangeli, "Kirill A. Shutemov", Nadia Yvette Chambers,
	Michal Hocko, Jan Kara, Matthew Wilcox, Hugh Dickins,
	Minchan Kim, Shaohua Li
Subject: Re: [PATCH -mm] mm: Clear to access sub-page last when clearing huge page
Message-ID: <20170808121220.GA31390@bombadil.infradead.org>
In-Reply-To: <20170807072131.8343-1-ying.huang@intel.com>

On Mon, Aug 07, 2017 at 03:21:31PM +0800, Huang, Ying wrote:
> @@ -2509,7 +2509,8 @@ enum mf_action_page_type {
>  #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLBFS)
>  extern void clear_huge_page(struct page *page,
>  			    unsigned long addr,
> -			    unsigned int pages_per_huge_page);
> +			    unsigned int pages_per_huge_page,
> +			    unsigned long addr_hint);

I don't really like adding the extra argument to this function ...
> +++ b/mm/huge_memory.c
> @@ -549,7 +549,8 @@ static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct mem_cgroup *memcg;
>  	pgtable_t pgtable;
> -	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
> +	unsigned long address = vmf->address;
> +	unsigned long haddr = address & HPAGE_PMD_MASK;
> 
>  	VM_BUG_ON_PAGE(!PageCompound(page), page);
> 
> @@ -566,7 +567,7 @@ static int __do_huge_pmd_anonymous_page(struct vm_fault *vmf, struct page *page,
>  		return VM_FAULT_OOM;
>  	}
> 
> -	clear_huge_page(page, haddr, HPAGE_PMD_NR);
> +	clear_huge_page(page, haddr, HPAGE_PMD_NR, address);
>  	/*
>  	 * The memory barrier inside __SetPageUptodate makes sure that
>  	 * clear_huge_page writes become visible before the set_pmd_at()

How about calling:

-	clear_huge_page(page, haddr, HPAGE_PMD_NR);
+	clear_huge_page(page, address, HPAGE_PMD_NR);

> +++ b/mm/memory.c
> @@ -4363,10 +4363,10 @@ static void clear_gigantic_page(struct page *page,
>  		clear_user_highpage(p, addr + i * PAGE_SIZE);
>  	}
>  }
> -void clear_huge_page(struct page *page,
> -		     unsigned long addr, unsigned int pages_per_huge_page)
> +void clear_huge_page(struct page *page, unsigned long addr,
> +		     unsigned int pages_per_huge_page, unsigned long addr_hint)
>  {
> -	int i;
> +	int i, n, base, l;
> 
>  	if (unlikely(pages_per_huge_page > MAX_ORDER_NR_PAGES)) {
>  		clear_gigantic_page(page, addr, pages_per_huge_page);

... and doing this:

 void clear_huge_page(struct page *page,
-		unsigned long addr, unsigned int pages_per_huge_page)
+		unsigned long addr_hint, unsigned int pages_per_huge_page)
 {
-	int i;
+	int i, n, base, l;
+	unsigned long addr = addr_hint &
+		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);

> @@ -4374,9 +4374,31 @@ void clear_huge_page(struct page *page,
>  	}
> 
>  	might_sleep();
> -	for (i = 0; i < pages_per_huge_page; i++) {
> +	VM_BUG_ON(clamp(addr_hint, addr, addr +
> +		  (pages_per_huge_page << PAGE_SHIFT)) != addr_hint);

... then you can ditch this check