Date: Fri, 3 Mar 2017 10:18:51 -0500
From: Johannes Weiner
To: Minchan Kim
Cc: linux-kernel@vger.kernel.org, shli@fb.com, hillf.zj@alibaba-inc.com,
	hughd@google.com, mgorman@techsingularity.net, mhocko@suse.com,
	riel@redhat.com, mm-commits@vger.kernel.org
Subject: Re: + mm-reclaim-madv_free-pages.patch added to -mm tree
Message-ID: <20170303151851.GA16835@cmpxchg.org>
References: <58b616a6.hCl1D/BVn0fPDi+K%akpm@linux-foundation.org>
 <20170303025237.GB3503@bbox>
In-Reply-To: <20170303025237.GB3503@bbox>

On Fri, Mar 03, 2017 at 11:52:37AM +0900, Minchan Kim wrote:
> On Tue, Feb 28, 2017 at 04:32:38PM -0800, akpm@linux-foundation.org wrote:
> >
> > The patch titled
> >      Subject: mm: reclaim MADV_FREE pages
> > has been added to the -mm tree.
> > Its filename is
> >      mm-reclaim-madv_free-pages.patch
> >
> > This patch should soon appear at
> >     http://ozlabs.org/~akpm/mmots/broken-out/mm-reclaim-madv_free-pages.patch
> > and later at
> >     http://ozlabs.org/~akpm/mmotm/broken-out/mm-reclaim-madv_free-pages.patch
> >
> > Before you just go and hit "reply", please:
> >    a) Consider who else should be cc'ed
> >    b) Prefer to cc a suitable mailing list as well
> >    c) Ideally: find the original patch on the mailing list and do a
> >       reply-to-all to that, adding suitable additional cc's
> >
> > *** Remember to use Documentation/SubmitChecklist when testing your code ***
> >
> > The -mm tree is included into linux-next and is updated
> > there every 3-4 working days
> >
> > ------------------------------------------------------
> > From: Shaohua Li
> > Subject: mm: reclaim MADV_FREE pages
> >
> > When memory pressure is high, we free MADV_FREE pages.  If the pages are
> > not dirty in the pte, they can be freed immediately.  Otherwise we can't
> > reclaim them.  We put such pages back on the anonymous LRU list (by
> > setting the SwapBacked flag) and they will be reclaimed through the
> > normal swapout path.
> >
> > We use the normal page reclaim policy.  Since MADV_FREE pages are put on
> > the inactive file list, such pages and inactive file pages are reclaimed
> > according to their age.  This is expected, because we don't want to
> > reclaim too many MADV_FREE pages before used-once pages.
> >
> > Based on Minchan's original patch.
> >
> > Link: http://lkml.kernel.org/r/14b8eb1d3f6bf6cc492833f183ac8c304e560484.1487965799.git.shli@fb.com
> > Signed-off-by: Shaohua Li
> > Acked-by: Minchan Kim
> > Acked-by: Michal Hocko
> > Acked-by: Johannes Weiner
> > Acked-by: Hillf Danton
> > Cc: Hugh Dickins
> > Cc: Rik van Riel
> > Cc: Mel Gorman
> > Signed-off-by: Andrew Morton
> > ---
> >
> > < snip >
> >
> > @@ -1419,11 +1413,21 @@ static int try_to_unmap_one(struct page
> >  			VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
> >  				page);
> >
> > -			if (!PageDirty(page)) {
> > +			/*
> > +			 * swapin page could be clean, it has data stored in
> > +			 * swap. We can't silently discard it without setting
> > +			 * swap entry in the page table.
> > +			 */
> > +			if (!PageDirty(page) && !PageSwapCache(page)) {
> >  				/* It's a freeable page by MADV_FREE */
> >  				dec_mm_counter(mm, MM_ANONPAGES);
> > -				rp->lazyfreed++;
> >  				goto discard;
> > +			} else if (!PageSwapBacked(page)) {
> > +				/* dirty MADV_FREE page */
> > +				set_pte_at(mm, address, pvmw.pte, pteval);
> > +				ret = SWAP_DIRTY;
> > +				page_vma_mapped_walk_done(&pvmw);
> > +				break;
> >  			}
>
> There is no point in complicating this logic for clean swapin pages.
>
> Andrew,
> Could you fold the patch below into mm-reclaim-madv_free-pages.patch
> if others are not against it?
>
> Thanks.
>
> From 0c28f6560fbc4e65da4f4a8cc4664ab9f7b11cf3 Mon Sep 17 00:00:00 2001
> From: Minchan Kim
> Date: Fri, 3 Mar 2017 11:42:52 +0900
> Subject: [PATCH] mm: clean up lazyfree page handling
>
> We can make this simpler to understand, with no need to be aware of
> clean swapin pages: just clean up the lazyfree page handling in
> try_to_unmap_one.
>
> Signed-off-by: Minchan Kim

Agreed, this is a little easier to follow.
Acked-by: Johannes Weiner

> ---
>  mm/rmap.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index bb45712..f7eab40 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1413,17 +1413,17 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
>  			VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
>  				page);

Since you're removing the PageSwapCache() check, and we're now assuming
that a !swapbacked page is never in the swapcache, can you change this
to check PageSwapBacked(page) != PageSwapCache(page)?

Better yet, turn it into a warning and return SWAP_FAIL.