Subject: Re: [PATCH 1/5] mm/vmscan: put the redirtied MADV_FREE pages back to anonymous LRU list
From: Miaohe Lin
To: Michal Hocko
Date: Tue, 13 Jul 2021 21:13:51 +0800
References: <20210710100329.49174-1-linmiaohe@huawei.com>
 <20210710100329.49174-2-linmiaohe@huawei.com>
 <9409189e-44f7-2608-68af-851629b6d453@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 2021/7/13 17:30, Michal Hocko wrote:
> On Mon 12-07-21 19:03:39, Miaohe Lin wrote:
>> On 2021/7/12 15:22, Michal Hocko wrote:
>>> On Sat 10-07-21 18:03:25, Miaohe Lin wrote:
>>>> If MADV_FREE pages are redirtied before they can be reclaimed, put the
>>>> pages back on the anonymous LRU list by setting the SwapBacked flag, so
>>>> that they will be reclaimed via the normal swapout path. Otherwise
>>>> MADV_FREE pages won't be reclaimed as expected.
>>>
>>> Could you describe the problem you are trying to address? What does it
>>> mean that pages won't be reclaimed as expected?
>>>
>>
>> In fact, this is not a bug and is harmless.
>
> The Fixes tag is then misleading, and the changelog should be clearer
> about this as well.

Sure.

>
>> But it looks buggy, as it doesn't perform the expected operations from
>> the code's point of view. Lazyfree (MADV_FREE) pages are clean anonymous
>> pages. They have the SwapBacked flag cleared to distinguish them from
>> normal anonymous pages.
>
> yes.
>
>> When MADV_FREE pages are redirtied before they can be reclaimed, they
>> should be put back on the anonymous LRU list by setting the SwapBacked
>> flag, so that they will be reclaimed via the normal swapout path.
>
> Agreed. But the question is why this needs explicit handling here when
> we already handle this case when trying to unmap the page.

This made me think further. It seems even the page_ref_freeze call is
guaranteed to succeed, as no one can grab the page refcount after the page
has been successfully unmapped. Does the change below make sense to you?
Many thanks.

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6e26b3c93242..c31925320b33 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1624,15 +1624,11 @@ static unsigned int shrink_page_list(struct list_head *page_list,
 		}
 
 		if (PageAnon(page) && !PageSwapBacked(page)) {
-			/* follow __remove_mapping for reference */
-			if (!page_ref_freeze(page, 1))
-				goto keep_locked;
-			if (PageDirty(page)) {
-				SetPageSwapBacked(page);
-				page_ref_unfreeze(page, 1);
-				goto keep_locked;
-			}
-
+			/*
+			 * No one can grab the page refcnt or redirty the page
+			 * after the page is successfully unmapped.
+			 */
+			WARN_ON_ONCE(!page_ref_freeze(page, 1));
 			count_vm_event(PGLAZYFREED);
 			count_memcg_page_event(page, PGLAZYFREED);
 		} else if (!mapping || !__remove_mapping(mapping, page, true,

> Please make sure to document the behavior you are observing and why it
> is not desirable.
>
>> Many thanks for the review and reply.
>>
>>> Also, why is SetPageSwapBacked in shrink_page_list insufficient?
>
> Sorry, I meant the try_to_unmap path here.
>
>>>> Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
>>>> Signed-off-by: Miaohe Lin
>>>> ---
>>>>  mm/vmscan.c | 1 +
>>>>  1 file changed, 1 insertion(+)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index a7602f71ec04..6483fe0e2065 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -1628,6 +1628,7 @@ static unsigned int shrink_page_list(struct list_head *page_list,
>>>>  		if (!page_ref_freeze(page, 1))
>>>>  			goto keep_locked;
>>>>  		if (PageDirty(page)) {
>>>> +			SetPageSwapBacked(page);
>>>>  			page_ref_unfreeze(page, 1);
>>>>  			goto keep_locked;
>>>>  		}
>>>> --
>>>> 2.23.0
>>>
>
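For context, a minimal userspace sketch of the lazyfree lifecycle discussed
above (illustrative only; it assumes Linux 4.5+ and a libc that exposes
MADV_FREE, and the mapping size and names are arbitrary): madvise(MADV_FREE)
clears the swap-backed state of the anonymous pages, and a later write to
the range is what "redirties" a page so that reclaim must treat it as a
normal anonymous page again.

/*
 * Minimal sketch of the MADV_FREE lifecycle (illustrative only).
 * Assumes Linux 4.5+ and a libc that defines MADV_FREE.
 */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16 * 4096;
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Touch the pages: normal (swap-backed) anonymous memory. */
	memset(p, 0xaa, len);

	/*
	 * Mark the range lazily freeable: the kernel clears the pages'
	 * swap-backed state and may simply discard them under memory
	 * pressure instead of swapping them out.
	 */
	if (madvise(p, len, MADV_FREE))
		perror("madvise(MADV_FREE)");

	/*
	 * Redirty one page before reclaim gets to it. From now on this
	 * page must be treated as normal anonymous memory again, which
	 * is the SetPageSwapBacked case discussed in the thread.
	 */
	p[0] = 1;

	munmap(p, len);
	return 0;
}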